In Bonn, where the Bundesnetzagentur has long been associated with spectrum auctions, telecom oversight, and the invisible plumbing of a digitized economy, a different kind of power is now being prepared. Not the power to connect signals, but the power to judge systems that increasingly shape the chances of citizens before a human being ever enters the room. A chatbot that misleads a job applicant. A scoring tool that helps decide whether a family can access credit. A biometric system that sits too close to policing, too close to the border, too close to the old European fear that administration can become machinery.
This month, Germany’s federal government advanced the draft of its AI Market Surveillance and Innovation Promotion Act — the KI-MIG — the national law meant to implement the supervisory architecture around the EU AI Act. On paper, this may sound technical: agency competences, complaint routes, sandboxes, sanctions, coordination centers. In practice, it is one of the most important questions in European AI governance: when the law says that dignity, safety, and fundamental rights must be protected, who will actually do the protecting?
The answer Germany is proposing is revealing. It is not building a wholly new AI ministry. It is not handing the field entirely to privacy regulators. Instead, it is constructing a hybrid system around an existing federal authority — the Bundesnetzagentur — while leaving important sector-specific powers with specialized regulators and carving out exceptions for certain areas, including financial supervision and parts of the media sphere. This is classic German statecraft: institutional layering rather than institutional rupture. But in the age of AI, layering is not enough on its own. The deeper question is whether this architecture can keep the promise embedded in Europe's legal tradition: that power affecting the person must remain contestable, reviewable, and bound by law.
Context: From Brussels Text to German Administration
The EU AI Act was never going to live or die by its recitals alone. Brussels can define prohibited practices, classify high-risk systems, and articulate obligations for providers and deployers. But enforcement is always local before it becomes European. Someone must receive complaints. Someone must interpret edge cases. Someone must decide whether a model used in a hiring pipeline, a public service portal, or a law-enforcement workflow falls within a rule that was politically agreed in one setting and operationalized in another.
That is the passage Germany is now entering. According to reporting and legal analyses published in recent weeks, the draft KI-MIG designates the Bundesnetzagentur as the default market surveillance authority, the national single point of contact for the EU AI Office, and the central complaints office. Within that structure, Germany would also create a coordination and competence center — KoKIVO — to concentrate interpretive expertise. For some especially sensitive high-risk uses, including systems linked to law enforcement, migration, asylum, border management, justice, and democratic processes, an independent internal chamber is meant to provide an extra layer of separation.
There is logic here. Germany does not have infinite numbers of AI specialists sitting idle in public service. It has federal and Länder competences that already overlap awkwardly in digital governance. It has regulators with deep knowledge of products, finance, cybersecurity, media, and administrative law — but not necessarily of machine-learning systems as such. Centralization promises coherence. Sectoral specialization promises competence. The draft tries to have both.
It also promises innovation support rather than mere prohibition. The Bundesnetzagentur is expected to operate at least one regulatory sandbox, with priority access for SMEs, start-ups, and research institutions. That detail matters. Germany’s Mittelstand cannot absorb AI compliance as if it were a rounding error. If the AI Act is to avoid becoming another regime that entrenches those with the largest legal departments, then implementation has to do more than threaten sanctions. It has to lower the cost of understanding the rules before a small company discovers them in the form of an enforcement notice.
Analysis: The State Is Finally Choosing a Face
For all the talk about “AI regulation,” Europe’s real problem has never been the absence of principles. It has been the institutional translation of those principles into enforceable practice. The KI-MIG matters because it gives German AI oversight a face, an address, and eventually a file number. That may sound banal. It is not. In a Rechtsstaat, rights are not protected by aspiration alone. They are protected when a person knows where to bring a complaint, which authority must answer, what procedure governs the answer, and how that answer can be challenged.
In this sense, Germany’s draft is stronger than a looser, more rhetorical implementation would have been. A single point of contact reduces the danger that AI oversight dissolves into bureaucratic hide-and-seek. A centralized complaints office acknowledges a reality citizens already experience: the harms of AI are often obvious to the affected person but institutionally ambiguous to the state. The applicant denied, the worker scored, the traveler flagged, the student profiled — all may suspect that an automated system played a role without knowing which regulator has jurisdiction. If the state’s answer is “please determine the correct authority yourself,” then the formal existence of rights becomes a procedural illusion.
And yet there is also reason for caution. Centralization is a governance tool, not a moral guarantee. The Bundesnetzagentur is experienced, technically serious, and accustomed to complex markets. But AI oversight is not just a market-order question. It is also a fundamental-rights question. Systems governed under the AI Act do not merely sell products; they shape opportunities, classifications, access, suspicion, and exclusion. A state can become highly efficient at supervising markets while remaining too thinly equipped to recognize how quickly market logics bleed into civic life.
This is why the independent chamber proposed for sensitive use cases deserves close attention. Germany appears to understand that certain AI systems — especially those touching law enforcement, migration, and democratic processes — cannot be treated as ordinary administrative objects. They require distance from routine executive incentives. They require heightened scrutiny because the history of Europe teaches, with unusual severity, that when state systems begin sorting human beings at scale, technical functionality is never the only relevant measure.
There is another tension inside the bill as well: promotion and surveillance are being housed in the same overall architecture. The law is not only a market surveillance act; it is also an innovation promotion act. This is politically understandable. Berlin wants to reassure firms that implementation will not become an anti-industrial project. It wants to show that administrative capacity can support adoption as well as discipline abuse. But there is always a constitutional unease when the same structure is asked to accelerate deployment and police harm. The goals are not mutually exclusive, but they are not identical either. The risk is subtle: guidance can become cheerleading; sandbox culture can drift into regulatory intimacy; the desire to keep Germany competitive can soften the state’s willingness to say no.
That is where the deeper European principle must re-enter the room: Menschenwürde. Human dignity is not merely another factor to be balanced against productivity. In the German constitutional tradition, it comes first because the person must not become raw material for administrative optimization. The AI Act often speaks in the language of risk management, conformity assessment, and documentation. Necessary language, yes. But dignity is not reducible to risk scoring. A person’s encounter with an opaque automated system can be lawful on paper and still degrading in practice if there is no meaningful explanation, no avenue of challenge, no accountable human institution standing behind the outcome.
This is why Germany’s implementation moment matters beyond Germany. The country is effectively testing whether Europe’s first comprehensive AI law can be domesticated into a form of administration that remains recognizably democratic. Not merely digitized. Not merely efficient. Democratic. That means contestability over convenience. Traceability over abstraction. Administrative humility over technological inevitability.
House Reflection: Europe’s AI Problem Is Becoming an Administrative Problem
There is a temptation, especially outside Europe, to treat the AI Act as a grand symbolic gesture — a regulatory monument built in Brussels while innovation happens elsewhere. That reading misses something essential. Europe’s wager has never been that law alone will make technology good. Its wager is that power without legal structure becomes arbitrary faster than its champions admit. The German draft shows the wager entering its hardest phase: not proclamation, but institution-building.
And institution-building is where values become visible. Which harms receive dedicated scrutiny? Which offices are funded? Which complaint channels are legible to ordinary people? Which small firms receive help early, and which are left to discover the rules through fear? Which sensitive sectors are recognized as constitutionally exceptional? These are not secondary questions. They are the practical body of the ethical project.
For the House of 7, this is the deeper lesson. AI governance is often described as a debate between innovation and safety. In Europe, that is too shallow. The more precise debate is between systems that remain answerable to the person and systems that dissolve responsibility across code, vendors, agencies, and procurement chains until no one can say, with honesty, who is accountable. Germany’s KI-MIG is an attempt to resist that dissolution by assigning competence before the August 2026 deadlines arrive in full force.
Whether it succeeds will depend less on statutory elegance than on administrative courage. The state must be willing to investigate not only obvious abuses but also normalized convenience. It must be willing to tell public authorities and private actors alike that automation does not erase the duty to justify. It must be willing to remember that “innovation-friendly” is not a synonym for “frictionless,” especially when the friction being removed is the human right to ask: on what basis was I judged?
Closing Question
Germany is now giving the EU AI Act a national machinery. The question is whether that machinery will function as a shield for the person or simply as a more orderly way of processing them. As Europe moves from AI principles to AI administration, can the state build institutions that do not merely supervise systems — but keep human dignity visibly, procedurally, and enforceably at the center of the file?