The Carbon Emission & Energy Emission Tool (CEET) Case: AI Classification in Practice

  • Writer: Gabriele Modica
  • 5 days ago
  • 7 min read


Section 1: EU AI Act Foundation, Definition, and Classification


1. EU AI Act Foundation

The EU Artificial Intelligence Act is the world's first comprehensive legal framework for regulating AI. It entered into force on 1 August 2024, and most of its obligations apply from 2 August 2026. The Act affects any organization deploying AI systems in Europe or serving European customers, and for these organizations compliance isn't optional. The highest penalties, reserved for Prohibited Practices, reach €35 million or 7% of global annual turnover. Deploying high-risk AI systems without the proper compliance measures (risk management, human oversight, documentation) triggers fines of up to €15 million or 3% of global turnover. While green-light systems have minimal requirements, systematic non-compliance with basic transparency obligations can still trigger substantial penalties: up to €7.5 million or 1.5% of turnover for violations such as incomplete documentation, false information, or failure to inform users that they are interacting with AI.

Penalties scale with company revenue, meaning large organizations face potentially billions in fines. For startups, percentage-based fines ensure proportionate consequences while maintaining a deterrent effect across all market participants.
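To make the stakes concrete, here is a minimal Python sketch of the penalty tiers described above, assuming the "whichever is higher" rule the Act applies to most undertakings. The figures mirror this article's summary and are illustrative, not legal advice.

```python
# Illustrative sketch of the EU AI Act penalty tiers described above.
# The Act caps fines at a fixed amount or a percentage of global annual
# turnover, whichever is higher, so exposure scales with revenue.

PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),       # €35M or 7%
    "high_risk_noncompliance": (15_000_000, 0.03),   # €15M or 3%
    "transparency_violation": (7_500_000, 0.015),    # €7.5M or 1.5%
}

def max_exposure(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation type."""
    fixed_cap, turnover_share = PENALTY_TIERS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A company with €2B global turnover facing a prohibited-practice fine:
print(f"€{max_exposure('prohibited_practice', 2_000_000_000):,.0f}")  # €140,000,000
```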


2. AI System Definition & Identification

The EU AI Act identifies five critical indicators that distinguish AI systems from traditional software:

1) Adaptability: systems that can modify behavior based on new data or experiences after deployment.

2) Autonomy: capability to perform tasks with minimal human intervention during operation.

3) Complexity: ability to handle multifaceted problems, large datasets, or intricate decision-making processes.

4) Unpredictability: outputs may vary based on learning, even with identical inputs.

5) Data Dependency: heavy reliance on substantial amounts of data for training, learning, or operational functionality.

Systems exhibiting these characteristics qualify as AI systems requiring regulatory consideration under the Act.
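As a rough illustration (not a legal test), the five indicators can be framed as a first-pass screening checklist. The threshold below is an arbitrary assumption for demonstration purposes only.

```python
# Hypothetical first-pass screen using the five indicators above.
# This is an illustrative heuristic, not a legal determination.

AI_SYSTEM_INDICATORS = [
    "adaptability",      # modifies behavior from new data after deployment
    "autonomy",          # operates with minimal human intervention
    "complexity",        # handles multifaceted problems or large datasets
    "unpredictability",  # outputs may vary even with identical inputs
    "data_dependency",   # relies on substantial data to train or operate
]

def screen_for_ai_system(observed: set[str]) -> bool:
    """Flag a system for regulatory review if it shows these indicators."""
    matched = [i for i in AI_SYSTEM_INDICATORS if i in observed]
    # Threshold chosen only for illustration: several matches strongly
    # suggest the system falls within the Act's definition of an AI System.
    return len(matched) >= 3

# A CEET-like profile: a neural network inferring emissions from inputs.
print(screen_for_ai_system({"adaptability", "autonomy", "complexity",
                            "unpredictability", "data_dependency"}))  # True
```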


3. Risk Classification Framework

The EU AI Act establishes three distinct risk categories.

Prohibited: AI systems posing unacceptable risks to fundamental rights and democratic values; systems that manipulate, exploit, or discriminate. Examples: social scoring by governments, workplace emotion recognition, subliminal manipulation techniques, and biometric categorization inferring political beliefs or sexual orientation.

High-Risk: Systems with significant impact on health, safety, or rights. These require strict compliance, including human oversight and risk management systems. Examples: employment recruitment/termination systems, credit scoring algorithms, educational assessment tools, biometric identification, and critical infrastructure control systems.

Low-Risk: General-purpose systems with minimal individual impact and appropriate human oversight. Examples: customer service chatbots, content recommendation engines, basic analytics tools, and simple automation systems.

Key Principle: Classification depends on use case and potential impact, not just technology type; the same AI model could fall into different categories based on its application.
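This principle can be sketched as a simple triage function. The use-case labels below are illustrative stand-ins drawn from the examples above; real classification requires case-by-case legal analysis of the concrete deployment.

```python
# Sketch of the three-tier triage described above. Categories and
# examples mirror this article; the same model lands in different
# tiers depending on how it is applied.

PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition",
                   "subliminal_manipulation", "sensitive_biometric_categorization"}
HIGH_RISK_USES = {"employment_decisions", "credit_scoring",
                  "educational_assessment", "biometric_identification",
                  "critical_infrastructure_control"}

def classify_use_case(use_case: str) -> str:
    """Map an intended use to the article's three risk tiers."""
    if use_case in PROHIBITED_USES:
        return "PROHIBITED"
    if use_case in HIGH_RISK_USES:
        return "HIGH-RISK"
    return "LOW-RISK"

# The same underlying model lands in different tiers by application:
print(classify_use_case("building_carbon_analytics"))  # LOW-RISK
print(classify_use_case("employment_decisions"))       # HIGH-RISK
```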


Section 2: The CEET Case Study and Scenario Analysis


An organization, which we keep anonymous here, asked us to classify one of their AI systems. Wisely, they involved us during the ideation phase, before development of the AI system itself. Did you know that designing AI governance first saves anywhere from six weeks to six months compared with retrofitting compliance? You should consider assessing compliance right now. But first, let's look at what this AI system does and how it is classified under the EU AI Act.


4. Case Study: Carbon Emission & Energy Emission Tool (CEET) System Overview

The Carbon Emission & Energy Emission Tool (CEET) is an automated tool that infers energy and carbon emissions across a portfolio of pre-defined building types. First, specialized software simulates the energy and carbon emissions of more than 10 million simplified building models. The tool then uses these data points to train a model that infers the energy and carbon emissions of the desired building type. CEET enables users to quantify operational and embodied carbon impacts for 16 representative building types across global climate zones by adjusting 30 building design and mechanical characteristics through an intuitive interface.
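CEET's internal design is not public, so the following is only a minimal sketch of the "simulate, then learn" pattern described above. The synthetic data, feature handling, and model choice are assumptions standing in for the real simulation corpus and architecture.

```python
# Minimal sketch of a surrogate model trained on physics-simulation
# outputs, so new building configurations can be evaluated instantly.
# Features, targets, and model choice are illustrative assumptions,
# not CEET's actual design.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the simulation corpus: each row is one simplified building
# model (e.g., floor area, glazing ratio, insulation, climate zone, ...).
n_samples, n_features = 50_000, 30          # article: 30 design inputs
X = rng.uniform(0, 1, size=(n_samples, n_features))
# Stand-in for simulated annual carbon emissions (kgCO2e): a synthetic
# nonlinear function used here in place of the real energy simulation.
y = 100 * X[:, 0] + 50 * np.sin(3 * X[:, 1]) + 20 * X[:, 2] * X[:, 3]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The surrogate learns the simulator's input-output mapping once,
# then answers new "what if" design queries in milliseconds.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200,
                         random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out simulations: {surrogate.score(X_test, y_test):.3f}")
```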


CEET Example: Why it qualifies as an AI System

The Carbon Emission & Energy Emission Tool meets the EU AI Act definition of an AI system. It uses neural networks trained on millions of simulated energy models, demonstrating adaptability and data dependency. The platform operates with autonomy, automatically generating carbon calculations from building parameter inputs without constant human intervention. It exhibits complexity by processing multifaceted building design variables, and unpredictability through learned inference patterns. Most importantly, CEET infers from inputs (building characteristics) to outputs (energy predictions and carbon estimates) that influence decision-making in physical environments. This fully aligns with the core definition of an AI system under the EU AI Act.


Classification Analysis and Reasoning

CEET qualifies as a low-risk AI system under the EU AI Act. The classification analysis reveals no prohibited-practice indicators: CEET doesn't involve emotion recognition, biometric categorization, social scoring, or vulnerability exploitation. It also lacks high-risk indicators such as employment decision-making, biometric identification, critical infrastructure safety operations, or autonomous navigation. Instead, CEET serves building design analytics with appropriate human oversight, functioning as a "rapid decision-making tool" that documents its limitations. The platform poses no risk of harm to individual rights or safety, focusing purely on carbon modeling for architectural applications. This straightforward use case fits squarely within low-risk parameters, requiring only basic transparency compliance.


How Low-Risk AI Systems Can Trigger Penalties: Practical Examples

Even low-risk systems face substantial penalties through systematic transparency violations. Let's explore how CEET could face this challenge if it were designed differently. Consider a version of CEET deployed without informing its users (architects, engineers, or facility managers) that an AI system produced the output. Users would instead assume that human experts created and validated the analysis. This creates systematic transparency violations across the entire user base, and the company could face penalties of €7.5 million or 1.5% of global turnover for this seemingly simple oversight.

Another scenario involves misleading marketing: promoting the platform as "expert human analysis" while it actually uses neural networks systematically deceives customers about AI involvement. Third-party integrations present additional risks when the platform feeds data into partner construction software without disclosing AI origins, creating downstream transparency violations affecting thousands of end users. Public-facing sustainability calculators powered by the system without a clear "This analysis uses AI" disclaimer could trigger maximum penalties across millions of interactions. The critical insight is that scale amplifies simple violations into systematic non-compliance, transforming minor oversights into substantial regulatory exposure.
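One way to avoid exactly this failure mode is to attach an AI-origin disclosure to every delivered result, including those handed to third-party integrations. The sketch below is an illustrative pattern with hypothetical field names and wording, not a prescribed legal text.

```python
# Minimal sketch of the transparency pattern discussed above: every
# output surfaced to a user (or passed to a partner integration)
# carries an explicit AI disclosure. Wording is illustrative only.

AI_DISCLOSURE = ("This analysis was generated by an AI system trained on "
                 "simulated building-energy data. Results are estimates and "
                 "should be reviewed by a qualified professional.")

def deliver_result(payload: dict) -> dict:
    """Attach the AI-origin disclosure to every outgoing result."""
    return {**payload, "ai_generated": True, "disclosure": AI_DISCLOSURE}

report = deliver_result({"building_type": "mid-rise office",
                         "embodied_carbon_kgco2e": 412_000})
print(report["disclosure"])
```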

  

5. Scenario Analysis

Now, let's analyze what modifications to CEET would elevate it to high-risk status under the EU AI Act.


Employment Applications: The key change would be CEET shifting from being a design analysis tool to becoming an automated employee evaluation system that directly impacts employment decisions. If CEET were modified to automatically evaluate and rank building designers, architects, or engineers based on the carbon performance of their building designs, and these AI-generated performance scores were used to make hiring, promotion, or termination decisions about these professionals, then CEET would become a high-risk employment decision system.


Critical Infrastructure Safety: The key change would be CEET shifting from carbon analysis to real-time operational control of critical building systems. If CEET were modified to automatically manage essential infrastructure like emergency power distribution, fire safety protocols, or life-support HVAC systems in hospitals or data centers based on its energy predictions, it would become a high-risk critical infrastructure system requiring strict safety controls and human oversight.


Access to Essential Services: The key change would be CEET shifting from design analytics to facility access decisions. If CEET were modified to automatically determine which delivery drivers, contractors, or employees can access specific company facilities based on predicted carbon impact of their visits, or to automatically allocate warehouse space to third-party sellers based on their predicted energy consumption profiles, it would become a high-risk system affecting access to essential business services with potential for discriminatory impacts.


Biometric Integration: The key change would be connecting CEET (currently low-risk) with biometric identification systems (inherently high-risk under EU AI Act). If CEET were integrated with facial recognition or fingerprint sensors to track individual employee energy usage and make automated workplace decisions based on personal identification data, the combined system would be classified as high-risk due to the biometric component. When low-risk AI systems are integrated with high-risk technologies, the overall system classification elevates to the highest risk level present.


Business Impact and Compliance Requirements

High-risk AI systems must comply with two sets of requirements. The first set contains three obligations specific to high-risk systems: i) implement robust risk management systems, ii) ensure continuous human monitoring with meaningful human control over automated decisions, and iii) conduct fundamental rights impact assessments before deployment.

The second set contains three further requirements: 1) use high-quality training data that has been tested for bias, 2) have third-party bodies carry out conformity assessments, and 3) always tell people when they are interacting with AI (a transparency obligation that also applies to low-risk systems). In total, high-risk AI systems must comply with all six obligations.
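Framed as a pre-launch checklist, the six obligations can be tracked in a simple structure. This is a minimal sketch with illustrative field names reflecting this article's framing, not the Act's own terminology.

```python
# Illustrative pre-launch checklist for a high-risk AI system, mirroring
# the six obligations above. Field names are this article's framing,
# not official EU AI Act terminology.
HIGH_RISK_OBLIGATIONS = {
    "risk_management_system": False,
    "human_oversight_with_meaningful_control": False,
    "fundamental_rights_impact_assessment": False,
    "bias_tested_training_data": False,
    "third_party_conformity_assessment": False,
    "user_transparency_disclosure": False,
}

def deployment_ready(checklist: dict[str, bool]) -> bool:
    """A high-risk system should satisfy all six obligations before launch."""
    return all(checklist.values())

print(deployment_ready(HIGH_RISK_OBLIGATIONS))  # False until every item is met
```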

These compliance requirements create two types of impact. Standard AI systems face three consequences: development timelines extend by 6-12 months for compliance implementation, operational costs increase due to ongoing monitoring requirements, and market entry may be delayed pending regulatory approval.

High-risk AI systems face additional severe impacts: legal liability exposure rises substantially, implementation costs can reach millions of euros annually, and human operators must maintain 24/7 override capabilities with specialized training protocols. However, compliant systems gain a competitive advantage through enhanced trust and regulatory alignment in global markets.



Section 3: The Modica.ai Competitive Framework




Executives are tired of receiving only "here is the law" when reaching out to traditional auditing and legal firms. The ugly reality is that high-value AI projects are stuck in compliance paralysis, frozen by the fear of fines of up to €35 million. This inertia sidelines innovation.


Modica.ai transforms this paralysis into competitive advantage through our proprietary AI Traffic Light™ framework, developed by Gabriele Modica.

Our AI Traffic Light™ system categorizes AI systems based on their potential impact on human rights, safety, and society:


Red Light (Prohibited): Complete stop. Systems like social scoring or workplace emotion recognition are banned entirely.

Amber Light (High-Risk): Proceed with extreme caution. Employment decision tools or biometric identification require strict safety measures, human oversight, and regulatory approval.

Green Light (Low-Risk): Proceed with basic safeguards. Systems like CEET's carbon analytics need only meet transparency requirements.


This intuitive framework helps organizations quickly assess compliance obligations, making complex regulations accessible for business decision-making.


The Modica.ai 5-Step Readiness Roadmap

Step 1 - Risk Classification & Inventory: Rapid portfolio assessment to determine which systems are Prohibited (RED), High-Risk (AMBER), or immediately safe to scale (GREEN).

Step 2 - Compliance Gap Analysis: Evidence-based measurement of documentation, data quality, and human oversight processes against EU AI Act requirements.

Step 3 - AI Governance: Immediately fence low-risk (GREEN) systems, allowing fast deployment while managing the few high-risk systems.

Step 4 - Remediation Guardrails: Design and embed mandatory controls into workflows for High-Risk (AMBER) systems, building auditable proof for regulatory approval.

Step 5 - Deployment Blueprint: Final blueprint for successful conformity assessment, transforming compliant systems into strategic trust-assets.


Why This Creates Competitive Advantage

As AI democratizes software creation, success is no longer about coding fastest, but governing smartest. Organizations with robust governance frameworks move faster and safer, creating operational excellence competitors cannot replicate. Early adopters gain first-mover advantage in trust and regulatory alignment worldwide.

 
 
 
