Engineering and Cybersecurity Ethics in the Age of AI
Adrian Bermudez
Article written by: Adrian
Article designed by: Adrian and Natasha Gumpula
The Engineering Foundations of AI Systems

Twenty-first-century artificial intelligence (AI) systems hinge on complex engineering infrastructures that combine large-scale data processing, machine learning algorithms, and specialized computing hardware. The development of modern AI typically follows a structured pipeline: data collection, preprocessing, model training, validation, and deployment within distributed computing environments. These systems are frequently supported by high-performance data centers designed specifically for AI workloads, where graphics processing units (GPUs) and other accelerators enable large neural networks to be trained on massive datasets. As AI technologies become integrated into industries such as finance, healthcare, and cybersecurity, the engineering systems responsible for training and operating these models play an increasingly significant role in shaping their behavior and their impact on society.
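The pipeline stages above can be sketched end to end. The toy example below (synthetic data, a nearest-centroid classifier, and an 80/20 split, all illustrative choices rather than a real production pipeline) walks through collection, preprocessing, training, and validation:

```python
# Minimal sketch of the AI development pipeline described above:
# data collection -> preprocessing -> training -> validation.
# A toy nearest-centroid classifier keeps the example dependency-free.
import random

random.seed(0)

# 1. Data collection: synthetic 2-D points from two classes.
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(100)]
data += [((random.gauss(3, 1), random.gauss(3, 1)), 1) for _ in range(100)]
random.shuffle(data)

# 2. Preprocessing: standardize each feature to zero mean, unit variance.
xs = [x for x, _ in data]
means = [sum(v[i] for v in xs) / len(xs) for i in range(2)]
stds = [(sum((v[i] - means[i]) ** 2 for v in xs) / len(xs)) ** 0.5 for i in range(2)]
scaled = [tuple((v[i] - means[i]) / stds[i] for i in range(2)) for v in xs]
dataset = list(zip(scaled, [y for _, y in data]))

# 3. Training: compute one centroid per class on an 80% split.
split = int(0.8 * len(dataset))
train, valid = dataset[:split], dataset[split:]

def centroid(points):
    """Mean position of a set of 2-D points."""
    return tuple(sum(p[i] for p in points) / len(points) for i in range(2))

centroids = {c: centroid([x for x, y in train if y == c]) for c in (0, 1)}

# 4. Validation: accuracy on the held-out 20%.
def predict(x):
    """Assign the class whose centroid is nearest (squared distance)."""
    return min(centroids, key=lambda c: sum((x[i] - centroids[c][i]) ** 2 for i in range(2)))

accuracy = sum(predict(x) == y for x, y in valid) / len(valid)
print(f"validation accuracy: {accuracy:.2f}")
```

In a real deployment, each of these stages would be a distributed service rather than a few lines of Python, but the structure of the pipeline is the same.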
The ethical implications of AI often originate from decisions made during the engineering design process: dataset selection, model architecture, and optimization objectives can all introduce unintended consequences that influence how automated systems behave in real-world applications. In particular, researchers have noted that training data can reflect historical and societal inequalities, which may then be reproduced by machine learning models. As scholars have observed, “structural inequalities in society are reflected in the data used to train predictive models and in the design of objective functions” (Kuhlman, 2020). This observation highlights a critical challenge for engineers: technical design decisions are not neutral. The way an AI system is engineered can influence fairness, accuracy, and the potential for harm, making ethical awareness an essential component of responsible system development.
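To make the quoted observation concrete, here is a deliberately minimal sketch of how skew in historical data becomes model behavior. The group names, approval rates, and the per-group majority "model" are all hypothetical:

```python
# Toy illustration of bias reproduction: a model trained on historically
# skewed outcomes simply encodes that skew as its policy. The "model"
# predicts the majority outcome for each group; names and rates are invented.
import random

random.seed(4)

# Historical records: group "A" approved ~80% of the time, group "B" ~30%.
history = [("A", random.random() < 0.8) for _ in range(500)]
history += [("B", random.random() < 0.3) for _ in range(500)]

def train_majority_model(records):
    """Return the majority outcome per group, the simplest possible model."""
    model = {}
    for group in {g for g, _ in records}:
        outcomes = [y for g, y in records if g == group]
        model[group] = sum(outcomes) > len(outcomes) / 2
    return model

model = train_majority_model(history)
print(model)  # the historical disparity becomes the model's decision rule
```

Real models are far more sophisticated than a per-group majority vote, but the underlying mechanism, optimizing to fit data that already encodes an inequality, is the same one the Kuhlman quotation describes.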
Cybersecurity Risks Introduced by Artificial Intelligence
Artificial intelligence has introduced powerful tools for automation and data analysis while simultaneously creating new challenges within the cybersecurity landscape. The same capabilities that allow machine learning systems to analyze large datasets and identify patterns can also be exploited by malicious actors. Attackers are increasingly using AI to automate phishing campaigns, generate highly convincing social engineering content, and produce synthetic media capable of impersonating individuals or organizations. These developments have expanded the scale and sophistication of cyber threats. Recent research describes this trend as a shift toward AI-enabled cybercrime, identifying emerging threats such as “deepfakes and synthetic media, adversarial AI attacks, automated malware, and AI powered social engineering” (Erukude, 2026).
A particularly concerning category of threats involves adversarial machine learning, in which attackers deliberately manipulate the inputs or training data of an AI model to alter its output. These attacks can cause AI systems to misclassify data, overlook malicious activity, or produce misleading results. According to the National Institute of Standards and Technology, AI systems may be vulnerable to attacks designed specifically to manipulate their behavior, demonstrating that “machine learning systems can be intentionally misled by carefully crafted inputs or training data” (NIST, 2024). Such vulnerabilities raise important questions about the reliability of AI-driven cybersecurity tools and highlight the need for stronger safeguards that address risks unique to machine learning technologies.
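A toy numerical sketch can illustrate the data-poisoning variant of these attacks. The nearest-centroid classifier, class positions, and injection amounts below are invented for illustration; real adversarial attacks are far subtler than this:

```python
# Toy illustration of training-data poisoning: injecting mislabeled points
# shifts a learned class centroid enough to flip the decision rule.
import random

random.seed(2)

# Clean 1-D training data: class 0 centered at 0, class 1 centered at 4.
train = [(random.gauss(0, 1), 0) for _ in range(200)]
train += [(random.gauss(4, 1), 1) for _ in range(200)]
test = [(random.gauss(0, 1), 0) for _ in range(100)]
test += [(random.gauss(4, 1), 1) for _ in range(100)]

def fit_centroids(samples):
    """Nearest-centroid 'model': one mean value per class label."""
    out = {}
    for label in (0, 1):
        pts = [x for x, y in samples if y == label]
        out[label] = sum(pts) / len(pts)
    return out

def accuracy(centroids, samples):
    predict = lambda x: min(centroids, key=lambda c: abs(x - centroids[c]))
    return sum(predict(x) == y for x, y in samples) / len(samples)

clean_acc = accuracy(fit_centroids(train), test)

# Attacker injects 150 far-away points falsely labeled as class 0,
# dragging the class-0 centroid past the class-1 centroid.
poisoned = train + [(random.gauss(10, 0.5), 0) for _ in range(150)]
poisoned_acc = accuracy(fit_centroids(poisoned), test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The attacker never touches the model itself, only the data it learns from, which is exactly why NIST treats training-data integrity as a distinct attack surface.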

Ethical Responsibilities of Engineers Designing AI
The growing influence of artificial intelligence has intensified discussions surrounding the ethical responsibilities of engineers who design and deploy these systems. AI technologies often operate autonomously and may influence decisions that affect individuals, institutions, and public systems. Because of this expanded impact, engineering practice must extend beyond technical performance and incorporate broader considerations such as transparency, fairness, and accountability. Professional engineering organizations have long emphasized the responsibility of engineers to prioritize public safety and welfare, and these principles are increasingly relevant in the context of AI development.
However, translating ethical principles into practical engineering decisions remains a complex challenge. Research on AI ethics frameworks suggests that technical guidelines alone may not be sufficient to ensure responsible system design. According to Wong, Madaio, and Merrill (2022), many existing AI ethics toolkits attempt to guide developers toward ethical practices but often struggle to address the broader organizational and institutional factors that shape technological development. As the authors explain, ethical toolkits frequently “focus on individual developer behavior rather than the institutional structures that influence technological decision making.” This insight suggests that ethical AI development requires not only responsible engineers but also supportive organizational cultures and governance structures that prioritize long-term societal considerations.
Building Secure and Responsible AI Systems
Addressing the ethical and cybersecurity challenges associated with artificial intelligence requires a combination of technical innovation and institutional oversight. Researchers and industry professionals have proposed a variety of approaches designed to improve the safety and transparency of AI systems. These include explainable AI techniques that allow developers to better understand how models make decisions, secure model training methods that reduce vulnerability to data poisoning attacks, and continuous monitoring systems that detect anomalies during deployment. When implemented effectively, these tools can help ensure that AI systems remain reliable even when operating in complex or adversarial environments.
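As one concrete, deliberately simplified example of continuous monitoring, the sketch below tracks running statistics of an input signal with Welford's online algorithm and flags values that drift far from the training-time distribution. The 3-sigma threshold and the single scalar feature are assumed, illustrative choices:

```python
# Minimal sketch of deployment-time monitoring: maintain a running mean
# and variance of past inputs (Welford's algorithm) and flag new inputs
# that fall far outside that distribution.
import random

class DriftMonitor:
    """Flags inputs far outside the running distribution of past inputs."""

    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)
        self.z_threshold = z_threshold

    def update(self, x):
        """Incorporate one observation into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        """True when x lies more than z_threshold std devs from the mean."""
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > self.z_threshold

random.seed(3)
monitor = DriftMonitor()
for _ in range(1000):              # baseline traffic observed so far
    monitor.update(random.gauss(0, 1))

print(monitor.is_anomalous(0.5))   # typical input -> prints False
print(monitor.is_anomalous(8.0))   # far out of distribution -> prints True
```

Production monitoring systems track many signals at once (input features, prediction confidence, error rates), but the principle is the same: detect when live traffic stops resembling what the model was validated on.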
At the same time, technological safeguards must be complemented by governance frameworks that promote accountability and collaboration. The rapid pace of AI development means that no single discipline can fully address its risks. Cybersecurity professionals, engineers, policymakers, and academic researchers must work together to identify emerging threats and establish standards for responsible system development. As noted in cybersecurity research, effective responses to AI-driven threats depend on ongoing cooperation between technology developers and security professionals who can anticipate and counter evolving attack methods (ISACA, 2025). Technology organizations that combine ethical engineering practices with collaborative oversight will be better able to balance the benefits of AI with the responsibility to protect the systems, and the communities, that depend on them.
Sources
"Addressing the Rise of AI-Driven Cyberattacks." ISACA Journal, vol. 1, 2025, www.isaca.org/resources/isaca-journal/issues/2025/volume-1/addressing-the-rise-of-ai-driven-cyberattacks.
Erukude, Samuel T., et al. "AI Driven Cybersecurity Threats: A Survey of Emerging Risks and Defensive Strategies." arXiv, 7 Jan. 2026, arxiv.org/abs/2601.03304.
Kuhlman, Caitlyn, et al. "No Computation Without Representation: Avoiding Data and Algorithm Biases Through Diversity." arXiv, 25 Feb. 2020, arxiv.org/abs/2002.11836.
National Institute of Standards and Technology. "NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems." NIST, 4 Jan. 2024, www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems.
"PowerEdge AI Servers." Dell Technologies, 2024, www.dell.com.
Thoma, Ben. "AI Data Centers Threaten to Derail Climate Progress in Indiana." WFYI Public Media, 13 June 2024, www.wfyi.org/news/articles/ai-data-centers-threaten-to-derail-climate-progress-in-indiana.
Wong, Richmond Y., et al. "Seeing Like a Toolkit: How Toolkits Envision the Work of AI Ethics." arXiv, 17 Feb. 2022, arxiv.org/abs/2202.08792.



