In 2025, companies continued to grapple with several key challenges in secure development. The advancement of AI and generative AI has fundamentally changed the threat landscape and, as a result, has significantly impacted secure development practices.
One of the key issues is the increasing complexity of AI-based attacks, so it is critical for development teams to integrate robust security measures directly into their workflows. In addition, ensuring the security of AI systems throughout their lifecycle remains a critical challenge. This includes not only the secure development of AI solutions, but also protecting models, particularly LLMs, from vulnerabilities such as data poisoning or prompt injection. At the same time, traditional security measures, such as monitoring, logging, and intrusion detection, also remain necessary for effective AI system management.
Supply chain attacks remain a significant threat. Compromising software components, whether open source or commercial, can have critical consequences. Organisations must prioritise the management and monitoring of software supply chain risks, including the use of Software Bills of Materials and careful patch management.
The proliferation of cybersecurity regulatory requirements adds an additional layer of complexity. Organisations have to operate in a fragmented environment of regional and global standards, making it difficult to ensure compliance and security of development processes simultaneously.