How to thrive in an AI-powered workplace

How Will Your Role Work with AI?

Author: Hugo Farinha, co-founder of Virtuoso QA

Integrating AI sensitively into the workplace, while acknowledging both the negative and positive perceptions around its impact, is going to be an essential skill for the C-suite over the next few years. The landscape will shift, putting greater emphasis on the skills needed to work alongside AI, and on others better suited to areas where human involvement is still essential.

The immediate impact of AI will be the automation of administrative tasks, leading to a reduction in entry-level roles. Over the long term, the focus will shift towards more strategic, analytical, and customer-facing positions as AI takes over routine tasks. The job market is going to evolve alongside AI advancements, requiring ongoing adaptation and skill acquisition to stay relevant.

Skills such as empathy, communication and negotiation will remain essential, serving as key differentiators in effectively achieving objectives with both humans and machines. In this new era, where machines are trained to understand human language and its nuances, these skills take on even greater importance. Nuances in language, such as tone, sentiment, context, and implied meaning, enable the transfer of deeper human emotions and intentions. These subtleties are critical not just for human collaboration but also for guiding machines to interpret instructions, respond appropriately, and deliver optimal results aligned with human expectations.

Understanding AI tools and data analysis will be increasingly important, even for non-technical roles. As AI becomes more integrated, the need for professionals who understand the ethical implications and regulatory requirements will grow. Outside of testing, AI is already poised to take over decision-making tasks across many industries, from financial market trading to human resources, the legal sector and healthcare.
So, what are the kinds of roles within software and testing that we’ll see being offered over the next few years?

Agentic AI Workflow Designer

An Agentic AI Workflow Designer will implement dynamic testing workflows using Agentic AI, enabling adaptive testing based on system behaviour and conversational machine-to-machine problem-solving. Rather than following rigid, predefined workflows, this role will improve efficiency by optimising test paths in real time and reducing redundancies, ensuring tests are always aligned with the evolving needs of the project.

AI Interaction and Integration Designer

The AI Interaction and Integration Designer evolves the traditional UI/UX designer role by focusing on creating seamless, collaborative experiences between users and AI agents. This role emphasises designing end-to-end user journeys where AI serves as a proactive partner, sharing cognitive, creative, and logistical tasks. It requires crafting interactions that feel natural, empathetic, and personalised while ensuring AI integrates seamlessly across ecosystems. Balancing user control with AI autonomy, these designers prioritise transparency, ethical considerations, and adaptability, transforming static interfaces into dynamic, human-AI partnerships that enhance productivity and engagement.

AI Model Validation Engineers

AI Model Validation Engineers will validate AI models, ensuring their accuracy, fairness, and reliability. The AI aspect of this role addresses unique issues like model drift and bias, making the process more efficient by identifying problems early in the AI development lifecycle.

AI Ethics Specialist

Ethics, governance and compliance are going to gain enormous value and importance to organisations. An AI Ethics Specialist will be required to ensure Agentic AI systems meet ethical standards like fairness and transparency.
This role will have to involve someone using specialised tools and frameworks to address ethical concerns efficiently and avoid potential legal or reputational risks. Human oversight to ensure transparency and responsible ethics is essential to maintain the delicate balance between data-driven decisions, intelligence and intuition.

Autonomous Testing Engineer

An Autonomous Testing Engineer will design fully autonomous testing systems powered by Agentic AI. Unlike manual or semi-automated testers, this role maximises efficiency by removing the need for human intervention in repetitive or regression testing, allowing teams to focus on more complex, exploratory tasks.

AI-Driven Test Strategist

That leads us to the AI-Driven Test Strategist, who uses AI to develop high-level strategies to identify critical testing areas and prioritise resources effectively. The results achieved by AI-Driven Test Strategists will be more efficient than those of traditional test managers: traditional strategists rely on experience and intuition, but the AI-Driven Test Strategist uses data-driven insights to optimise efforts and prioritise areas of highest risk or value.

AI Test Data Specialist

Traditional testers manually create or extract test data, but an AI Test Data Specialist will design and manage synthetic test data using AI, ensuring realistic test scenarios while addressing privacy concerns. As a result, this tester can achieve greater efficiency by generating diverse datasets at scale, reducing the time spent on preparation and ensuring compliance with data protection regulations.

Agentic AI Trainer and Configurator

Agentic AI Trainers and Configurators will adapt to domain-specific requirements by creating AI-driven systems that dynamically adjust to new inputs and requirements.
AI Bug Detector

AI will be used to predict potential bugs before they occur, focusing testing efforts on high-risk areas. This reduces rework, shortens development cycles, and lowers costs, making the AI Bug Detector a hugely important role.

Conversational Test Automation Engineer

Chatbots and voice assistants will be tested by a Conversational Test Automation Engineer using AI-driven tools for dynamic interaction validation. Traditional testers often struggle with the complexity and variability of conversational interfaces, but Agentic AI improves efficiency by automating testing across multiple scenarios and languages.

Continuous AI Monitoring Specialist

A Continuous AI Monitoring Specialist will detect anomalies and performance issues in real time while monitoring AI systems in production. This position leverages AI for proactive issue detection and rapid incident response, minimising downtime.

AI Lifecycle Manager

An AI Lifecycle Manager will be required to oversee the integration and lifecycle of AI systems in the SDLC and align development and testing efforts with evolving business needs.

AI Overseer

And finally, an AI Overseer. This role is going to involve monitoring the entire Agentic stack of agents and arbiters, the decision-making elements of AI.

AI integration is going to be an evolution as well as a revolution. It has the potential to have more of an impact in a shorter

How AI can assess a study’s novelty and impact various industries

Innovative AI tools in scientific research

In today’s rapidly evolving world, AI is not just transforming industries but also unlocking new opportunities for innovation across almost all sectors, including research. One such opportunity in scientific research is to create an objective method for evaluating the originality of research: an AI-powered tool for novelty scores. By assessing the novelty of studies, i.e. how surprising and innovative yet sensible they are, these scores enable industries such as pharmaceutical research and development to identify and gauge areas for innovation more efficiently, helping to significantly speed up the traditional peer review process. Professor Philipp Koellinger, Co-Founder and CEO of the preprint start-up DeSci Labs, explores how embracing AI in scientific publishing can accelerate progress and drive meaningful advancements across multiple industries.

The Novelty Scores Tool can make it easier and quicker to filter and find research that is novel and innovative, and therefore more likely to make a difference. The traditional peer review process of many journals often prioritises articles that are viewed as novel or surprising. However, the process is not only frustratingly slow but also highly subjective and prone to biases. An objective measure can ensure that research is novel, unlocking the potential to drive meaningful advancements across the many sectors that rely on science.

AI is a powerful tool to help streamline processes, making them more efficient and effective. By automating tasks and providing advanced analytical capabilities, AI can significantly reduce publication times, enabling faster dissemination of research findings. Additionally, AI can enhance the quality of research by offering deeper insights and identifying novel patterns that may otherwise go unnoticed.
More generally, it can enhance the accessibility of research, potentially driving faster scientific progress and fostering innovation. Novelty scores and other AI tools can play a role in encouraging scientists to take higher risks in the research questions they tackle, to think outside the box, and to do unusual things that can lead to breakthroughs. These breakthroughs would also benefit industries, many of which are in high need of innovation to solve pertinent global issues. Such industries include pharmaceuticals, energy, agriculture, and transportation, all of which heavily rely on science to propel their projects.

How does the novelty score tool work?

The Novelty Scores Tool was created to measure the originality of scientific research by evaluating two distinct aspects of scientifically published texts: content and context. Content novelty is determined by analysing the unique combinations of topics and concepts within a manuscript, while context novelty assesses the distinctiveness of the combination of topics and specialisations of the articles cited in the reference list. Both scores take into account not only how surprising the combination of various fields in a publication is, but also how likely it is to be successful. These scores are derived using a mathematical framework introduced by Professors James Evans and Feng Shi from the University of Chicago, which has shown a strong correlation between higher novelty scores and increased citation rates, even in prestigious scientific journals. By providing an objective measure of originality, novelty scores help identify research with the potential for significant impact.

How can AI tools support different industries?

This AI tool, along with others in the future, can support industries by providing broader scientific insights at high speed and enhancing reproducibility and novelty in scientific research, which are critical for driving innovation and technological progress.
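As a toy illustration of the content-novelty idea described above (this is not the Evans and Shi framework itself, and the topic labels are invented for the sketch), a manuscript whose topic pairings are rare in a reference corpus can be scored higher than one that recombines familiar pairings:

```python
from itertools import combinations
from math import log

def novelty_score(doc_topics, corpus):
    """Toy content-novelty score: rare topic pairings score higher.

    doc_topics: set of topic labels in one manuscript.
    corpus: list of topic sets from previously published work.
    """
    # Count how often each topic pair co-occurs in the corpus.
    pair_counts = {}
    for topics in corpus:
        for pair in combinations(sorted(topics), 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1

    pairs = list(combinations(sorted(doc_topics), 2))
    if not pairs:
        return 0.0
    # Surprisal of each pairing: rarer combinations contribute more.
    n = len(corpus)
    total = sum(log((n + 1) / (pair_counts.get(p, 0) + 1)) for p in pairs)
    return total / len(pairs)

corpus = [
    {"genetics", "statistics"},
    {"genetics", "statistics"},
    {"economics", "statistics"},
]
# Pairing genetics with economics is unseen in this corpus, so it scores
# higher than the common genetics/statistics pairing.
print(novelty_score({"genetics", "economics"}, corpus) >
      novelty_score({"genetics", "statistics"}, corpus))
```

The real tool also weighs how likely an unusual combination is to succeed; this sketch captures only the "surprising combination" half of that equation.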
For the novelty scoring system specifically, impactful and groundbreaking research can be identified in seconds, helping industries prioritise advancements with the greatest potential to lead to new products, services or solutions. It also fosters a stronger feedback loop between science and technology, where advancements in one drive progress in the other, accelerating innovation in fields like healthcare, biotechnology and beyond. Novelty scores, in this instance, demonstrate whether there is room for an invention or new development, and identify when researchers and developers might be ‘reinventing the wheel’. This not only saves time and money but helps industries harness scientific discoveries more effectively to address urgent challenges and drive meaningful progress.

The traditional peer review process often takes months or even years to complete, and its reliance on subjective judgment makes it challenging even for experienced scientists to assess novelty accurately. This makes feedback and assessment incredibly slow, and leaves the system of scientific research less agile and slower to change direction. Moving toward more efficient and objective systems and utilising AI is essential to advancing science and driving industrial projects forward in an efficient and impactful manner, without losing innovation and creativity.

How can AI improve science?

Using an AI tool such as novelty scores can improve scientists’ chances of doing research that is of high interest to publishers. Having an objective quality score that indicates a study is original is likely to help streamline the peer review process. With the novelty already validated through advanced algorithms, there is less room for debate during review, ultimately reducing the time spent on discussions and improving the efficiency of the process. Additionally, researchers can use this tool to enhance the novelty of their work, as it allows them to check and refine their research.
This ensures that the science being published not only demonstrates high levels of novelty but also meets the rigorous standards expected in scientific communities, ultimately contributing to the advancement of knowledge in the field. You can find out more about how novelty score tools are developed, and the effect they can have on science, and access your novelty score here.

Get in touch

For event sponsorship enquiries, please get in touch at oliver.toke@31media.co.uk or calum.budge@31media.co.uk. For media enquiries, please get in touch with vaishnavi.nashte@31media.co.uk

Eight AI trends to prioritise in 2025

Author: Richard Farrell, CIO at Netcall

Whilst 2024 saw the adoption of Generative AI (GenAI) accelerate, it also witnessed significant debates and challenges around its use within business, from security and data privacy concerns to questions about its true value. As CIOs prepare to tackle AI in 2025, Richard Farrell, CIO of Netcall, shares the top trends that must be prioritised in the year ahead to ensure successful adoption.

Balancing Security and AI Demands

Cybersecurity remains a top priority for today’s business leaders. As cyber threats escalate, CIOs in particular face intensified pressure to secure operations, while CEOs expect fast-track GenAI innovations. Managing both demands in tandem requires careful strategy, with security remaining central. Unfortunately, organisations often have finite resources available in this area, making the task of addressing cybersecurity extremely challenging.

To overcome this, organisations looking to adopt AI in 2025 would be wise to use solutions that mitigate some of the associated security and data privacy risks. One option is processing AI within the same managed environment as workflows and applications – keeping it local, safe, and secure, and ensuring that sensitive data is protected and not used for future AI training.

Overcoming “Copilot Overload”

Many organisations under pressure to bring AI into the business have adopted various copilots this year. However, rather than simplifying workflows, the current overload of copilot systems is likely to complicate them if not managed effectively. Currently, copilots are available across internet browsers, CRMs, and numerous other office applications. This siloed and uncontrolled approach can not only become costly for organisations using multiple copilots, but the information obtained is likely to be inconsistent, leading to additional time being spent reviewing it before it can be used.
Instead, a streamlined platform approach can help CIOs avoid over-dependence on fragmented AI tools, ensuring robust, cohesive tech operations in the year ahead.

From AI Hype to Practical Application

There has been immense hype surrounding AI in 2024 – and GenAI in particular – however, this hype will begin to fade as the focus moves towards practical application in business. According to the Gartner Hype Cycle for Artificial Intelligence, 2025 will see the progress of GenAI slide into the trough of disillusionment as negative press increases and governments increase their focus on regulation.[1] Whilst many organisations can now say they are using GenAI in their business via various copilots, the reality is that this merely scratches the surface of what AI can offer a business. It may be a tick in the box when it comes to whether a business is engaging with this technology, but it isn’t necessarily driving substantial value.

Agility through Embedded AI

Based on this, by 2025, CIOs will need to distinguish hype from genuinely valuable AI applications. Organisations will start to recognise this, and lean towards AI that is embedded within applications rather than a standalone function. Platform-based applications offer tailored tools that can evolve alongside core teams, enabling CIOs to develop practical solutions with clear business outcomes. Following this embedded approach to AI also allows businesses to accelerate digital initiatives whilst safeguarding core operations from the risks associated with rapid GenAI deployment.

Sustainability-Driven IT Choices

With increasing scrutiny on sustainability, 2025 will also see CIOs face pressure to make eco-friendly choices surrounding AI deployments.
Now that many have picked the low-hanging fruit when it comes to ESG (low-power lighting, green energy contracts, hybrid work locations, and some migration to cloud services), more focus is being placed on sustainability in the supply chain and around new technologies introduced into the business. Whilst determining the exact energy cost of AI models is challenging, it is clear that their carbon footprint is growing at an alarming pace, with a recent article from Carbon Credits suggesting they use more energy than any other type of computing.[2] Attention, therefore, will turn to the introduction of agile frameworks that support sustainability innovation, allowing businesses to scale AI responsibly. Suppliers can support this movement by hosting AI in data centres that are powered by 100% renewable energy.[3] Used effectively and via an integrated approach, AI can also help organisations become more efficient and sustainable in their operations, by reducing manual processing and significantly cutting carbon emissions.[4]

Leveraging Citizen Developers for Faster Transformation

To meet rapid digital demands, CIOs will increasingly rely on citizen developers: employees outside the IT function who contribute to application creation. This is something we expect will continue to increase in the year ahead as the pressure to transform grows, a sentiment also echoed in Gartner’s 2025 CIO Agenda.[5] Within the eBook, Gartner promotes the empowerment of employees in business areas, not just IT staff, to use digital platforms to produce digital solutions. However, with great power comes great responsibility, and ensuring the necessary guardrails are put in place around development at both a citizen and IT employee level is crucial.
Fortunately, today, platform-based low-code solutions are enabling broader teams that operate at the coal face of the business to co-create applications, reducing dependency on scarce developer resources while maintaining quality control. This is particularly important in highly regulated industries and environments that are seeing the rise of ‘fusion teams’ or ‘digital factories’, which strike a balance between IT and stakeholders across the business, innovating together to achieve the best outcome.

Enhanced Data Privacy Controls

With privacy regulations evolving, data governance will remain critical in 2025. When considering AI, data – and personal data in particular – continues to pose significant challenges, with organisations needing to ensure they have a lawful basis for processing, and that they are aligning with both current and upcoming regulations. In November, for example, the Data (Use and Access) Bill was discussed in parliament,[6] which will see potential changes made to the UK’s data protection legislation. Organisations will need to keep abreast of these legislative changes and ensure they are opting for embedded AI solutions that have integrated privacy by design, to help CIOs maintain compliance even with extensive AI use.

AI with Clear

Harnessing Generative AI for Root Cause Analysis

Minimise dev downtime with workflow automation

Author: Neha Khandelwal, Software Design Assurance Engineering Manager, Zebra Technologies

Root Cause Analysis (RCA) is a critical yet time-consuming process in the realm of software quality assurance (QA). It requires meticulous investigation to pinpoint the underlying cause of defects, often involving sifting through logs, analysing code, and correlating various system behaviours. QA teams spend significant effort identifying the root cause, which can delay resolutions and increase downtime. As systems grow more complex, traditional RCA methods become less efficient, demanding advanced tools to assist in automating and streamlining the process. Enter generative AI (GenAI): a burgeoning technology and useful tool for software quality assurance teams that holds the potential to transform RCA by rapidly identifying patterns, predicting causes, and even recommending solutions, ultimately accelerating bug detection and resolution.

One state-of-the-industry survey found that the heaviest use of GenAI (50% of QA teams) is in the form of test data generation, followed by test case creation (48%). In terms of cognitive AI-based use cases, analysis of test logs and reporting is most prominent, especially in large organisations (38%), followed by AI for visual regression testing. The survey found that 30% of respondents believe that AI can enhance productivity in software quality assurance.

What is AI-powered RCA?

AI-powered RCA leverages AI and machine learning to automate the process of identifying the underlying causes of incidents or outages in software and IT systems. Traditional RCA methods require engineers to manually sift through logs, metrics, and telemetry data, which can be time-consuming and prone to human error. AI-powered RCA, on the other hand, uses algorithms and models to analyse large datasets, detect patterns, and pinpoint the root cause faster and more accurately.
Incorporating AI into RCA not only reduces the time spent on troubleshooting but also enhances accuracy, helping organisations resolve issues more quickly, minimise downtime, and avoid repeated incidents. Here are some examples of how GenAI can be integrated with existing tools to enhance the RCA process, with example use cases:

SonarQube

Integration: Utilise a GenAI model trained on code quality, bug patterns, and best practices alongside SonarQube. This AI can analyse identified code issues and provide context-aware explanations and suggestions for fixes.

Example Use Case: Suppose SonarQube identifies a potential null pointer exception in the code. A GenAI model can analyse the issue, look at the context of the code, and suggest alternative code snippets or defensive coding strategies to prevent the exception. This suggestion is displayed directly in SonarQube’s dashboard for developers to review and implement.

Postman

Integration: Integrate GenAI with Postman’s API testing environment to identify and diagnose API failures. The AI can simulate API calls, check the codebase, and provide detailed suggestions on what might have gone wrong.

Example Use Case: If an API endpoint fails during testing due to a timeout, the AI can analyse the request payload, server response, and recent code commits. It may pinpoint that a specific parameter format was modified and suggest reverting it. This analysis is then documented in Postman’s interface, and the suggested fix is provided directly to developers.

Elastic Stack

Integration: Integrate a GenAI model with the Elastic Stack to perform causal analysis of logs. The AI can read complex Kibana dashboards, detect anomalies, and generate natural language explanations of the underlying issues.
Example Use Case: If a specific microservice shows higher-than-normal latency, the AI integrated with Kibana could analyse logs, trace user interactions, and identify that the latency is due to an inefficient database query introduced in the latest update. It can then recommend query optimisations, explain the rationale, and simulate the impact of the fix.

GitHub Copilot X

Integration: Integrate GitHub Copilot X (or a similar GenAI code assistant) to analyse code errors within the development environment. The AI can access the codebase, review commit history and provide fixes directly in the IDE.

Example Use Case: Suppose an error is reported due to a recent code merge. The AI can check the changes in Git, simulate scenarios based on the new code, identify the commit introducing the bug, and offer a corrected code snippet. Additionally, it can generate an explanation of why the error occurred, aiding in understanding the root cause.

Dynatrace

Integration: Leveraging a GenAI model in Dynatrace can provide insights into detected anomalies and errors. The AI can perform deeper diagnostic evaluations based on performance metrics, logs, and traces.

Example Use Case: If Dynatrace identifies a memory leak, the GenAI can analyse the timeline of system events, correlate them with code changes, and provide a narrative on why the leak is happening, suggesting potential fixes like garbage collection optimisations. The AI can even suggest changes to specific lines of code that may be responsible for memory allocation issues.

By leveraging such integrations, we can turn RCA into a more proactive, data-driven, and intelligent process, reducing time spent on finding issues and increasing overall software quality.

Implications for Software Testers, QA Analysts and the Industry

The integration of AI-powered RCA is reshaping the responsibilities of software testers and QA analysts, highlighting the urgency of embracing this shift.
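The latency and memory-leak examples above share one mechanical core: detect an anomaly in a metric stream, then correlate it with recent changes. A minimal rule-based sketch of that correlation step is below (the GenAI layer would then turn the result into a narrative explanation; all names and values here are illustrative, not any vendor's API):

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

def suspect_deploys(latency_samples, deploys, threshold=2.0):
    """Flag the deploy that immediately precedes a latency anomaly.

    latency_samples: time-ordered list of (timestamp, latency_ms) tuples.
    deploys: list of (timestamp, commit_id) tuples.
    A sample is anomalous when its z-score against the whole series
    exceeds `threshold`; the most recent deploy before the first
    anomalous sample is returned as the prime suspect.
    """
    values = [v for _, v in latency_samples]
    mu, sigma = mean(values), stdev(values)
    for ts, v in latency_samples:
        if sigma and (v - mu) / sigma > threshold:
            earlier = [d for d in deploys if d[0] <= ts]
            return max(earlier, default=None, key=lambda d: d[0])
    return None  # no anomaly found

# 30 minutes of normal latency, then a sustained spike after a deploy.
t0 = datetime(2025, 1, 6, 9, 0)
samples = [(t0 + timedelta(minutes=i), 40.0) for i in range(30)]
samples += [(t0 + timedelta(minutes=30 + i), 400.0) for i in range(5)]
deploys = [(t0 + timedelta(minutes=5), "abc123"),
           (t0 + timedelta(minutes=28), "def456")]
print(suspect_deploys(samples, deploys)[1])  # prints: def456
```

A production pipeline would of course use sliding windows and richer signals than a single z-score, but the shape of the logic — anomaly onset, then nearest preceding change — is the same.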
By automating repetitive and time-intensive tasks such as sifting through logs and correlating system behaviours, AI allows QA professionals to focus on higher-value activities like improving processes and ensuring end-to-end quality. However, this evolution also calls for testers to acquire new skills in AI-driven analysis and tool utilisation. With the increasing complexity of software systems, traditional RCA methods are rapidly becoming insufficient, making the adoption of AI tools a necessity for staying competitive and maintaining the speed and precision demanded in today’s software landscape. The integration of AI-powered RCA in testing will also significantly alter the way QA teams address future challenges. One major shift will be the reliance on AI to predict and diagnose issues, which could reduce manual intervention but also create new complexities. For instance, as AI systems analyse datasets to identify potential failures, testers will need to ensure the accuracy and fairness of these AI-generated insights. A challenge lies in validating the decisions made by AI, especially in scenarios where faulty predictions could lead to unnecessary changes

Entries Now Open For The 2025 AI Awards

London, UK – The highly anticipated AI Awards & Summit 2025 is set to take place on 16th April 2025 in London, bringing together AI innovators from across the globe. This event promises a full day of immersive learning and networking, culminating in an evening dedicated to recognising the transformative contributions of AI professionals and organisations worldwide.

The AI Summit offers a unique platform for industry professionals to engage in meaningful conversations, explore the latest advancements in AI, and expand their networks. With eight brilliant and trending topics, the Summit covers the most pressing themes and opportunities within artificial intelligence, delivering valuable insights for attendees across industries. By entering the AI Awards, participants gain access to the Summit, unlocking invaluable opportunities to showcase their projects and exchange ideas with like-minded experts. Similarly, registering for the Summit ensures attendance at the prestigious awards ceremony.

With 14 main categories encompassing diverse subcategories, the awards celebrate outstanding achievements across a wide range of AI applications. These awards are open to companies of all sizes and industries worldwide, offering a level playing field to showcase innovation.

Grant Farrell, CEO & Director of 31 Media, shared his excitement: “The influence of AI is now clear, and our awards program highlights and celebrates those leading the charge. This event not only honours excellence but also fosters collaboration and innovation within this dynamic industry.”

To support entrants, helpful resources are also available. For attendees eager to explore Summit topics, the agenda featuring the eight trending themes is available on the Topics for AI Summit page. Mark your calendars and join us in London on 16th April 2025 to celebrate the achievements driving AI forward. Don’t miss the chance to be part of this global celebration of excellence in AI innovation.
For media inquiries or more information, please contact: vaishnavi.nashte@31media.co.uk

Entries Now Open for the Asia Pacific Software Testing Awards 2025

Bangalore, India – The Asia Pacific Software Testing Awards 2025 is now accepting entries, offering an unparalleled opportunity to celebrate excellence in digital technologies. This prestigious awards program recognises the outstanding achievements of businesses, teams, and individuals across the Asia Pacific region and the UAE who have demonstrated exceptional innovation and commitment to enhancing business processes, customer experience, or cultural transformation. The 15 diverse categories showcase accomplishments across various domains.

The deadline for entries is 28th May 2025, with finalists enjoying the chance to celebrate their success at an exclusive awards ceremony. Judged with complete impartiality and transparency, each submission is anonymised to ensure fairness, allowing projects to be evaluated solely on their merit.

Winning one of these coveted awards is a testament to excellence, showcasing a commitment to advancing software testing practices. Submit your entries now and take the first step toward earning industry-wide recognition for your achievements.

For further information, please contact: vaishnavi.nashte@31media.co.uk

The growing challenge of secrets management in DevOps and how automation can help


The age of agile software development and the cloud has caused an explosion in enterprise secrets. From passwords, encryption keys and API keys to tokens and certificates, digital secrets are what control access when data is transferred between applications. They’re essential to maintaining the security of that data, and in turn, the successful operation of digital enterprises.

A top emerging security challenge

It’s no surprise, then, that secrets are extremely valuable to threat actors. Businesses need a robust secrets management strategy in place in order to not only effectively manage secrets across their lifecycle, but also protect them from compromise. This is becoming a significant challenge within DevOps specifically, with Thales’ Data Threat Report finding that secrets management was identified by respondents as the top emerging DevOps security challenge.

The number of non-human entities – such as apps, APIs, containers and microservices – has also significantly increased in the past few years, adding to the complexity. Applications have become more agile and flexible, increasingly relying on APIs to draw on other sets of data and services. DevOps teams in particular, who are right at the centre of creating and managing these applications, therefore have their own secrets management challenges. With dozens of orchestration, configuration management and other tools used every day, these operations rely on an array of automation and other scripts that require secrets to work. Compromise could lead to software supply chain attacks, impersonation, or worse.

From an operational perspective, expired or unmanaged secrets can also cause system outages. Figures from the likes of the ITIC have put the cost of IT downtime at a minimum of $5,000 per minute, much of which will have been caused by misconfiguration, expired certificates, or other such issues that rigorous secrets management would eliminate.
Eliminate ‘security islands’

The tight deadlines and high expectations of modern software development often mean productivity is prioritised over security to ensure speedy delivery. Secrets might, for instance, be hardcoded into applications or configuration files to allow swift access to other applications. If those credentials have privileged access, they can be extremely powerful in the wrong hands. Manually sharing secrets, or keeping them to a limited few, also creates ‘security islands’, where only a small group in an organisation has specialised access no one else does. This can be a major brake on productivity – not to mention increasing the chances of employees seeking workarounds or turning to shadow methods to get their work done. Centralising the verification and generation of new credentials and secrets reduces the amount of manual work needed and allows developers’ workflows to continue more smoothly. It’s important to say this is not the fault of developers, who are often under extreme time pressure to deliver new builds and applications and aren’t necessarily motivated to dwell long on the security implications. Instead, it is the responsibility of IT leaders to establish the tools and environment needed to ensure developers can easily incorporate good secrets management practices into everything they do.

Keeping secrets out of source code

Secrets need to be kept separate from the source code, and a centralised vault helps with that – developers can make an API call to bring the necessary secret in when they need it. A management policy that supports dynamic secrets is even better, because these are time-limited. As they automatically expire, even if they are somehow compromised they will be of no use in attacking a business. This is where automation comes in. Secrets can be set to automatically generate and rotate at a predetermined frequency, alongside their storage and distribution.
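The dynamic, time-limited pattern described above can be sketched in a few lines. This is a minimal illustration using only Python’s standard library, not any particular vault product; in practice a managed service (such as HashiCorp Vault, AWS Secrets Manager or Azure Key Vault) would issue, expire and distribute the credentials.

```python
import secrets
import time
from dataclasses import dataclass


@dataclass
class DynamicSecret:
    """A credential that expires automatically after its TTL."""
    value: str
    expires_at: float

    def is_valid(self) -> bool:
        # An expired secret is useless to an attacker even if it leaks.
        return time.monotonic() < self.expires_at


def issue_secret(ttl_seconds: float = 300.0) -> DynamicSecret:
    # Generate a fresh, random credential on each request.
    return DynamicSecret(
        value=secrets.token_urlsafe(32),
        expires_at=time.monotonic() + ttl_seconds,
    )


def rotate(current: DynamicSecret, ttl_seconds: float = 300.0) -> DynamicSecret:
    # Automated rotation: replace the credential on a schedule,
    # so no secret lives longer than one rotation interval.
    return issue_secret(ttl_seconds)
```

Because every call returns a new random value, rotation never reuses or duplicates credentials, and an attacker who captures one gains only a short window of use.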
By removing the human element from the secrets management process, you eliminate the use of default, hardcoded, or even duplicate secrets. Any secrets management policy determined by the organisation – including those built on Zero Trust – becomes far easier to govern and enforce.

A crucial part of DevSecOps

While most cloud providers have at least one service that offers secrets management, such as AWS Secrets Manager or Azure Key Vault, in a multi-cloud world organisations may find it more effective to use a cloud-neutral secrets management solution that works across multiple cloud providers, as well as any on-premise locations they might have. Centralised secrets management also makes logging access, monitoring usage and sending alerts more feasible – not to mention scaling access management as applications grow. There are also numerous government regulations around the world, from GDPR to HIPAA, that mandate specific controls for secret storage and access – with secrets management tools helping to achieve compliance. Alongside measures like strong encryption, firewalls and regular patching and updating, a good secrets management platform complements these defences and helps fortify overall cybersecurity resilience.

Get in touch

For event sponsorship enquiries, please get in touch at olliver.toke@31media.co.uk or calum.budge@31media.co.uk. For media enquiries, please get in touch with vaishnavi.nashte@31media.co.uk.

Software standards at risk as QA teams face new levels of pressure

Digital quality is integral to the development and rollout of new services. Yet managers and development teams alike often underestimate the value of quality assurance (QA). As a result, many of the checks and balances required to deliver high-quality digital experiences are overlooked, putting companies at risk of losing customers and harming their reputations. Rob Mason, CTO of Applause, believes the situation could reach boiling point in 2025 as businesses race to release new apps and features, placing more pressure on under-resourced QA teams. His predictions below offer caution and advice to organisations that want to ensure digital quality across the software development lifecycle.

The quantity of substandard software releases will increase: In recent years, companies’ main strategy for staying ahead of the competition has been twofold: invest in developing new features and be the first to release them to market. Novelty and speed were prioritized above all else, while requirements like usability, accessibility, payments and localization took a backseat. The result has been superficially shiny products that don’t actually deliver value to users. QA teams now face a pressure cooker: they are being allocated fewer and fewer resources to test digital experiences that are getting more and more complex. Unless QA teams are given the time and space to influence decision-making, this situation will boil over next year as companies start to register its impact on customer retention, revenue and reputation.

Complexities of Gen AI testing will heap more pressure on QA: Gen AI has brought new challenges for QA teams. Unlike traditional software, Gen AI’s non-deterministic outcomes introduce a level of uncertainty into testing that is unfamiliar territory for QA professionals across the board. It’s a steep learning curve, yet many teams have not received retraining support.
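One practical response to that non-determinism is to test invariants of the output rather than exact strings. The sketch below is purely illustrative (the length budget and banned-term list are made-up placeholders, not Applause’s methodology): every acceptable model response must satisfy these properties, even though no two responses are identical.

```python
def check_genai_response(
    response: str,
    max_length: int = 2000,
    banned_terms: frozenset[str] = frozenset({"<PLACEHOLDER>"}),
) -> list[str]:
    """Return a list of invariant violations for a model response.

    Because the model is non-deterministic, we assert properties that
    every acceptable answer must satisfy instead of one exact output.
    """
    failures = []
    if not response.strip():
        failures.append("empty response")
    if len(response) > max_length:
        failures.append("response exceeds length budget")
    lowered = response.lower()
    if any(term.lower() in lowered for term in banned_terms):
        failures.append("contains a banned term")
    return failures
```

Tests like this can run on every build, flagging regressions in tone, length or safety even when the underlying model output changes from run to run.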
Add onto this the potential reputational damage that inaccurate, biased and toxic responses could cause companies, and QA teams suddenly find themselves under new levels of pressure. In 2025, QA will be integral to Gen AI’s success. Inaccuracies, hallucinations and biased content continue to plague LLMs, and serious slip-ups could lead to media headaches or even legal action on top of user churn. Proper training and testing are the foundation of high-quality Gen AI products, making it critical to business success in 2025 to embed QA into the earliest stages of product development.

Businesses risk losing overworked and undervalued QA professionals: QA teams at companies that don’t appreciate the role quality assurance plays in business success can find themselves in a catch-22. Insist on ensuring products meet the highest quality guidelines and they risk not meeting the demands of the business. Neglect quality, or do only the bare minimum, and they could face the blame when customers complain about bugs and poor user experiences. On top of this, QA teams today tend to be understaffed, face dwindling budgets and may be struggling to adapt to new technologies like Gen AI. If companies want to retain and attract top QA talent in 2025, they need to take action. Most importantly, they need to involve their QA staff earlier in the development process to ensure their concerns and advice are heard from the start. The most mature businesses recognize the strategic role QA plays in defining product roadmaps and appreciate that quality is the new frontier in winning and retaining customers.

66% of LinkedIn Users Believe AI Should Be Taught in High Schools

Artificial Intelligence (AI) should be introduced as a subject in high schools, given its growing importance in job postings. This is the opinion of LinkedIn’s community, surveyed by OPIT – Open Institute of Technology, an EU-accredited academic institution led by Professor Francesco Profumo, former Minister of Education, and Riccardo Ocleppo, Founder and Director.

According to the survey, 66% of LinkedIn users think it is essential to teach AI in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% stated that AI proficiency is a key requirement for the companies they applied to. AI is primarily used for text generation (38%), research and analysis (38%), and translations (23%). The survey was conducted among OPIT’s followers, a global audience of approximately 8,000, as part of the institution’s October 2024 academic year launch. Participants included professionals, students, and tech enthusiasts, providing valuable insights into current perceptions and trends surrounding AI.

The findings highlight a growing recognition of AI’s transformative role. AI is no longer a distant concept but a reality reshaping everyday work practices. Companies and professionals are rapidly adapting to remain competitive in a market where AI skills are increasingly indispensable.

“The growing awareness of AI‘s importance in the workplace suggests that professionals are actively integrating these skills into their daily practices. This shift opens opportunities for innovation and professional growth,” explained Riccardo Ocleppo, OPIT’s Founder and Director. “The technological transformation we are witnessing is reshaping the job market, with AI becoming central to this evolution. Rather than fearing it, we must study and understand it to harness its potential fully. At OPIT, integrating AI education across all degree programs is a key focus.
This approach equips students with the tools to succeed in a rapidly changing professional landscape driven by digital advancements.”

Innovative Degree Programs to Meet AI Demands

Since September 2023, OPIT has offered a bachelor’s degree in Modern Computer Science and a master’s in Applied Data Science & AI. In September 2024, four additional programs were launched: a bachelor’s in Digital Business and master’s degrees in Enterprise Cybersecurity, Digital Business & Innovation, and Responsible Artificial Intelligence. These additions bring OPIT’s total offering to six degree programs. The strong demand for its Computer Science and Digital Business degrees has led OPIT to reopen applications for January enrollment. Currently, OPIT serves over 300 students from 78 countries, with the largest contingents from Italy (31%) and the rest of Europe (36%), followed by North America, Asia, Africa, Latin America, and the Middle East.

“Introducing AI education at the high school level is crucial,” Ocleppo added. “This ensures students are better prepared for university studies and equipped with foundational knowledge beyond superficial or recreational use of this technology. Today’s rapidly advancing AI landscape requires university faculty to stay up-to-date with new developments and emerging applications. This connection between teaching and innovation is critical, as traditional methods quickly become obsolete. Transferring these cutting-edge skills to students is not just beneficial but essential.”

Key AI Predictions within the growth and sustainability sector

As Artificial Intelligence’s energy appetite continues to take its toll on the environment, data centres will face a pivotal moment next year. Facilities will start to realise that there simply isn’t enough energy. Companies that rushed into AI under competitive urgency now face decisions about the cost and sustainability of the technology; some in-house AI setups cost upwards of $300,000 in hardware alone. The predictions below cover AI growth and sustainability, data centre accountability, tech-savvy Gen Zs, the rise of AI agents, the role of digital twins in data centres and more.

Entries are now open for the 2025 AI Awards. Check out all the award categories here.

Balancing AI Growth and Sustainability – Mark Fenton, Product Engineering Director at Cadence

In 2025, data centres will face mounting pressure to reconcile AI’s surging energy requirements with strict sustainability goals, sparking an industry-wide rethink on AI applications. The infrastructure required to deliver on AI is poised to drive a 160% increase in data centre power demand. This challenge is creating a pivotal moment for data centres to support high-density compute loads while advancing their environmental commitments. Companies will face a new crossroads. Many that initially rushed into AI, driven by competitive urgency, will now reevaluate its financial and energy impact, with some in-house setups costing up to $300,000 in hardware alone. This shift is likely to push organizations toward selective, high-value AI applications that provide stronger operational returns, including within data centres themselves. However, demand will remain high, stretching capacity to its limits. As such, tools like digital twins will be essential for data centres to meet AI goals sustainably, allowing operators to proactively manage power, integrate renewable sources, and optimize cooling to meet the demands of AI’s GPU usage.
With these advancements, data centres can help organizations make AI investments both impactful and environmentally responsible.

Data Centres’ AI-Era Revamp – David King, Senior Principal Product Engineer, Cadence

Data centres will face a pivotal moment next year as energy usage, especially to power AI, continues to rise. Facilities will start to realize that there really isn’t enough energy. While newly built, AI-optimized facilities can be better suited to these requirements, retrofitting older data centres to deliver the additional power and cooling needed is costly and complex. This pressure is prompting operators both to plan infrastructure upgrades and to invest in purpose-built facilities designed to power AI. Amid these changes, digital twins will be crucial for enhancing efficiency and sustainability in both new and existing data centres. By simulating the physical facility environment, digital twins allow operators to optimize power distribution, improve cooling techniques, and test energy changes, helping to maximize resource use and reduce stranded capacity. This technology not only makes the most of existing space, it also supports sustainable growth, setting a new standard for energy-efficient, AI-capable data centres.

EU’s 2025 Energy Efficiency Directive Will Prompt Data Centre Accountability – Mark Fenton, Product Engineering Director at Cadence

The EU Energy Efficiency Directive’s new reporting requirements, starting in May 2025, will mark a significant step in measuring energy and water usage across the data centre industry on a wide scale. By establishing initial data points, the directive will enable ongoing comparison of industry performance, potentially paving the way for new regulations or targets that promote greater energy efficiency.
The results will likely reveal a diverse landscape in which some companies, particularly in tech, show measurable progress while others lag behind, exposing varying levels of commitment to environmental goals. What’s more, with public interest in data centres’ resource use on the rise, these findings could invite heightened scrutiny, especially if the data points to excessive energy consumption or local grid strain. This may lead to “naming and shaming” by the media, heightening societal pushback even further. However, the EU’s transparency-driven approach and heightened scrutiny should encourage data centres to adopt greener practices and utilize tools like digital twins, both to meet compliance standards and to mitigate public backlash. Ideally, this will set a new benchmark for sustainability and accountability across the sector.

Gen Z’s Purposeful Mindset Will Help Close the Skills Gap – David King, Senior Principal Product Engineer, Cadence

From 2025 onwards, the data centre industry will see a substantial generational shift as seasoned professionals retire and younger, tech-savvy talent brings in specialized skills in AI, automation and sustainability. Traditionally focused on physical infrastructure, data centre roles are evolving to require advanced technical skills, such as proficiency with simulation software like digital twins. These tools are crucial in modern data centres for optimizing energy use, airflow, and resource allocation, marking a proactive shift toward efficiency and sustainability.