Technology

The state of Artificial Intelligence in 2024

By Maxime Vermeir, Senior Director of AI Strategy, ABBYY

This year, we saw innovation teams experimenting with a variety of automation tools powered by artificial intelligence (AI). As enterprises explore the business value of generative AI built on large language models (LLMs), AI adoption continues to widen. According to recent research, a large majority (89%) of IT executives say they have AI strategies in place, with 37% having a roadmap spanning three to five years.

Organisations were surrounded by AI hype in 2023 but have since had time to cut through the noise and determine the best business use cases for AI in their operations. This led to a realisation that, despite their profound potential to generate value, the most powerful general-purpose AI tools can be unscalable, costly, and resource-intensive, rendering them unsuitable for many enterprise automation goals. Even so, enterprises that don’t find a way to apply specialised AI solutions to business goals will find themselves falling behind their competitors.

In 2024, there is a need for purpose-built AI that will solve specific pain points effectively, efficiently, and in a scalable and resource-conscious way.
Key challenges and focuses for businesses in 2024 will be strategically integrating AI into organisations, measuring the success of AI implementation, and managing the ethical and legal risks of AI while staying ahead of the innovation curve.

AI strategies

To harness the power of AI, businesses need to anchor their AI strategies around clear, purpose-driven goals that align with business outcomes. Here are three steps businesses should follow to establish effective AI strategies:

  1. Identify Clear Objectives:
    • What business objectives do you want to achieve with AI? Whether it’s improving operational efficiency, enhancing customer experience, or driving innovation, it is crucial to clearly define your goals and the metrics by which you’ll measure success.
  2. Choose Specialised AI Solutions:
    • The versatility of generalised AI can seem appealing, but specialised, contextual AI solutions tailored to specific business challenges are more likely to deliver accurate and actionable insights with less cost and risk.
  3. Invest in Quality Data:
    • Relevant, high-quality data is necessary for successful AI implementations. Ensure your data is clean, organised, and representative of the real-world scenarios your AI solutions will encounter.

Measuring success of AI projects

From ABBYY’s perspective, the crux of measuring the success of AI initiatives lies in the tangible impact they have on business processes, rather than just technical metrics. Metrics like F-scores can provide useful insights into the performance of AI models, but they don’t necessarily translate to how effective those models are in the real world. Success metrics should always come back to how AI enhances business operations.

The three main metrics we prioritise are those that reflect direct business value (a sketch of how they might be computed follows the list below). These include:

  • Straight-Through Processing Rate (STPR): An increase in STPR means that more transactions or processes are being completed without manual intervention thanks to AI
  • Time Saved: Efficiency gains can be estimated by measuring the time saved by implementing AI solutions
  • Return on Investment (ROI): This captures the financial value of AI initiatives and demonstrates their cost-effectiveness and value add to the business. In 2023, 57% of respondents anticipated an ROI of at least twice the cost of investment, while only 43% achieved it.
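
To make these concrete, here is a minimal sketch of how the three metrics might be computed from process logs. All figures and field names are illustrative assumptions, not ABBYY’s actual method.

```python
# A minimal sketch of computing STPR, time saved, and ROI from process data.
# All figures and field names below are hypothetical illustrations.

def straight_through_processing_rate(total_cases: int, manual_cases: int) -> float:
    """Share of cases completed without manual intervention."""
    return (total_cases - manual_cases) / total_cases

def time_saved_hours(automated_cases: int, avg_manual_minutes: float) -> float:
    """Estimated hours saved by automating formerly manual handling."""
    return automated_cases * avg_manual_minutes / 60

def roi(value_generated: float, total_cost: float) -> float:
    """Net gain from the AI initiative as a ratio of its cost."""
    return (value_generated - total_cost) / total_cost

stpr = straight_through_processing_rate(total_cases=10_000, manual_cases=1_800)
hours = time_saved_hours(automated_cases=8_200, avg_manual_minutes=12)
payoff = roi(value_generated=500_000, total_cost=200_000)
print(f"STPR: {stpr:.1%} | time saved: {hours:,.0f} h | ROI: {payoff:.0%}")
```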

By focussing on these metrics, businesses can ensure their AI initiatives are delivering real value, driving process efficiency, and contributing to the bottom line. This approach can help businesses achieve meaningful enhancements in how they operate and deliver value.

Addressing the environmental impact of AI

Businesses will continue to grapple with the trade-off between generative AI capabilities and their ecological impact, such as immersive search capabilities that consume large amounts of energy. Using generative AI today to search and summarise data consumes roughly ten times the energy of a conventional search, which is unsustainable amid the global effort to limit the rise in average planetary temperature to 1.5 degrees Celsius. There are alternative AI models that combine robust machine learning and natural language processing with business rules for highly specific purposes. In transportation and logistics, for example, a highly accurate AI model trained on thousands of bills of lading can extract data from the 44 million bills of lading issued every year, each processed by at least nine stakeholders across twelve touchpoints.
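
As a rough illustration of what that tenfold figure implies at scale, here is a back-of-envelope sketch. The per-query energy value and query volume are assumed placeholders; only the 10x ratio comes from the text.

```python
# Back-of-envelope illustration of the 10x energy claim. The per-query
# figure and query volume are assumptions for illustration; only the
# ratio of 10 comes from the article.
STANDARD_SEARCH_WH = 0.3                        # assumed Wh per conventional search
GENERATIVE_SEARCH_WH = 10 * STANDARD_SEARCH_WH  # the tenfold figure cited above

queries_per_day = 1_000_000                     # hypothetical daily search volume
extra_kwh = (GENERATIVE_SEARCH_WH - STANDARD_SEARCH_WH) * queries_per_day / 1_000
print(f"Extra energy if every query went generative: {extra_kwh:,.0f} kWh/day")
```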

The growing influence of regulation

As AI technologies continue to permeate various sectors, regulatory bodies will likely ramp up scrutiny to ensure ethical use and data privacy. This will also include measures to ensure that claims made by AI vendors are accurate and verifiable. These frameworks and regulations will sensitise users to the potential risks that shadow the possibilities and will bring business users back to the reality of integration challenges.

With more demand for transparency among businesses and regulators in AI decision-making, advancements in Explainable AI (XAI) will gain momentum, as it helps to demystify complex AI models and foster trust among users and stakeholders.

Embracing a human approach to AI

C-suite leaders have already begun to discover the hidden costs and ecological impact of generative AI, lifting the veil of hype to reveal practical challenges of integrating AI applications into their organisation’s infrastructure. Still, artificial intelligence has proven itself as a transformative tool that will be instrumental in modernising businesses and driving operational excellence.

To overcome these challenges, business leaders need to embrace a more human understanding of their data and processes. This involves bridging the gaps in understanding between AI teams and the business side of the organisations they serve. By fostering collaboration between AI specialists and professionals with actionable, hands-on business knowledge, enterprises can ensure that AI is driving operational excellence in the right areas and yielding truly actionable insights. Businesses need to carry this approach through impact assessments, strategising, implementation, and measuring success.


Business

Social Engineering Tactics Are Evolving, Enterprises Must Keep Pace to Mitigate

By Jack Garnsey, Subject Matter Expert – Email Security, VIPRE Security Group

Social engineering attacks by cyber criminals are not only relentless, they are also rapidly evolving, with new tactics constantly being deployed. However, phishing remains the preferred social engineering tactic. This is demonstrated by research that processed nearly two billion emails. Of these, 233.9 million were malicious – showing that cybercriminals are increasingly adopting malicious links that require ever deeper investigation to uncover. This is likely because current signature-based investigation tools are now so effective and ubiquitous that threat actors are forced to either engineer a way around them or get caught.

Furthermore, the research attributes the detection of these malicious emails almost evenly to content (110 million) and to links (118 million). A further 5.44 million malicious emails were discovered through their attachments.

Common approaches to social engineering

Criminals are using all manner of approaches to social engineering. They are using spam emails to commit fraud, especially business email compromise. With AI technology such as ChatGPT, phishing emails are becoming even harder for people to identify: the tell-tale signs of poor sentence construction, spelling mistakes, lack of subject context and so on no longer exist.

The PDF attachment as an attack vector is gaining favour with criminals. The majority of devices and operating systems today have an integrated PDF reader, and this universal compatibility makes PDFs an ideal weapon of choice for attackers looking to cast a wide net. One reason is that malicious hackers can make us think there is payment-related information inside. Once opened, the PDF may contain a link to a malicious page or release malware onto the PC. Criminals are using malicious PDFs as a vehicle for QR codes too.

Stealing passwords is another commonplace phishing technique. Many of us will recognise emails urgently alerting us to update the password for the applications we use on a daily basis in our professional and personal lives. An example is a password update request from Microsoft – “Your Microsoft Office 365 password is set to expire today. Immediate action required – change or keep your current password.”  In fact, Microsoft was the most spoofed name in Q3 of 2023.

Heard of callback phishing? Cybercriminals send an email to an unsuspecting employee, posing as a service or product provider. Instilling urgency, these emails prompt the individual to “call back” on a phone number. When users call, they are duped out of their information over the phone, or they are given “sign in” links to verify information and end up losing sensitive data in the process. The absence of malicious files in either the email content or its attachments makes these messages far easier to slip past detection.

A relatively new trend that is gaining momentum is the use of LinkedIn Slink for URL redirection. To allow its platform users to better promote their own ads or websites, LinkedIn introduced LinkedIn Slink (“smart link”). This “clean” LinkedIn URL enables users to redirect traffic directly to external websites while more easily tracking their ad campaigns. It is clearly a useful feature; the problem is that these links slip through the net of many security protocols and so have become a favourite of social engineers.
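
Because the danger of such a link lies in where it ultimately leads, one practical check is to resolve the redirect chain before trusting the URL. Below is a minimal sketch; the Slink-style address is a made-up placeholder.

```python
# A minimal sketch: follow a link's redirect chain to see where it really
# leads before trusting it. The URL below is a made-up placeholder.
import requests

def resolve_final_url(url: str, timeout: float = 5.0) -> str:
    """Follow HTTP redirects and return the final destination URL."""
    # stream=True avoids downloading the body; we only want the final URL.
    with requests.get(url, allow_redirects=True, timeout=timeout, stream=True) as resp:
        return resp.url

print(resolve_final_url("https://www.linkedin.com/slink?code=example"))
```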

Education, education, education

All hands on deck, as the saying goes! In that vein, a comprehensive strategy is needed to ensure protection – from timely patching, archiving or backing up data, monitoring and auditing access controls, and penetration testing, through to properly configuring and monitoring email gateways and firewalls, and running phishing simulations.

However, underpinning all this must be regular security education and awareness training to ensure that employees stay up to date and vigilant against the newest social engineering techniques that criminals deploy to defraud them. It helps to embed a cybersecurity-conscious culture and security-first attitude in the workplace.

A key focus of the education and training programme must be on motivating employees to take an active role in threat detection and protection. Good cyber hygiene knowledge gives employees peace of mind that their organisation and job are secure, along with the means to protect their friends and loved ones.

Employees need regular training reinforcement throughout the year if they are expected to remember and apply best practices. Single annual courses or classroom sessions are not sufficient, given that people forget training shortly after these sessions. If adult-learning best practices and techniques, such as spaced learning, are not implemented as part of a security awareness training programme, it will not succeed.

Additionally, targeted training must be designed for role types – far too often, a broad-brush approach to cyber training and education is undertaken, making it a tick-box exercise. For example, a company’s risk and compliance team needs cyber training that takes into account the demands of regulatory bodies, business development teams need to know all about incident reporting, the product development department must be trained on how best to secure the software supply chain, security teams must be trained on advances in threat detection, end users must understand how to spot a phishing email or deepfake, and so forth. Training that is tailored specially for business leaders is equally important.

There is no end in sight when it comes to social engineering attacks. End users of technology are constantly under attack; vigilance, supported by security education and the knowledge to intuitively spot social engineering, is a critical defence – be that against deceitful emails, malicious QR codes and links, or any other such techniques.


Business

Navigating the Ethical Landscape: A Guide for Small Businesses Embracing AI Technologies

By Stefano Maifreni, COO and founder of Eggcelerate

Artificial intelligence (AI) technologies have revolutionised how businesses operate, offering countless benefits and opportunities for growth. From streamlining processes to improving customer experiences, AI has become an essential tool for small businesses. However, with these advancements come ethical challenges that must be addressed. This article will explore the ethical maze small companies face when leveraging AI technologies, delving into the key considerations that help them make ethical choices and maintain transparency throughout the process.

Ethical Considerations in AI Implementation

Embracing AI technologies responsibly can bring numerous benefits to small businesses. AI can automate repetitive tasks, freeing time for employees to focus on more complex and creative work. It can also enhance decision-making processes by analysing large amounts of data and generating valuable insights. Additionally, AI-powered chatbots and virtual assistants can improve customer service, providing prompt and personalised support. By adopting AI technologies responsibly, small businesses can increase efficiency, productivity, and customer satisfaction.

Ethical AI refers to the responsible and unbiased use of artificial intelligence technologies. It involves ensuring that AI systems are designed and deployed in a manner that respects human values, privacy, and fairness. Ethical AI also emphasises transparency and accountability, making it crucial for small businesses to align their AI practices with ethical principles.

When implementing AI technologies, small business owners must consider various ethical factors. One crucial consideration is the potential bias in AI algorithms. Machine learning models are trained on historical data, which can unintentionally reflect societal biases. Small businesses should ensure that their AI systems are audited and monitored for fairness. They should also strive to diversify their data sources to minimise biased outcomes and be transparent about their algorithmic decision-making processes.
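
As one illustration of what such an audit might look like, the sketch below compares approval rates across groups – a demographic-parity check. The decision log is invented, and the 0.8 threshold reflects the common “four-fifths” rule of thumb rather than a legal standard.

```python
# A minimal sketch of one fairness check: comparing selection rates across
# groups (demographic parity). The decision log is hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(log)
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")  # below ~0.8 often warrants review
```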

Data privacy is another significant ethical concern. Small businesses must handle customer data with utmost care, ensuring it is collected and used in compliance with relevant regulations, such as the General Data Protection Regulation (GDPR). Businesses should obtain informed consent from users, clearly communicate how their data will be used, and implement robust security measures to safeguard sensitive information. Prioritising data privacy builds customer trust and demonstrates a commitment to ethical practices.

Transparency in AI decision-making is also crucial. Small businesses should communicate when AI is utilised and explain automated decisions whenever possible. This transparency helps build customer trust and ensures that AI is not perceived as a black box but as a tool that operates ethically and aligns with the business’s values.

Balancing Automation with Human Touch: Ensuring Ethical AI Practices

Small business owners should follow certain best practices to embrace AI technologies responsibly. Firstly, they should conduct a thorough ethical analysis before implementing any AI system. This analysis involves identifying potential risks and biases and considering the impact on stakeholders.

While AI provides automation and efficiency, balancing technology with the human touch is essential. Small businesses should consider the impact of AI on their workforce and ensure that AI systems support employees rather than replace them. This can involve upskilling employees to work collaboratively with AI technologies or reallocating people to more meaningful tasks that require human ingenuity and empathy.

Small businesses should also establish clear guidelines and policies for AI usage, ensuring that employees understand the expected ethical standards.

Regular and ongoing training is crucial for employees involved in AI implementation. It ensures they are equipped with the knowledge and skills to navigate the ethical complexities of AI. Small businesses should also encourage a culture of transparency and accountability, where employees feel comfortable raising ethical concerns and discussing potential biases or risks associated with AI technologies.

Building Trust and Transparency with Customers through Ethical AI

Ethical AI practices play a significant role in building trust and transparency with customers. Small businesses should communicate their AI usage and data practices to customers in a clear and accessible manner through privacy policies, consent forms, and public statements that outline the steps taken to protect customer data and ensure ethical AI practices.

Additionally, small businesses should be responsive to customer concerns and feedback related to AI technologies. Addressing customer questions and providing mechanisms for redress of automated decisions can foster trust and demonstrate a commitment to ethical practices. By prioritising transparency, small businesses can differentiate themselves in the market and build long-lasting customer relationships.

Resources for Small Business Owners to Learn About Ethical AI

To navigate the ethical complexities of AI, small business owners can access various resources. Online courses and tutorials, such as those offered by universities and technology companies, provide valuable insights into ethical AI practices. Industry-specific conferences and webinars offer opportunities to learn from experts and share experiences with other small business owners.

Furthermore, engaging with professional organisations and communities focused on ethical AI can provide a supportive network for small business owners. These communities often offer forums for discussion, access to research papers, and guidance on best practices. By actively seeking out resources and staying updated on the latest developments in ethical AI, small business owners can make informed decisions and drive positive change within their organisations.

Conclusion: Embracing Ethical AI for a Sustainable Future

As AI technologies continue to evolve, small businesses must navigate the ethical maze to embrace these technologies responsibly. By understanding the ethical implications of AI algorithms, prioritising data privacy, and following best practices, small businesses can leverage AI to drive growth and innovation while maintaining transparency and accountability.

Building trust and transparency with customers through ethical AI practices is essential for small businesses to succeed in the long run. Small business owners can ensure they are making ethical choices and fostering a sustainable future by considering the impact on stakeholders, balancing automation with the human touch, and integrating ethics into AI-driven decision-making.

With the right mindset, resources, and commitment to ethical AI, small businesses can embrace AI technologies’ potential and become ethical leaders in their respective industries. Let us navigate the ethical maze together and unlock the true potential of AI for a better future.


Business

Top four compliance trends to watch in 2024

Robert Houghton, founder and CTO at Insightful Technology, discusses the top trends financial institutions should look out for in 2024

As financial institutions gear up for the next 12 months, it’s time to reflect on the key trends and developments, considering how they’ll shape the year ahead. While the financial landscape is constantly evolving, we believe there are four important issues that will shape 2024:

  1. AI-powered compliance

AI and automation are the buzzwords of the year, significantly changing our work landscape and set to transform our economy and social norms.

Take generative AI. It’s a game-changer for financial institutions in streamlining and securing compliance processes. Imagine a trading floor where every call and message is monitored. Certain phrases or words will trigger an automated alert to the compliance team.

These alerts are typically sorted into three tiers of concern. A low-level alert might be triggered by a trader swearing in a conversation. These are common occurrences that result in hundreds of daily alerts, usually reviewed manually by offshore companies. This traditional review process is time-consuming, open to mistakes, and inconsistent.

Enter generative AI, with its dual capabilities. Firstly, it can spot the misdemeanour in real-time. Secondly, it can understand the context to see what risk it poses. If someone swore, perhaps because they were quoting a strongly worded news story, it’s not a risk. Generative AI can tell the difference between that and someone using bad language in anger, thereby reducing false positives.
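
Below is a minimal sketch of the pattern just described – keyword triggers, tiered alerts, and a contextual second pass. The phrases, tiers, and classifier stub are hypothetical illustrations, not any vendor’s actual rules; in production, the contextual step would hand the surrounding conversation to a generative model.

```python
# A minimal sketch of keyword-triggered, tiered compliance alerts with a
# contextual second pass. Phrases, tiers, and the classifier stub are
# hypothetical; a real system would hand context to a generative model.
from dataclasses import dataclass

TRIGGER_PHRASES = {
    "guaranteed return": "high",
    "keep this off the record": "high",
    "damn": "low",   # profanity: frequent, low-level alerts
}

@dataclass
class Alert:
    message: str
    phrase: str
    tier: str

def scan_message(message: str) -> list[Alert]:
    """Raise one alert per trigger phrase found in the message."""
    lower = message.lower()
    return [Alert(message, p, t) for p, t in TRIGGER_PHRASES.items() if p in lower]

def contextually_risky(alert: Alert) -> bool:
    """Stub for the generative-AI step: is the phrase risky in context
    (anger or intent) or benign (e.g. quoting a news story)?"""
    return alert.tier == "high"   # placeholder logic only

for alert in scan_message("Damn, that fund promises a guaranteed return."):
    action = "escalate" if contextually_risky(alert) else "log and dismiss"
    print(f"[{alert.tier}] '{alert.phrase}' -> {action}")
```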

In the coming year, financial institutions will look at how these AI and automated decision-making processes can be explained, recorded and saved. The system can’t be a “black box” that holds the fate of a trader within it. By creating an audit-friendly trail, businesses will improve their chances of avoiding regulator penalties.

  2. Smarter monitoring for the hybrid era

In today’s work-anywhere culture, monitoring employees in regulated sectors like finance is key to managing risk.

But it shouldn’t turn into a nine-to-five spying game. It’s a mistake to treat remote work as if it mirrors an office setting; this can damage trust. Instead, monitoring should aim to understand work patterns, just like a heart monitor detects irregularities, signalling when the compliance team should take notice.

In the shadow of US banks facing over $2 billion in fines for unchecked use of private messaging and personal devices[i] – with the SEC imposing a $125 million penalty earlier this year[ii] – the UK’s financial sector is on alert. The Financial Conduct Authority (FCA) is already looking into the matter and questioning banks about their use of private messaging, as the watchdog decides whether to launch a full probe.

Financial firms must be proactive and ensure their Risk Review includes the provision to document all comms channels used by affected personnel. From there, clearly documented policies can be presented to the regulator. Ensuring those policies are adhered to is an important part of the process. Firms need to maintain a delicate balance between productivity, employee wellbeing, and strict adherence to regulations in the hybrid-work era.

  3. A demand for transparency

In 2023, the banking sector was shaken by a wave of raids on giants like Société Générale, BNP Paribas and HSBC, as part of a large tax fraud probe in Europe[iii]. Over $1 billion in fines looms as the industry, still shaky from significant bank failures[iv], faces a heightened demand for transparency.

Investigations into banks are essential when there’s evidence of criminal conduct, but they must be conducted with care. Raids can expose vast amounts of sensitive data, affecting not just the banks but numerous customers. Authorities should focus only on specific accounts with clear signs of foul play rather than casting a wide net. 

The Paris raids, in particular, were an alarming example of the vulnerability of personal information, with hundreds of thousands of accounts indiscriminately scrutinised. These raids have set a dangerous standard where privacy is secondary.

As we approach next year, the financial sector must grapple with this double-edged sword: pursuing fraudsters while safeguarding individual privacy. The presumption of guilt will remain rife in the financial sector; however, it will be paired with a call for institutions and regulators to be more transparent about their activity.

  4. A ‘one source’ advantage

As the stakes for non-compliance get higher and regulators scrutinise financial firms more closely, institutions will need a more comprehensive and centralised approach to data integrity, management, access, and risk. This includes creating ‘one source’ of data: a single, authoritative source of truth that can be used for risk analysis, proof of adherence to policy, reporting, analysis, and compliance.

Currently, many banks have a patchwork of data silos, which are collections of information that are not easily accessible to the bank because they are recorded or stored differently. This makes it difficult for banks to get a complete picture of their data and to comply with regulations.

Historically, this would have required a significant investment of time and resources; modern solutions now simplify the process, offering a more efficient path. In the long run, having a single source of data will help banks reduce costs, improve compliance, and make better decisions. Those who continue to turn a blind eye to this issue will face financial penalties, operational risks, and irreparable reputational damage.
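
To illustrate the ‘one source’ idea, the sketch below normalises records from two differently structured silos into one canonical, time-ordered stream. The schemas and records are invented for illustration.

```python
# A minimal sketch of consolidating two differently structured data silos
# into one canonical record stream. Schemas and records are invented.
from datetime import datetime, timezone

def from_voice_log(row: dict) -> dict:
    """Normalise a trading-floor voice record (epoch seconds, 'trader' field)."""
    return {"channel": "voice", "user": row["trader"],
            "timestamp": datetime.fromtimestamp(row["ts"], tz=timezone.utc),
            "content": row["transcript"]}

def from_chat_export(row: dict) -> dict:
    """Normalise a chat export (ISO-8601 timestamps, 'sender' field)."""
    return {"channel": "chat", "user": row["sender"],
            "timestamp": datetime.fromisoformat(row["sent_at"]),
            "content": row["body"]}

silo_voice = [{"trader": "jdoe", "ts": 1_700_000_000, "transcript": "Buy 100 lots."}]
silo_chat = [{"sender": "jdoe", "sent_at": "2023-11-14T22:20:00+00:00", "body": "Confirmed."}]

# One authoritative, uniformly structured timeline for compliance review.
unified = [from_voice_log(r) for r in silo_voice] + [from_chat_export(r) for r in silo_chat]
unified.sort(key=lambda r: r["timestamp"])
for rec in unified:
    print(rec["timestamp"].isoformat(), rec["channel"], rec["user"], rec["content"])
```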

Overall, it’s safe to say that 2024 will continue to prove the need to strike a balance between technology and people in the bid for demonstrable compliance. As financial institutions prepare, they must consider their position on the work-anywhere spectrum, and how the combination of AI, automation, transparency and leadership can ensure they’re living by the letter of the law next year.

Roll on 2024!

[i] https://www.bloomberg.com/news/articles/2022-09-27/wall-street-whatsapp-probe-poised-to-result-in-historic-fine#xj4y7vzkg

[ii] https://www.sec.gov/news/press-release/2023-149

[iii] https://www.reuters.com/markets/europe/french-financial-prosecutors-search-bank-offices-over-dividend-stripping-2023-03-28/

[iv] https://www.nytimes.com/article/svb-silicon-valley-bank-explainer.html
