

Is AI better at making money in investment markets?

Source: Finance Derivative

By Michael Kodari, CEO of KOSEC – Kodari Securities

Can AI make better investment decisions than humans? Maybe, but the answer is a bit more complicated than that.

AI is already widely used in investment markets, and we’ve now seen all forms of ‘algorithmic trading’ used by large-scale hedge funds and specialist fund managers. In fact, estimates suggest that algorithmic trading accounts for more than 60 per cent of trading in US equity markets alone. The Australian market, being smaller and less liquid, most likely sees a lower percentage of algorithmic trading activity, but it is significant nevertheless.

Despite this trend, there’s still a lot of ‘grey area’ when it comes to whether it ‘outperforms’ human decision making.

Fast, bias-free decision making

The primary benefit of relying on AI for investment or decision-making is that the trade is executed, literally, within a nanosecond of the price-sensitive information being released. This is something that a human simply cannot do. Apart from the reaction time, humans have toilet breaks, long lunches and sick days. The algorithm, on the other hand, does not.
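To make the speed advantage concrete, here is a minimal, hypothetical sketch (all names invented) of event-driven execution: the callback fires the instant a price-sensitive event arrives, with no human reaction time in the loop.

```python
# Minimal, hypothetical sketch of event-driven execution: the callback runs
# the instant a price-sensitive event arrives, with no human in the loop.
def make_handler(threshold_pct, orders):
    """Return a callback that records an order whenever the expected
    price impact of a news event exceeds the threshold."""
    def on_news_event(symbol, expected_move_pct):
        if abs(expected_move_pct) >= threshold_pct:
            side = "BUY" if expected_move_pct > 0 else "SELL"
            orders.append((side, symbol))  # stands in for a broker API call
    return on_news_event

orders = []
handler = make_handler(threshold_pct=1.0, orders=orders)
handler("XYZ", expected_move_pct=2.5)   # large positive surprise: buy
handler("XYZ", expected_move_pct=-0.3)  # below threshold: no trade
```

A real system would, of course, sit behind a low-latency market-data feed; the point is simply that the decision rule executes the moment the event arrives.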

There are other advantages that an algorithm, or an investing robot or “bot”, has over humans. A robot governed by an algorithm shows no emotional bias. Most humans, when making trading and investment decisions, are driven by emotions like fear, greed, and prejudice. In fact, a recent survey found that 66% of investors have regretted an impulsive or emotionally charged investing decision. Alarmingly, 32% admit to trading while intoxicated.

Some of the biases that humans display, and that aren’t prevalent in ‘robots’, include:

  • Confirmation bias: An investor’s inclination to selectively seek out information that supports their existing views and ignore or dispute information that does not support their existing views.
  • Anchoring: This is the natural tendency of investors to attach their views to irrelevant, outdated, or incomplete information in making investment decisions.
  • Herd mentality: People are predisposed to a herd mentality. When it comes to investments, that means they often base decisions on the consensus of a larger group, rather than on what makes the most logical sense. As the world’s smartest investors know, it is not the crowd that makes money, it’s the individuals that do.
  • Loss aversion: Psychologists tell us that investors have a natural tendency to prefer avoiding a loss over realising an equivalent gain. An algorithm knows that, logically, the financial outcome is identical; humans, however, tend to avoid crystallising a loss even when doing so defies logic. A common example is investors who, when reviewing their portfolio, refuse to sell their ‘losers’ yet are quick to sell their ‘winners’. There is no logic that supports a decision on this basis.

Investment vs trading decisions

Does algorithmic trading have relevance to investment decisions, as opposed to trading decisions? Evidence suggests that many of the features and benefits of AI inherent in algorithmic, high-frequency trading are relevant to investment decision-making.

Some key benefits of ‘robots’ being used in investment decisions include the elimination of emotional conflicts, speed of execution, and the ability to analyse and assimilate vast amounts of investment-related data quickly and objectively. These data points can include macro-economic statistics like inflation, interest rates, employment numbers, commodity price movements, currency  and lead indicators of GDP like consumer confidence surveys and job advertisements. All of these factors impact share price movements in the short-term, as well as the medium- and long-term.

That being said, investment decisions have a longer time horizon and may have different objectives when taking certain factors into account, such as taxation treatment, liquidity needs, capital security and regulatory matters. In other words, investment decisions are typically based on qualitative factors while trading decisions rely more on quantitative analysis.

So, the answer to whether AI is ‘better than humans’ when it comes to long-term investing is not as clear cut as it is in share trading, where AI is already firmly entrenched.

Putting the augmented in AI

Perhaps augmented intelligence, rather than artificial intelligence, is the future role of algorithms in investment decision-making.

In this way, a degree of collaboration between artificial intelligence and human intelligence, without replacing human intelligence, may very well be the way of the future when it comes to scaling effective investment decision-making. As the saying goes, “Money never sleeps”.



Why financial institutions must prioritise contact data quality if serious about fraud prevention

Source: Finance Derivative

By Barley Laing, the UK Managing Director at Melissa

According to Nasdaq’s 2024 Global Financial Crime Report, $3.1 trillion of illicit funds flowed through the global financial system in 2023.

As a result, it’s not surprising that most in financial services are investing heavily in advanced ID verification technology to protect themselves from fraud and meet Know Your Customer (KYC) and Anti-Money Laundering (AML) regulatory standards.

However, to bolster their ID verification efforts they need to do more, and the best way is by improving customer contact data quality from the outset.

Why is contact data quality so important?

From our experience, the quality of contact data is key to the effectiveness of ID processes, influencing everything from end-to-end fraud prevention to the delivery of simple ID checks. With good data, more advanced and costly techniques, like biometrics and liveness authentication, may not be necessary.

When a customer’s contact information, such as name, address, email and phone number, is accurate, the verification process becomes more reliable. With this data, ID verification technology can confidently cross-reference the provided information against official databases or other authoritative sources, without discrepancies that could lead to false positives or negatives.

A big issue is that fraudsters often exploit inaccuracies in contact data to create false identities and manipulate existing ones. By maintaining clean and accurate contact data ID verification systems can more effectively detect suspicious activity and prevent fraud. For example, discrepancies in a user’s phone or email, or an address linked to multiple identities, could serve as a red flag for additional scrutiny. This basic capability is more important than ever as identity fraud becomes increasingly sophisticated.

Address verification is the foundation of contact data quality

Address verification – having a consistently accurate, standardised address – is usually recognised as the cornerstone of contact data quality. Once you have access to up-to-date customer addresses, it becomes much easier to match and verify identities across multiple sources.

Therefore, verifying the accuracy and legitimacy of an individual’s address should be the first step in any identity related process, with any discrepancies between a claimed address and official records highlighting a potential fraudster.

By catching these inconsistencies early ID verification technology can help mitigate risks, ensuring only legitimate users are granted access to services, protecting both their business and customers from fraud. 

Address verification also plays an important role in regulatory compliance, by ensuring that the address information provided meets KYC and AML regulatory standards.

Phone and email verification

As I’ve already touched on, it’s not all about having an accurate address: phone and email verification are also vital parts of a comprehensive ID verification process, and therefore of fraud prevention, particularly in helping organisations identify and mitigate possible fraudulent activity early on. Verifying all three contact channels together contributes to enhanced security by filtering out fake or high-risk contact information, improving the accuracy of the ID verification process.

Email verification involves analysing various factors such as the age and history of the email address, the domain and syntax, and whether the email is temporary. After all, new and poorly formatted email addresses are often tell-tale signs of fraudsters. Furthermore, the association of a single email with multiple accounts could highlight criminal activity. It’s only by checking if an email address exists and works, then examining those elements I’ve already mentioned, that organisations can identify possible high-risk indicators.
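As an illustration of the syntax and domain checks described above, the following Python sketch flags a few of these high-risk indicators. The regex, the disposable-domain list and the account map are simplified stand-ins; real verification services also check whether the mailbox actually exists and how old the address is.

```python
import re

# Sample disposable-email domains (illustrative only; real services
# maintain large, continuously updated lists).
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.com"}

# Deliberately simple syntax check; production validators are stricter.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def email_risk_flags(email, accounts_by_email):
    """Return a list of risk flags for an email address: bad syntax,
    disposable domain, or reuse across multiple accounts."""
    if not EMAIL_RE.match(email):
        return ["bad_syntax"]
    flags = []
    domain = email.rsplit("@", 1)[1].lower()
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable_domain")
    if len(accounts_by_email.get(email, [])) > 1:
        flags.append("linked_to_multiple_accounts")
    return flags

accounts = {"fraud@mailinator.com": ["acct1", "acct2"]}
print(email_risk_flags("fraud@mailinator.com", accounts))
# -> ['disposable_domain', 'linked_to_multiple_accounts']
```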

Phone verification is equally important in fraud detection. By verifying the type and carrier of the phone number, organisations can identify high risk numbers, such as those associated with VoIP services, which are commonly used in fraudulent activities.

Checking the validity, activity and geolocation of a phone number also ensures it’s not only functional, but consistent with the user’s claimed location. And like with email, a single phone number linked to multiple accounts can indicate fraudulent behaviour. 
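A similar sketch for phone numbers, using a hypothetical lookup table in place of a live number-intelligence API, shows how an invalid line, a VoIP number, or a number shared across accounts might be flagged:

```python
# Hypothetical number-intelligence data; a production system would query a
# live carrier-lookup API rather than a local table.
NUMBER_INFO = {
    "+15550000001": {"type": "voip", "active": True},
    "+15550000002": {"type": "mobile", "active": True},
}

def phone_risk_flags(number, accounts_by_number):
    """Return risk flags for a phone number: validity, VoIP type,
    and reuse across multiple accounts."""
    info = NUMBER_INFO.get(number)
    if info is None or not info["active"]:
        return ["invalid_or_inactive"]
    flags = []
    if info["type"] == "voip":
        flags.append("voip_number")
    if len(accounts_by_number.get(number, [])) > 1:
        flags.append("linked_to_multiple_accounts")
    return flags
```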

Deliver contact data accuracy with autocomplete / lookup tools  

The best way to obtain accurate customer contact data is to use autocomplete or lookup services.

With an address autocomplete tool it’s possible to deliver accurate address data in real time: as the user starts to type their address at the onboarding stage, the tool offers a properly formatted, correct address. Tools such as these are very important because around 20 per cent of addresses entered online contain errors, including spelling mistakes, wrong house numbers and incorrect postcodes, as well as incorrect email addresses and phone numbers, typically due to typing errors. Another benefit of the service is that the number of keystrokes required when entering an address is cut by up to 81 per cent. This speeds up the onboarding process and improves the whole experience.
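A toy version of the prefix lookup behind such a tool might look like the following; the address list is invented sample data, whereas real services query an authoritative postal dataset.

```python
# Invented sample reference data; real autocomplete services query a
# postal-authority address file.
ADDRESS_FILE = [
    "1 High Street, London, SW1A 1AA",
    "10 High Street, London, SW1A 1AB",
    "2 Harbour Road, Sydney, NSW 2000",
]

def suggest(prefix, limit=5):
    """Return up to `limit` reference addresses matching the typed prefix
    (case-insensitive), as an autocomplete widget would."""
    p = prefix.lower()
    return [a for a in ADDRESS_FILE if a.lower().startswith(p)][:limit]

print(suggest("1 High"))
# -> ['1 High Street, London, SW1A 1AA']
```

Returning only standardised entries from the reference file is what guarantees the captured address is correctly formatted, regardless of how the user types it.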

Similar technology can be used to deliver first point of contact verification across email and phone, so these important contact datasets can also be verified in real-time.

In summary

The success of ID verification technology, and therefore fraud prevention, hinges on the accuracy and quality of customer contact data. Having such data not only enhances fraud detection, but improves the user experience and operational efficiency. Financial institutions must make sure that data verification tools are used across address, email and phone, alongside their ID verification technology.



Fortifying Email Security Beyond Microsoft

By Oliver Paterson, Director of Product Management, VIPRE Security Group

Most organisations today are Microsoft software houses. Microsoft 365 is the go-to productivity suite, offering comprehensive tools, flexible licensing, and built-in security features. Employees live and breathe in Outlook, and so many different technologies seamlessly integrate with this indispensable communication tool to deliver productivity gains to business professionals.

However, email-borne cyber threats continue to surge. Malware delivered via email is increasing exponentially. Phishing emails carrying .eml attachments, which are often overlooked, are growing in number. Cybercriminals are resorting to email scams alongside phishing emails, and with the arrival of generative AI technologies, users are finding it increasingly challenging to spot these “expertly” written, persuasive emails.

The reason for this growth in email-led attacks? Cybercriminals are exploiting the ubiquity of Microsoft – and indeed our trust in the software. It is no wonder that Microsoft is today the most spoofed brand in phishing URLs.

Microsoft, a software powerhouse, but not an email specialist

Microsoft is undeniably a technology powerhouse, but email security isn’t its primary focus or specialty. The company has historically centered on infrastructure, operating systems, and cloud services, and email security is a small part of its vast ecosystem. For example, while the company offers features like Safe Links and Safe Attachments to protect against phishing scams, these are often limited to the priciest licenses. As a result, many organisations aren’t able to benefit from the depth of functionality that is needed for robust email protection.

The shortcomings of Microsoft’s security tiers

Microsoft offers a range of security packages for its Microsoft 365 and Office 365 suites, from E1 and E3 to the premium E5. While this tiered approach allows organisations to tailor licenses to employee roles, it also introduces vulnerabilities. Higher-tier subscriptions like E5 provide advanced security, but they’re costly. Lower-tier licenses often lack critical protections against impersonation and zero-day threats—gaps that cybercriminals eagerly exploit.

Furthermore, Microsoft’s user caps (e.g., 300 users on Business Premium) sometimes can lead organisations to make risky compromises in pursuit of cost savings. This mix-and-match strategy can result in blind spots, as lower-tier subscriptions typically lack advanced threat visibility tools, hampering investigation and response times.

Configuration conundrums

The Microsoft security portal, while comprehensive, is also complex. Take Link Protection (aka Microsoft Safe Links) as an example. This feature needs enabling in multiple locations, and with Microsoft’s routine updates, these settings can be moved, altered, or even disabled by default. Such inadvertent misconfigurations not only pose security risks but also burden IT teams with constant vigilance and reconfiguration.

Static intelligence versus real-time threats

Microsoft’s reliance on third-party security feeds means its threat intelligence is often outdated. The company’s vast and complex platform requires time-consuming updates, and with email security being just one part of its portfolio, critical updates may not always be prioritised. A delay of even a day or two is all a zero-day attack needs to succeed.

A layered approach to email security

So what can organisations do? In an era where a single email can cripple a business, firms need to bolster Microsoft 365’s standard security. By understanding its limitations and layering on specialised protection, organisations can fortify their email defenses with additional, advanced security capabilities, without breaking the bank. Given the relentless onslaught of threat actors, such caution is essential.

Capabilities such as Link Isolation and Sandboxing are vital today to protect against zero-day threats. Link Isolation renders malicious URLs harmless, while Sandboxing automatically isolates suspicious files in a virtual environment for safe analysis. These methods provide real-time monitoring and intelligence, enabling proactive defense.

No matter how advanced technology gets, it alone can’t solve everything. User awareness is key, and “in-the-moment” training trumps the typical periodic sessions for cybersecurity education. When users are immediately informed why an email or attachment was blocked, along with the telltale signs of malice, the lesson is more likely to stick.

Many organisations, and especially the smaller and growing firms, can’t afford top-tier Microsoft licenses for all employees or indeed maintain in-house IT teams to address the gaps in security capabilities. Partnering with third-party security services providers across different aspects of the function is a viable option as no single software or platform can provide all the security techniques and capabilities. This approach is not only more cost-effective but also provides the technological expertise needed for protection in today’s rapidly evolving threat landscape. Reducing reliance on a single security provider is an astute approach to minimising business risk.



The Impact of AI in the Fintech Industry: Enhancing the BNPL Experience

by Nada Ali Redha, Founder of PLIM Finance

Artificial Intelligence (AI) has transformed countless industries, and fintech is no exception. The evolution of AI technology is revolutionising how financial services operate, particularly in the Buy Now, Pay Later (BNPL) space. As the Founder and CEO of PLIM Finance—a BNPL service that specialises in the medical aesthetics industry—I have witnessed firsthand how AI can be leveraged to enhance both user experience and operational efficiency.

In the BNPL sector, AI and machine learning are essential tools for understanding and predicting consumer behaviour. BNPL providers often face the high-risk challenge of defaults, where consumers fail to make their scheduled payments. This is a critical issue for any BNPL provider, as defaults can impact the company’s profitability and reputation.

At PLIM Finance, we use AI-driven tools to manage defaults and failed payments. The power of AI in this context lies in its ability to learn from historical data and predict payment failures with remarkable accuracy. By analysing patterns in consumer spending, repayment behaviours, and other relevant factors, AI systems can forecast which payments are most likely to default. This predictive capability allows us to take proactive measures to manage and reduce defaults, safeguarding both our customers’ financial health and our own.
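As a highly simplified, hypothetical illustration of learning from historical data (not PLIM’s actual model, and with invented sample figures), one could estimate default rates per customer segment from past outcomes and flag upcoming payments in high-risk segments:

```python
from collections import defaultdict

# Invented sample history: (customer_segment, defaulted) pairs.
history = [
    ("missed_payment_before", True),
    ("missed_payment_before", True),
    ("missed_payment_before", False),
    ("clean_record", False),
    ("clean_record", False),
    ("clean_record", True),
    ("clean_record", False),
]

def default_rates(history):
    """Estimate the historical default rate for each customer segment."""
    totals, defaults = defaultdict(int), defaultdict(int)
    for segment, defaulted in history:
        totals[segment] += 1
        defaults[segment] += int(defaulted)
    return {s: defaults[s] / totals[s] for s in totals}

rates = default_rates(history)
# Flag segments whose historical rate crosses a review threshold, so
# upcoming payments in those segments get proactive attention.
at_risk = {s for s, r in rates.items() if r >= 0.5}
```

A production system would use far richer features and a proper statistical model, but the principle is the same: past repayment behaviour informs the predicted risk of future payments.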

While we do not currently use AI to assess creditworthiness at PLIM Finance, AI’s potential in real-time risk assessment is unquestionable. Traditional credit assessment methods rely on static data, such as credit scores and income statements, which may not always reflect a consumer’s current financial situation. AI, however, can offer a more dynamic and holistic approach.

AI-driven systems can continuously analyse a variety of data sources, including transaction histories, spending patterns, and even social behaviours, to build a more comprehensive risk profile for each customer. This enables BNPL providers to make more informed lending decisions, tailoring financing options that align with each user’s ability to repay. Although PLIM has yet to implement AI in creditworthiness assessment, we recognise its potential to improve decision-making processes over traditional methods.

AI has a crucial role in combating fraud within the financial services sector, including BNPL platforms. Fraud detection is a multi-faceted challenge that requires constant vigilance and real-time analysis. AI is uniquely equipped to tackle this problem due to its capacity for processing vast amounts of data quickly and identifying suspicious patterns or anomalies that could indicate fraudulent activity.

At PLIM Finance, we leverage AI’s ability to apply collective data learning to make real-time decisions, thus reducing the likelihood of fraudulent activities going unnoticed. For instance, AI can detect unusual spending patterns or behaviours that deviate from a user’s normal financial activity, triggering alerts for further investigation. This proactive approach has proven to be highly effective in minimising financial losses and ensuring a safer environment for our users.
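A bare-bones version of this kind of deviation check (illustrative only, not PLIM’s system) can be written as a z-score test against a user’s usual spending:

```python
import statistics

def is_anomalous(amounts, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates from the user's usual
    spending by more than z_threshold standard deviations."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts) or 1.0  # guard against zero spread
    return abs(new_amount - mean) / stdev > z_threshold

usual = [42.0, 38.5, 55.0, 47.25, 40.0]  # invented spending history
print(is_anomalous(usual, 49.0))    # within the normal range -> False
print(is_anomalous(usual, 900.0))   # far outside the range -> True
```

Real fraud engines combine many such signals (merchant, geography, device, velocity) rather than amount alone, but each flagged deviation works the same way: it triggers an alert for further investigation rather than an automatic block.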

One of the most impactful benefits of AI in the BNPL space is the enhancement of customer engagement and satisfaction. AI allows companies to offer personalised, tailor-made services that resonate with each consumer’s specific needs. In the context of PLIM Finance, AI helps us recommend financing options based on individual preferences and past behaviours, streamlining the user’s journey.

Higher customer satisfaction often translates into increased loyalty and trust in the brand. By utilising AI to provide relevant recommendations and support, we can meet our customers where they are in their financial journey, helping them make informed decisions. This, in turn, creates a positive user experience that distinguishes our services from those of traditional lending institutions.

Despite its numerous benefits, implementing AI in BNPL services is not without challenges, especially concerning data privacy, algorithmic fairness, and transparency. One of the primary concerns in any AI application is bias in the data. AI systems learn from historical data, which may not be entirely representative of the diverse range of consumers who use BNPL services. Until we can source data from a wide variety of demographic and socioeconomic backgrounds, there is a risk that AI-driven decisions could inadvertently favour certain groups over others.

Transparency in AI decision-making is another ethical consideration. Customers need to trust that their data is being used responsibly and that AI algorithms are making fair, unbiased lending decisions. To address these concerns, it is crucial to maintain transparency about how AI models are built, what data they use, and how decisions are made. Additionally, complying with data privacy regulations, such as the General Data Protection Regulation (GDPR) in Europe, is essential to protect consumer rights.

AI’s role in the BNPL industry will continue to evolve as technology advances and more data becomes available. At PLIM Finance, we are excited about the future possibilities that AI presents, from more accurate risk assessment to enhancing customer satisfaction. By continuously improving our AI-driven tools and addressing the ethical challenges associated with their use, we aim to create a more inclusive, secure, and user-friendly BNPL experience.

In conclusion, the impact of AI in the fintech industry, particularly in the BNPL space, is profound. It offers solutions to key challenges, including managing defaults, fraud detection, and customer engagement, all while providing an opportunity to enhance the overall user experience. However, as we embrace these technological advancements, it is equally important to navigate the ethical concerns thoughtfully, ensuring that AI serves as a tool for positive financial inclusion.


Copyright © 2021 Futures Parity.