Technology

A data-centric approach to authorising customers’ online transactions

Source: Finance Derivative

Shagun Varshney, Signifyd Senior Product Manager, Payment Solutions

As online shopping continues to grow, so too does the level of fraudulent orders. But often, the most costly and damaging part of fraud for merchants is not the fraud itself, but the valid customer orders that are mistaken for fraud and are rejected by the merchant or bank – research suggests around 30% of declined orders are false declines.

Merchants face a constant trade-off between processing orders that may turn out to be fraudulent and declining orders that seem suspicious but are genuine, damaging relationships with real customers. In peak season, this becomes even more challenging as order volumes increase, along with fraudulent activity.

Against a backdrop of upcoming SCA regulation changes, supply chain issues and increasing customer demand in the lead-up to Christmas, retailers can’t afford to lose transactions and damage relationships with customers.

This perfect storm calls for a new approach to risk management, in which retail fraud teams focus on optimising the business: maximising the number of orders approved and supporting newer ecommerce channels, such as click-and-collect.

How the payment ecosystem works

Online payments have become so lightning-quick and seamless (for the most part) that it can be surprising to learn how many hoops a transaction has to jump through in order to be authorised and settled. As soon as a customer clicks “buy,”  a whole series of digital cogs begin to turn, each of which can put the brakes on a transaction. It begins with the payment gateway:

Payment gateway: Payment gateways are the card machines of the internet: when a customer clicks “buy” in your online store, they are taken to a payment gateway to enter their payment details. The payment gateway moves the cardholder and transaction information among the different players. And it lets the customer know whether the purchase has been authorised.

Acquirer: A bank that works for the merchant, processing credit card transactions by routing them through the networks run by card companies such as Mastercard or Visa to the cardholder’s bank, or issuer. Acquirers sometimes look to third parties to help with processing payments.

Credit card network: The acquiring bank and issuing bank communicate with one another via a credit card network. Visa and Mastercard are examples of credit card networks.

During a transaction, the credit card network will relay authorisation and settlement messages between the acquiring and issuing banks, charging a small fee to each. Some credit card networks are also issuing banks (e.g. American Express) but most are not.

Issuer: The issuing bank is the financial institution which provides the customer’s bank account or credit card. An issuing processor sits in front of the issuing bank and handles authorisation requests from the credit card network on its behalf. It then authorises and settles the transaction.
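
To make the flow above concrete, the sketch below traces a single authorisation request from gateway to issuer and back. It is a deliberately simplified illustration: the class and function names are invented for this example and do not correspond to any real payments API, and the issuer’s decision is reduced to a toy spending limit.

```python
# A simplified sketch of the authorisation flow described above.
# All names are illustrative; this is not a real payments API.
from dataclasses import dataclass


@dataclass
class AuthorisationRequest:
    card_number: str   # entered by the shopper at the payment gateway
    amount_minor: int  # amount in minor units, e.g. pence
    currency: str
    merchant_id: str


def issuer_authorise(request: AuthorisationRequest) -> bool:
    """Issuing bank/processor: checks funds, card status and risk signals."""
    return request.amount_minor <= 50_000  # toy rule: approve up to 500.00


def card_network_relay(request: AuthorisationRequest) -> bool:
    """Card network (e.g. Visa, Mastercard): relays the request to the issuer."""
    return issuer_authorise(request)


def acquirer_process(request: AuthorisationRequest) -> bool:
    """Acquirer: routes the merchant's transaction into the card network."""
    return card_network_relay(request)


def gateway_checkout(request: AuthorisationRequest) -> str:
    """Payment gateway: collects card details and reports the outcome."""
    return "authorised" if acquirer_process(request) else "declined"


print(gateway_checkout(AuthorisationRequest("4111111111111111", 1299, "GBP", "shop-42")))
```

Each hop in this chain is a point where a transaction can be slowed or declined, which is why the next section matters: to the shopper, a decline at the issuer looks identical whether the cause was genuine fraud or a false positive.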

Why false declines occur

Banks and payment companies decline payments for a host of reasons, some of them quite reasonable. Most often a payment is turned down because a card’s credit limit isn’t sufficient to make the purchase. Transactions are also scotched if card information is entered incorrectly — say the CVV code offered is wrong — or if the card or information provided is outdated.

Payments are also declined to protect both the consumer and the merchant. If a bank believes a lost or stolen card is being used, it will decline the transaction. Technical hiccups, such as an outage at the issuing bank, can also cause a decline.

While protecting customers and merchants is all well and good, problems arise when banks mistake a good order for a fraudulent one. These payment rejections are referred to as false declines.

The good news is the majority of declines are not due to nefarious activity and are therefore recoverable. But maximising your authorisation rate – i.e. the percentage of customer payments you take which are approved and settled – can still be a real balancing act.
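
As a quick illustration of the metric (the figures below are invented for the example), the calculation itself is straightforward; the hard part is moving the numerator without letting fraud through.

```python
# Authorisation rate = approved and settled payments / attempted payments.
# Figures are made up purely to illustrate the calculation.
attempted = 10_000
approved_and_settled = 9_200

authorisation_rate = approved_and_settled / attempted
print(f"Authorisation rate: {authorisation_rate:.1%}")  # 92.0%
```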

A data-centric approach to improving authorisation rates

  1. Provide more data. Large issuers such as Capital One and Amex have reported that submitting additional merchant-side data led to a 1% to 3% increase in authorisation rates and significantly reduced false declines. Providing more merchant-side data to issuing banks and payments companies gives them more evidence that a transaction is legitimate.
  2. Use quality fraud tools. Effectively managing online fraud carries benefits beyond the obvious. Merchants lose less revenue through bad orders, can confidently ship more good orders, and build a reputation with the financial institutions. Retailers that turn to highly effective machine learning and artificial intelligence driven solutions send cleaner traffic to the banks, reinforcing the idea that their orders are highly likely to be legitimate. Conversely, retailers that send a relatively high percentage of fraudulent transactions to banks will find those banks broadening the set of transactions they decline. It becomes something of a death spiral for revenue.
  3. Authenticate payments when required. Besides deploying innovative fraud solutions, European merchants need to be deliberate in the ways they authenticate customers in the era of PSD2 and strong customer authentication (SCA). The key to success rests in intelligently managing exemptions and exclusions when deciding the most efficient route to meeting new payment regulations. Relying wisely on exemptions will allow a significant percentage of transactions to be exempted from SCA and will ensure that each customer receives the best available experience. Properly deploying exemptions and exclusions, which apply based on factors such as order value, the origin of the transaction, and a merchant’s fraud history, is a complicated prospect, but an ecosystem of providers has grown up to help with the challenge (a simplified decision sketch follows this list). Adding intelligent exemption tools goes hand-in-hand with relying on robust fraud protection solutions. Establishing a record of sending clean transactions to the banks will encourage them to become less conservative in authorising orders; high authorisation rates beget higher authorisation rates in a virtuous cycle.
  4. Accept digital wallets. Be discerning when selecting a payment service provider. For instance, be sure you’re able to accept Apple Pay, Google Pay and other digital wallets, as they require two-factor authentication and are more likely to pass fraud filters.
  5. Enable card account updater. Many payment processors can automatically update your customer’s card details if they expire or are renewed. Check with your processor to make sure they offer an account updater, and that it’s enabled.
  6. Use payment routing. Payment routing solutions analyse your particular payment ecosystem and use historical data to determine the transaction route most likely to result in a successful authorisation. This can be especially useful if your customers are spread around the world rather than based in a single country.
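
As referenced in step 3, the sketch below shows the kind of exemption pre-check a merchant or its provider might run before deciding whether to request an SCA challenge. The thresholds and rules are simplified assumptions for illustration only; they are not legal or card-scheme guidance, and a real implementation would follow the acquirer’s and regulator’s current rules.

```python
# A simplified, illustrative SCA exemption pre-check (not legal or scheme guidance).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Transaction:
    amount_eur: float
    is_recurring: bool              # merchant-initiated subsequent payment
    acquirer_fraud_rate_bps: float  # acquirer reference fraud rate, basis points


def sca_exemption(tx: Transaction) -> Optional[str]:
    """Return the exemption (or exclusion) to request, or None to apply full SCA."""
    if tx.is_recurring:
        return "merchant_initiated_exclusion"  # out of SCA scope
    if tx.amount_eur <= 30:
        return "low_value_exemption"           # subject to velocity limits
    # Transaction Risk Analysis: permitted up to a value threshold that depends
    # on the acquirer's fraud rate (deliberately simplified here).
    if tx.amount_eur <= 100 and tx.acquirer_fraud_rate_bps <= 13:
        return "transaction_risk_analysis"
    return None  # fall back to full SCA, e.g. a 3-D Secure challenge


print(sca_exemption(Transaction(25.0, False, 9.0)))   # low_value_exemption
print(sca_exemption(Transaction(250.0, False, 9.0)))  # None -> challenge the payer
```

The design point is that the exemption decision is just another risk decision: the cleaner the traffic a merchant sends, the more room it has to use exemptions without breaching fraud-rate thresholds.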

Being deliberate and thoughtful when it comes to building your authorisation optimisation strategy can make a real difference in the conversions you see every day. As importantly, taking the steps to increase authorisation provides your customers with a better shopping experience and a bigger incentive to visit your ecommerce store again and again.


Business

Driving business success in today’s data-driven world through data governance

Source: Finance Derivative

Andrew Abraham, Global Managing Director, Data Quality, Experian

It’s a well-known fact that we are living through a period of digital transformation, where new technology is revolutionising how we live, learn, and work. However, what this has also led to is a significant increase in data. This data holds immense value, yet many businesses across all sectors struggle to manage it effectively. They often face challenges such as fragmented data silos or lack the expertise and resources to leverage their datasets to the fullest.

As a result, data governance has become an essential topic for executives and industry leaders. In a data-driven world, its importance cannot be overstated. Combine that with governments and regulatory bodies rightly stepping up oversight of the digital world to protect citizens’ private and personal data, and businesses now also have to comply with several statutes more accurately and more frequently.

We recently conducted some research to gauge businesses’ attitudes toward data governance in today’s economy. The findings are not surprising: 83% of those surveyed acknowledged that data governance should no longer be an afterthought and could give them a strategic advantage. This is especially true for gaining a competitive edge, improving service delivery, and ensuring robust compliance and security measures.

However, the research also showed that businesses face inherent obstacles, including difficulties in integration and scalability and poor data quality, when it comes to managing data effectively and responsibly throughout its lifecycle.

So, what are the three fundamental steps to ensure effective data governance?

Regularly reviewing Data Governance approaches and policies

Understanding your whole data estate, having clarity about who owns the data, and implementing rules to govern its use means being able to assess whether you can operate efficiently and identify where to drive operational improvements. To do that effectively, you need the right data governance framework. Implementing a robust framework allows businesses to ensure their data is fit for purpose, improve accuracy, and mitigate the detrimental impact of data silos.

The research also found that data governance approaches are typically reviewed annually (46%), with another 47% of businesses reviewing them more frequently. Whilst the specific timeframe differs for each business, policies should be reviewed more often than once a year. Interestingly, 6% of companies surveyed have their approach under continual review.

Assembling the right team

A strong team is crucial for effective cross-departmental data governance.  

The research identified that almost three-quarters of organisations, particularly in the healthcare industry, are managing data governance in-house. Nearly half of the businesses surveyed had already established dedicated data governance teams to oversee daily operations and mitigate potential security risks.

This strategic investment reflects a proactive approach to enhancing data practices in order to achieve a competitive edge and improve financial performance, and it underlines the pivotal role of dedicated teams in upholding data integrity and compliance standards.

Choose data governance investments wisely

With AI changing how businesses are run and being seen as a critical differentiator, nearly three-quarters of our research respondents said data governance is the cornerstone of better AI. Why? Effective data governance is essential for optimising AI capabilities: improving data quality, automating access control, managing metadata, securing data, and supporting integration.

In addition, almost every business surveyed said it will invest in its data governance approaches in the next two years. This includes investing in high-quality technologies and tools and improving data literacy and skills internally.  

Regarding automation, the research showed that under half currently use automated tools or technologies for data governance; 48% are exploring options, and 15% said they have no plans.

This shows us a clear appetite for data governance investment, particularly in automated tools and new technologies. These investments also reflect a proactive stance in adapting to technological changes and ensuring robust data management practices that support innovation and sustainable growth.

Looking ahead

Ultimately, the research showed that 86% of businesses recognised the growing importance of data governance over the next five years. This indicates that effective data governance will only grow in importance as organisations navigate digital transformation and regulatory demands.

This means businesses must address challenges like integrating governance into operations, improving data quality, ensuring scalability, and keeping pace with evolving technology to mitigate risks such as compliance failures, security breaches, and data integrity issues.

Embracing automation will also streamline data governance processes, allowing organisations to enhance compliance, strengthen security measures, and boost operational efficiency. By investing strategically in these areas, businesses can gain a competitive advantage, thrive in a data-driven landscape, and effectively manage emerging risks.


Technology

‘Aligning AI expectations with AI reality’

By Nishant Kumar Behl, Director of Emerging Technologies at OneAdvanced

AI is transforming the way we work now and will continue to make great strides into the future. In many of its forms, it demonstrates exceptional accuracy and a high rate of correct responses. Some people worry that AI is too powerful, with the potential to cause havoc on our socio-political and economic systems. There is a converse narrative, too, that highlights some of the surprising and often comical mistakes that AI can produce, perhaps with the intention of undermining people’s faith in this emerging technology.

This tendency to scrutinise the occasional AI mishap, despite the technology’s frequent correct responses, overshadows its overall reliability and creates an unfairly high expectation of perfection. With such a singular focus on failure, it is no surprise that almost 80% of AI projects fail within a year. Considering all of the hype around AI, and particularly GenAI, over the past few years, it is understandable that users feel short-changed when their extravagant expectations are not met.

We shouldn’t forget that a lot of the most useful software we all rely on in our daily working lives contains bugs. They are an inevitable and completely normal byproduct of developing and writing code. Take a look at the internet, awash with comments, forums, and advice pages to help users deal with bugs in commonly used Apple and Microsoft word processing and spreadsheet apps.

If we can accept blips in our workhorse applications, why are we holding AI to such a high standard? Fear plays a part here. Some may fear AI can do our jobs to a much higher standard than we can, sidelining us. No technology is smarter than humans. As technology gets smarter, it pushes humans to become smarter. When we collaborate with AI, the inputs of humans and artificial intelligence work together, and that’s when magic happens.

AI frees up more human time and lets us be creative, focusing on more fulfilling tasks while the technology does the heavy lifting. But AI is built by humans and will continue to need people asking the right questions and making connections based on our unique human sensibility and perception if it is to become more accurate, useful, and better serve our purpose.

The fear of failing to master AI implementation might be quite overwhelming for organisations, and in some cases people are right to be cautious. There is a tendency now to expect all technology solutions to have integrated AI functionality for its own sake, which is misguided. Before deciding on any technology, users must first identify and understand the problem they are trying to solve and establish whether AI is indeed the best solution. Don’t be blinded by science into adopting bells and whistles that aren’t going to deliver the best results.

Uncertainty and doubt will continue to revolve around the subject of AI, but people should be reassured that there are many reliable, ethical technology providers developing safe, responsible, compliant AI-powered products. These organisations recognise their responsibility to develop products that offer long-term value rather than generating temporary buzz. By directly engaging with customers to understand their needs and problems, a customer-focused approach helps identify whether AI can effectively address the issues at hand before proceeding down the AI route.

In any organisation, the leader’s job is to develop strategy, ask the right questions, provide direction, and often devise action plans. When it comes to AI, we will all need to adopt that leadership mindset in the future, ensuring we are developing the right strategy, asking insightful questions, and devising an effective action plan that enables the engineers to execute appropriate AI solutions for our needs.

Organisations should not be afraid to experiment with AI solutions and tools, remembering that every successful innovation involves some failure and frustration. The light bulb moments rarely happen overnight, and we must all adjust our expectations rather than demanding that AI offer a perfect solution. There will be bugs and problems, but the journey towards improvement will result in long-term, sustainable value from AI, where everyone can benefit.

====

Nishant Kumar Behl is Director of Emerging Technologies at OneAdvanced, a leading provider of sector-focussed SaaS software, headquartered in the UK.


Business

Machine Learning Interpretability for Enhanced Cyber-Threat Attribution

Source: Finance Derivative

By: Dr. Farshad Badie, Dean of the Faculty of Computer Science and Informatics, Berlin School of Business and Innovation

This editorial explores the crucial role of machine learning (ML) in cyber-threat attribution (CTA) and emphasises the importance of interpretable models for effective attribution.

The Challenge of Cyber-Threat Attribution

Identifying the source of cyberattacks is a complex task due to the tactics employed by threat actors, including:

  • Routing attacks through proxies: Attackers hide their identities by using intermediary servers.
  • Planting false flags: Misleading information is used to divert investigators towards the wrong culprit.
  • Adapting tactics: Threat actors constantly modify their methods to evade detection.

These challenges necessitate accurate and actionable attribution for:

  • Enhanced cybersecurity defences: Understanding attacker strategies enables proactive defence mechanisms.
  • Effective incident response: Swift attribution facilitates containment, damage minimisation, and speedy recovery.
  • Establishing accountability: Identifying attackers deters malicious activities and upholds international norms.

Machine Learning to the Rescue

Traditional machine learning models have laid the foundation, but the evolving cyber threat landscape demands more sophisticated approaches. Deep learning and artificial neural networks hold promise for uncovering hidden patterns and anomalies. However, a key consideration is interpretability.

The Power of Interpretability

Effective attribution requires models that not only deliver precise results but also make them understandable to cybersecurity experts. Interpretability ensures:

  • Transparency: Attribution decisions are not shrouded in complexity but are clear and actionable.
  • Actionable intelligence: Experts can not only detect threats but also understand the “why” behind them.
  • Improved defences: Insights gained from interpretable models inform future defence strategies.

Finding the Right Balance

The ideal model balances accuracy and interpretability. A highly accurate but opaque model hinders understanding, while a readily interpretable but less accurate model provides limited value. Selecting the appropriate model depends on the specific needs of each attribution case.

Interpretability Techniques

Several techniques enhance the interpretability of ML models for cyber-threat attribution:

  • Feature Importance Analysis: Identifies the input data aspects most influential in the model’s decisions, allowing experts to prioritise investigations (a brief sketch follows this list).
  • Local Interpretability: Explains the model’s predictions for individual instances, revealing why a specific attribution was made.
  • Rule-based Models: Provide clear guidelines for determining the source of cyber threats, promoting transparency and easy understanding.
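
To show what the first of these techniques looks like in practice, the sketch below trains a classifier on synthetic data and ranks the inputs by permutation importance. The feature names and the labelling rule are invented purely for illustration; in a real attribution pipeline the features would come from telemetry such as infrastructure, tooling and behavioural indicators.

```python
# Illustrative feature importance analysis on synthetic "attribution" data.
# Feature names and labels are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["proxy_hops", "tld_reputation", "ttp_overlap", "activity_hour"]
X = rng.random((500, len(features)))
# Toy labelling rule so that "ttp_overlap" carries most of the signal.
y = (X[:, 2] + 0.1 * rng.random(500) > 0.6).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank the inputs an analyst should examine first when reviewing an attribution.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{features[idx]:<15} {result.importances_mean[idx]:.3f}")
```

In this toy setup the ranking simply confirms which signal drives the model; with real data the same ranking tells analysts where to focus their investigation and makes the model’s reasoning auditable.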

Challenges and the Path Forward

The lack of transparency in complex ML models hinders their practical application. Explainable AI, a field dedicated to making models more transparent, holds the key to fostering trust and collaboration between human and machine learning. Researchers are continuously refining interpretability techniques, with the ultimate goal being a balance between model power and decision-making transparency.
