Stuart Fuller, Domain Services Director at Com Laude
The internet has evolved in remarkable ways since its inception, transforming from a directory of static web pages in the early 90s to the interactive and immersive digital landscape everybody navigates today. Amidst these monumental shifts, the Domain Name System (DNS) – a critically important backbone of the web – has undergone transformative changes of its own.
In its nascent stages, the internet was envisioned as a far more linear place than its current iteration. Until 2000, most of the websites you could visit ended in .com, .edu, .gov, .mil, .org, .net, or .int, with each of these top-level domains inextricably tied to its owner’s function, alongside country code domains such as .uk and .fr. If you visited a .com, you’d see a commercial entity; network infrastructures were tied to .net domains, with .org for those that didn’t quite fit. This is not true of the internet today: with over 1,500 top-level domains in use and .com, .net, and .org now entirely unrestricted in who owns them, knowing the value of your domain has become more challenging for businesses in the online world.
These developments often go unrecognised, but with further change on the horizon announced by the DNS’s administrator, ICANN, it is time to take stock of just how far things have come, and to consider what the service over 5 billion people use will look like in the years ahead.
How did we get here?
Prior to the 1990s, what would become the internet was predominantly restricted to academic researchers. Known as ARPANET and conceived by the U.S. Department of Defense’s Advanced Research Projects Agency (ARPA), it was designed to facilitate research collaboration among universities and government entities. However, as the project yielded substantial developments in standardised protocols for communication across a network of computers, such as TCP/IP, it became the catalyst for a digital revolution that has shaped nearly every aspect of modern society.
During this period, an administrative organisation fulfilling technical functions for this ever-growing network was established by two scientists at the University of California at Los Angeles – Jon Postel and Joyce K. Reynolds. Yet as the internet was predominantly used by academic researchers, it was merely one part of a collaborative effort across universities to maintain the network.
However, as access grew throughout the 90s, demand to commercialise the network and to regulate it from government increased in step. In 1993, the National Science Foundation, a U.S. government agency, privatised the domain name registry, followed by the authorisation of the sale of generic domain names in 1995. This caused widespread dissatisfaction among internet users – it signalled a concentration of power over what was previously envisioned as a decentralised system, whilst individual countries remained free to develop their own rules and regulations determining the sale and usage of their specific country codes.
In response, Postel drafted a paper proposing the creation of new top-level domains, in a bid to institutionalise his organisation. After it was ignored, Postel emailed eight regional root server operators instructing them to take their updates from his organisation’s server rather than the government-sanctioned central server. They complied, dividing control of internet naming between Postel and the government.
Following a furious reaction from government officials, Postel reversed the decision. Changes were subsequently issued regarding authority over these root servers, and Postel died unexpectedly a few months later.
Following this, his organisation was subsumed into the newly created ICANN, designed to perform the functions of Postel’s organisation. As the internet became global, this produced a renewed interest in fostering commercial competition and the number of domain names expanded dramatically.
As new demands came from how the internet was used, domain names were created to match. For example, with the introduction of internet access via mobile devices, .mobi was created, and when the Asia-Pacific region’s internet usage grew substantially, .asia followed. Large companies took notice of the value of these registered strings of characters, and in 2012 ICANN enabled businesses to apply for their own top-level domains. At present, 496 companies possess these, with examples ranging from .bmw for the automobile company all the way through to .sky for the television and broadband provider.
Recently, ICANN announced that there will be a second application round for new top-level domains, including brand names, currently pencilled in for 2026, presenting new opportunities for businesses to register their own piece of internet space. And, in a sense, Postel’s vision for a decentralised internet was realised: in 2016 ICANN ended its contract with the U.S. government and the organisation transitioned to stewardship by the global internet community.
Where is this all going?
Although it may be impossible to predict how the internet will be used in the future, and what structures may change to adapt, there are interesting technological developments that could be transformative.
With the rise of blockchain technologies, driven by the rocketing use of cryptocurrencies, we could see further decentralisation of system ownership. Instead of registering internet space with an authority consisting of a number of global stakeholders, blockchain systems can distribute ownership equally across every user, with potentially interesting, democratic implications for registering parts of that space.
Alternatively, with developments in metaverse technologies, we could see a new meaning applied to domain registration. As digital technologies and reality blur, this could mean staking claims over digital space on top of physical, or registering ownership over a rendered place in a virtual reality world.
An exciting future
Regardless of what the future brings, if history holds true, it will propel us toward a future where the boundaries of digital interaction are continually expanded and redefined. The evolution of the technology from an academic research tool to a fundamental part of people’s lives is nothing short of extraordinary. Yet, as these developments occur, they will undoubtedly bring new benefits in democratising information, entertainment, and connectivity, in a way that will shape the lives of everyone.
Social Engineering Tactics Are Evolving, Enterprises Must Keep Pace to Mitigate
By Jack Garnsey, Subject Matter Expert – Email Security, VIPRE Security Group
Social engineering attacks by cybercriminals are not only relentless, they are rapidly evolving as new tactics are deployed. Phishing, however, remains the preferred social engineering tactic. This is demonstrated by research that processed nearly two billion emails. Of these, 233.9 million were malicious – showing that cybercriminals are increasingly adopting malicious links that require ever deeper investigation to uncover. This is possibly because current signature-based detection tools are now so effective and ubiquitous that threat actors are forced to either engineer a way around them or get caught.
Furthermore, the research found these malicious emails were detected almost evenly due to content (110 million) and due to links (118 million). A further 5.44 million were discovered due to malicious attachments.
Common approaches to social engineering
Criminals are using all manner of approaches to social engineering. They are using spam emails to commit fraud, especially business email compromise. With the use of AI technology such as ChatGPT and others, phishing emails are becoming even harder for people to identify. The tell-tale signs – poor sentence construction, spelling mistakes, lack of subject context and so on – no longer exist.
The PDF attachment is gaining favour with criminals as an attack vector. The majority of devices and operating systems today have an integrated PDF reader, and this universal compatibility across platforms makes it an ideal weapon of choice for attackers looking to cast a wide net. One reason is that malicious hackers can make us think there is payment-related information inside. Once opened, the PDF potentially contains a link to a malicious page or releases malware onto the PC. Criminals are using malicious PDFs as a vehicle for QR codes too.
Stealing passwords is another commonplace phishing technique. Many of us will recognise emails urgently alerting us to update the password for the applications we use on a daily basis in our professional and personal lives. An example is a password update request from Microsoft – “Your Microsoft Office 365 password is set to expire today. Immediate action required – change or keep your current password.” In fact, Microsoft was the most spoofed name in Q3 of 2023.
Heard of callback phishing? Cybercriminals send an email to an unsuspecting employee, posing as a service or product provider. Instilling urgency, these emails prompt the individual to “call back” on a phone number. So, when the user calls them, they are duped out of their information over the phone, or they are given “sign in” links to verify information and end up losing sensitive data in the process. The absence of malicious files within either the email content or its attachments makes these attacks far easier to slip past detection.
A relatively new trend that is gaining momentum is the utilisation of LinkedIn Slink for URL redirection. To allow its platform users to better promote their own ads or websites, LinkedIn introduced LinkedIn Slink (“smart link”). This “clean” LinkedIn URL enables users to redirect traffic directly to external websites while more easily tracking their ad campaigns. Clearly a useful feature, the problem is that these types of links slip through the net of many security protocols and so have become a favourite of social engineers.
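To illustrate why such links slip through, consider a sketch of a gateway check that, rather than trusting a link because its visible domain is reputable, flags any URL routed through a known redirector for deeper inspection. The redirector list and email body below are invented for the example; a real gateway would resolve the redirect chain and consult a curated, regularly updated feed:

```python
import re
from urllib.parse import urlparse

# Hypothetical, hard-coded list of redirector hosts for illustration only.
KNOWN_REDIRECTORS = {"www.linkedin.com", "lnkd.in"}

def flag_redirector_links(email_body: str) -> list[str]:
    """Return URLs in the body that route through a known redirector.

    A link such as https://www.linkedin.com/slink?code=... carries a
    reputable domain, so a naive domain-reputation check passes it;
    the true destination is only revealed after the redirect.
    """
    urls = re.findall(r"https?://\S+", email_body)
    return [u for u in urls if urlparse(u).hostname in KNOWN_REDIRECTORS]

body = (
    "Your invoice is ready. View it here: "
    "https://www.linkedin.com/slink?code=abc123"
)
print(flag_redirector_links(body))  # ['https://www.linkedin.com/slink?code=abc123']
```

The design point is that reputation alone is not a sufficient signal once attackers piggyback on trusted platforms: the filter must treat “reputable redirector” as its own category and inspect the final destination.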
Education, education, education
All hands on deck, the saying goes! In that vein, a comprehensive strategy is needed to ensure protection – from timely patching, archiving or backing up data, monitoring and auditing access controls and penetration testing through to properly configuring and monitoring email gateways and firewalls and phishing simulations.
However, underpinning all this must be regular security education and awareness training to ensure that employees are always up to date and vigilant against the newest social engineering techniques that criminals are deploying to defraud them. It helps to embed a cybersecurity-conscious culture and security-first attitude in the workplace.
A key focus of the education and training programme must be on motivating employees to take an active role in threat detection and protection. Good cyber hygiene knowledge is about giving employees peace of mind that their organisation and job are secure, but also that they have the knowledge to protect their friends and loved ones.
Employees need regular training reinforcement throughout the year if they are to remember and apply best practices. Single, annual courses or classroom sessions are not sufficient, given that people forget training shortly after these sessions. If adult learning best practices and techniques, such as spaced learning, are not implemented as part of a security awareness training programme, then it will not succeed.
Additionally, targeted training must be designed for role types – far too often, a broad-brush approach to cyber training and education is undertaken, making it a tick-box exercise. For example, a company’s risk and compliance team needs cyber training that takes into account the demands of regulatory bodies, business development teams need to know all about incident reporting, the product development department must be trained on how best to secure the software supply chain, security teams must be trained on advances in threat detection, end users must understand how to spot a phishing email or deepfake, and so forth. Training that is tailored specially for business leaders is equally important.
There is no end in sight when it comes to social engineering attacks. End users of technology are constantly under attack; vigilance, supported by security education and the knowledge to intuitively spot social engineering, is a critical defence – be that against deceitful emails, malicious QR codes and links, or any other such techniques.
Navigating the Ethical Landscape: A Guide for Small Businesses Embracing AI Technologies
By Stefano Maifreni, COO and founder of Eggcelerate
Artificial intelligence (AI) technologies have revolutionised how businesses operate, offering countless benefits and opportunities for growth. From streamlining processes to improving customer experiences, AI has become an essential tool for small businesses. However, with these advancements come ethical challenges that must be addressed. This article will explore the ethical maze small companies face when leveraging AI technologies. We will delve into the key considerations they need to make to ensure they make ethical choices and maintain transparency throughout the process.
Ethical Considerations in AI Implementation
Embracing AI technologies responsibly can bring numerous benefits to small businesses. AI can automate repetitive tasks, freeing time for employees to focus on more complex and creative work. It can also enhance decision-making processes by analysing large amounts of data and generating valuable insights. Additionally, AI-powered chatbots and virtual assistants can improve customer service, providing prompt and personalised support. By adopting AI technologies responsibly, small businesses can increase efficiency, productivity, and customer satisfaction.
Ethical AI refers to the responsible and unbiased use of artificial intelligence technologies. It involves ensuring that AI systems are designed and deployed in a manner that respects human values, privacy, and fairness. Ethical AI also emphasises transparency and accountability, making it crucial for small businesses to align their AI practices with ethical principles.
When implementing AI technologies, small business owners must consider various ethical factors. One crucial consideration is the potential bias in AI algorithms. Machine learning models are trained on historical data, which can unintentionally reflect societal biases. Small businesses should ensure that their AI systems are audited and monitored for fairness. They should also strive to diversify their data sources to minimise biased outcomes and be transparent about their algorithmic decision-making processes.
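One concrete form such an audit can take is comparing outcome rates across demographic groups. The sketch below uses made-up decision data and the demographic parity gap, one of several possible fairness metrics; a real audit would draw on production decision logs, legally relevant group definitions, and metrics chosen for the domain:

```python
# Illustrative bias audit: demographic parity gap between two groups.
# Data and the "warrants review" threshold are invented for the example.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive (e.g. 'approve') outcomes for a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# 1 = positive outcome, 0 = negative outcome, one entry per applicant.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% positive
group_b = [1, 0, 0, 1, 0, 0, 0, 0]   # 25% positive

gap = demographic_parity_gap(group_a, group_b)
print(f"parity gap: {gap:.2f}")  # 0.50 - large enough to warrant review
```

A gap this size does not prove the model is unfair (the groups may differ in legitimate ways), but it is exactly the kind of signal that should trigger the human review and data-source diversification described above.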
Data privacy is another significant ethical concern. Small businesses must handle customer data with utmost care, ensuring it is collected and used in compliance with relevant regulations, such as the General Data Protection Regulation (GDPR). Businesses should obtain informed consent from users, clearly communicate how their data will be used, and implement robust security measures to safeguard sensitive information. Prioritising data privacy builds customer trust and demonstrates a commitment to ethical practices.
Transparency in AI decision-making is also crucial. Small businesses should communicate when AI is utilised and explain automated decisions whenever possible. This transparency helps build customer trust and ensures that AI is not perceived as a black box but as a tool that operates ethically and aligns with the business’s values.
Balancing Automation with Human Touch: Ensuring Ethical AI Practices
Small business owners should follow certain best practices to embrace AI technologies responsibly. Firstly, they should conduct a thorough ethical analysis before implementing any AI system. This analysis involves identifying potential risks and biases and considering the impact on stakeholders.
While AI provides automation and efficiency, balancing technology and the human touch is essential. Small businesses should consider the impact of AI on their workforce and ensure that AI systems do not replace employees but rather support them. It can involve upskilling employees to work collaboratively with AI technologies or reallocating resources to more meaningful tasks that require human ingenuity and empathy.
Small businesses should also establish clear guidelines and policies for AI usage, ensuring that employees understand the expected ethical standards.
Regular and ongoing training is crucial for employees involved in AI implementation. It ensures they are equipped with the knowledge and skills to navigate the ethical complexities of AI. Small businesses should also encourage a culture of transparency and accountability, where employees feel comfortable raising ethical concerns and discussing potential biases or risks associated with AI technologies.
Building Trust and Transparency with Customers through Ethical AI
Ethical AI practices play a significant role in building trust and transparency with customers. Small businesses should communicate their AI usage and data practices to customers in a clear and accessible manner through privacy policies, consent forms, and public statements that outline the steps taken to protect customer data and ensure ethical AI practices.
Additionally, small businesses should be responsive to customer concerns and feedback related to AI technologies. Addressing customer questions and providing mechanisms to contest and redress automated decisions can foster trust and demonstrate a commitment to ethical practices. By prioritising transparency, small businesses can differentiate themselves in the market and build long-lasting customer relationships.
Resources for Small Business Owners to Learn About Ethical AI
To navigate the ethical complexities of AI, small business owners can access various resources. Online courses and tutorials, such as those offered by universities and technology companies, provide valuable insights into ethical AI practices. Industry-specific conferences and webinars offer opportunities to learn from experts and share experiences with other small business owners.
Furthermore, engaging with professional organisations and communities focused on ethical AI can provide a supportive network for small business owners. These communities often offer forums for discussion, access to research papers, and guidance on best practices. By actively seeking out resources and staying updated on the latest developments in ethical AI, small business owners can make informed decisions and drive positive change within their organisations.
Conclusion: Embracing Ethical AI for a Sustainable Future
As AI technologies continue to evolve, small businesses must navigate the ethical maze to embrace these technologies responsibly. By understanding the ethical implications of AI algorithms, prioritising data privacy, and following best practices, small businesses can leverage AI to drive growth and innovation while maintaining transparency and accountability.
Building trust and transparency with customers through ethical AI practices is essential for small businesses to succeed in the long run. Small business owners can ensure they are making ethical choices and fostering a sustainable future by considering the impact on stakeholders, balancing automation with the human touch, and integrating ethics into AI-driven decision-making.
With the right mindset, resources, and commitment to ethical AI, small businesses can embrace AI technologies’ potential and become ethical leaders in their respective industries. Let us navigate the ethical maze together and unlock the true potential of AI for a better future.
Top four compliance trends to watch in 2024
Robert Houghton, founder and CTO at Insightful Technology, discusses the top trends financial institutions should look out for in 2024
As financial institutions gear up for the next 12 months, it’s time to reflect on the key trends and developments, considering how they’ll shape the year ahead. While the financial landscape is constantly evolving, we believe there are four important issues that will shape 2024:
- AI-powered compliance
AI and automation are the buzzwords of the year, significantly changing our work landscape and set to transform our economy and social norms.
Take generative AI. It’s a game-changer for financial institutions in streamlining and securing compliance processes. Imagine a trading floor where every call and message is monitored. Certain phrases or words will trigger an automated alert to the compliance team.
These alerts are typically sorted into three tiers of concern. A low-level alert might be triggered by a trader swearing in a conversation. These are common occurrences that result in hundreds of daily alerts, usually reviewed manually by offshore companies. This traditional review process is time-consuming, error-prone and inconsistent.
Enter generative AI, with its dual capabilities. Firstly, it can spot the misdemeanour in real-time. Secondly, it can understand the context to see what risk it poses. If someone swore, perhaps because they were quoting a strongly worded news story, it’s not a risk. Generative AI can tell the difference between that and someone using bad language in anger, thereby reducing false positives.
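A minimal sketch of this tiered triage is below, with the context step reduced to a crude quotation heuristic. The phrase lists and tiers are invented for illustration; a production system would use a trained language model for the context judgement, as described above:

```python
# Hypothetical phrase lists mapped to alert tiers; illustrative only.
HIGH_RISK_PHRASES = {"guaranteed return", "keep this off the record"}
LOW_RISK_PHRASES = {"damn", "hell"}

def triage_message(text: str) -> str:
    """Return an alert tier ('high', 'low', or 'none') for a message."""
    lowered = text.lower()
    if any(p in lowered for p in HIGH_RISK_PHRASES):
        return "high"
    if any(p in lowered for p in LOW_RISK_PHRASES):
        # Crude context check: flagged language inside quotation marks
        # (e.g. quoting a news story) is treated as benign noise.
        if '"' in text:
            return "none"
        return "low"
    return "none"

# Quoting a headline is suppressed; genuine hits still raise alerts.
assert triage_message('The headline read "To hell with rate caps"') == "none"
assert triage_message("That trade was a damn mess") == "low"
assert triage_message("I can promise a guaranteed return") == "high"
```

The point of the sketch is the structure, not the heuristic: the detection and context steps are separate, so the context model can be swapped for something far more capable without changing how alerts are tiered and routed.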
In the coming year, financial institutions will look at how these AI and automated decision-making processes can be explained, recorded and saved. Crucially, the system can’t be a “black box” that holds the fate of a trader within it. By creating an audit-friendly trail, businesses will improve their chances of avoiding regulator penalties.
- Smarter monitoring for the hybrid era
In today’s work-anywhere culture, monitoring employees in regulated sectors like finance is key to managing risk.
But it shouldn’t turn into a nine-to-five spying game. It’s a mistake to treat remote work as if it mirrors an office setting; this can damage trust. Instead, monitoring should aim to understand work patterns, just like a heart monitor detects irregularities, signalling when the compliance team should take notice.
In the shadow of US banks facing over $2 billion in fines for unchecked use of private messaging and personal devices[i] – with the SEC imposing a $125 million penalty earlier this year[ii] – the UK’s financial sector is on alert. The Financial Conduct Authority (FCA) is already looking into the matter and questioning banks about their use of private messaging, as the watchdog decides whether to launch a full probe.
Financial firms must be proactive and ensure their risk review includes provision to document all comms channels used by affected personnel. From there, clearly documented policies can be presented to the regulator; ensuring those policies are adhered to is an important part of the process. Firms need to maintain a delicate balance between productivity, employee wellbeing and strict adherence to regulations in the hybrid work era.
- A demand for transparency
In 2023, the banking sector was shaken by a wave of raids on giants like Société Générale, BNP Paribas and HSBC, as part of a large tax fraud probe in Europe[iii]. Over $1 billion in fines looms as the industry, still shaky from significant bank failures[iv], faces a heightened demand for transparency.
Investigations into banks are essential when there’s evidence of criminal conduct, but they must be conducted with care. Raids can expose vast amounts of sensitive data, affecting not just the banks but numerous customers. Authorities should focus only on specific accounts with clear signs of foul play rather than casting a wide net.
The Paris raids, in particular, were an alarming example of the vulnerability of personal information, with hundreds of thousands of accounts indiscriminately scrutinised. These raids have set a dangerous standard where privacy is secondary.
As we approach next year, the financial sector must grapple with this double-edged sword: pursuing fraudsters while safeguarding individual privacy. The presumption of guilt will remain rife in the financial sector, however this will be partnered with a call for institutions and regulators to be more transparent with their activity.
- A one source advantage
As the stakes for non-compliance get higher and regulators scrutinise financial firms more closely, institutions will need a more comprehensive and centralised approach to data integrity, management, access and risk. This includes creating ‘one source’ of data – a single, authoritative source of truth that can be used for risk analysis, proof of adherence to policy, reporting, analysis, and compliance.
Currently, many banks have a patchwork of data silos, which are collections of information that are not easily accessible to the bank because they are recorded or stored differently. This makes it difficult for banks to get a complete picture of their data and to comply with regulations.
Historically, this would have required a significant investment in time and resources. Now, modern solutions simplify this process, offering a more efficient path. In the long run, having a single source of data will help banks reduce costs, improve compliance and make better decisions. Those who continue to turn a blind eye to this issue will face financial penalties, operational risks, and irreparable reputational damage.
Overall, it’s safe to say that 2024 will continue to prove the need to strike a balance between technology and people in a bid for demonstrable compliance. As financial institutions prepare, they must consider their position on the work-anywhere spectrum, and how the combination of AI, automation, transparency and leadership can ensure they’re living by the letter of the law next year.
Roll on 2024!