Building a Greener Web: Six Ways to Put Your Website on an Emissions Diet

By Roberta Haseleu, Practice Lead Green Technology at Reply, Fiorenza Oppici, Live Reply, and Lars Trebing, Vanilla Reply

Most people are unaware of, or underestimate, the impact of the IT sector on the environment. According to the BBC: “If we were to rather crudely divide the 1.7 billion tonnes of greenhouse gas emissions estimated to be produced in the manufacture and running of digital technologies between all internet users around the world, it would mean each of us is responsible for 414kg of carbon dioxide a year.” That’s equivalent to 4.7bn people each charging their smartphone 50,000 times.

Every web page has a carbon footprint that varies with its design and development. This deserves closer consideration, not least because building an energy-efficient website also increases loading speed, which leads to better performance and a better user experience.

Following are six practical steps developers can take to reduce the environmental impact of their websites.

  • Implement modularisation

With traditional websites that don’t rely on single-page apps, each page and view of the site is saved in an individual HTML file. Code only runs, and data is only downloaded, for the page the user is visiting, avoiding unnecessary requests. This reduces the volume of transmitted data and saves energy.

However, this principle is no longer the standard in modern web design, which is dominated by single-page apps that assemble all content dynamically at runtime. This approach is easier and faster to code and often more user-friendly but, without precautions, it creates unnecessary overhead. In the worst case, accessing the homepage of a website may trigger the transmission of the application’s entire code, including parts that are never needed.

Modularisation can help. By dividing the code of a website into different modules, i.e. coherent code sections, only the relevant code is referenced. Using modules offers distinct benefits: they keep the scope of the app clean and prevent ‘scope creep’; they are loaded automatically after the page has been parsed but before the Document Object Model (DOM) is rendered; and, most importantly for green design, they facilitate ‘lazy loading’.
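The principle can be sketched in TypeScript. The registry below is an illustrative pattern, not a specific framework API: each loader stands in for a dynamic `import()` of a separate file, which is what allows a bundler to split the application into independently downloadable chunks.

```typescript
// A minimal sketch of on-demand module loading. Module names and loader
// functions here are hypothetical; in a real app each loader would be a
// dynamic `import()` of a separate source file.

type ModuleLoader<T> = () => Promise<T>;

class ModuleRegistry {
  private loaders = new Map<string, ModuleLoader<unknown>>();
  private cache = new Map<string, unknown>();

  register<T>(name: string, loader: ModuleLoader<T>): void {
    this.loaders.set(name, loader);
  }

  // Load a module only when it is first requested, then cache it.
  async load<T>(name: string): Promise<T> {
    if (this.cache.has(name)) return this.cache.get(name) as T;
    const loader = this.loaders.get(name);
    if (!loader) throw new Error(`Unknown module: ${name}`);
    const mod = await loader();
    this.cache.set(name, mod);
    return mod as T;
  }
}

// Usage: the checkout code is only fetched (here: constructed) when the
// user actually navigates to checkout – never on the homepage.
const registry = new ModuleRegistry();
registry.register("checkout", async () => ({
  total: (items: number[]) => items.reduce((a, b) => a + b, 0),
}));
```

The key design point is that registering a module costs nothing; the network transfer (or, in this sketch, the construction) happens only on first use.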

  • Adopt lazy loading

The term lazy loading describes a strategy of only loading resources at the moment they are needed. This way, a large image at the bottom of the page will not be loaded unless the user scrolls down to that section.

If a website consists only of a routing module and a single app module containing all views, the site will be very heavy and slow on first load. Smart modularisation – breaking the site down into smaller parts – combined with lazy loading ensures that only the relevant content is loaded when the user is viewing that part of the page.

However, this should not be taken to extremes either: in some instances, loading every resource only at the last moment while scrolling can negate the performance gains and result in higher server and network loads. It’s important to find the right balance based on a good understanding of how the app will be used in real life (e.g. whether users will generally continue to the next page after a quick first glance, or scroll all the way down before moving on).
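That trade-off can be captured in a small decision function. This is a hand-rolled sketch – the preload margin and the function itself are illustrative, not a standard API:

```typescript
// Load a resource once it comes within a configurable margin of the
// viewport, rather than only at the very last moment. A margin of one
// viewport height means "start loading one screen early".

interface Rect { top: number; height: number }

function shouldLoad(
  element: Rect,
  viewport: Rect,
  preloadMargin = viewport.height
): boolean {
  const loadZoneBottom = viewport.top + viewport.height + preloadMargin;
  const elementBottom = element.top + element.height;
  return element.top < loadZoneBottom && elementBottom > viewport.top - preloadMargin;
}

// An image 3000px down the page is not loaded on first paint...
shouldLoad({ top: 3000, height: 400 }, { top: 0, height: 800 }); // false
// ...but is fetched once the user has scrolled within a screen of it.
shouldLoad({ top: 3000, height: 400 }, { top: 1500, height: 800 }); // true
```

In a browser, this decision is usually delegated to an `IntersectionObserver` with a `rootMargin`, or to the native `loading="lazy"` attribute on images and iframes.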

  • Monitor build size

Slimming down a website is possible not only at runtime but also at build time. Typically, a web app consists of a collection of TypeScript files. To build the site and compile the code from TypeScript to JavaScript, a build tool – a bundler – is used.

Bundlers can be configured to prevent a build from completing if its output files exceed a configurable threshold. Limits can be set both for the main boot script and for individual JavaScript and CSS chunks, capping them at a specific byte size after compilation. Any build surpassing those thresholds fails with a warning.
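As one concrete example, webpack supports such thresholds through its `performance` options (Angular’s `angular.json` offers similar ‘budgets’). The threshold values below are illustrative, not recommendations:

```typescript
// webpack.config.ts – illustrative size budgets. With hints set to
// "error", any build whose output exceeds the limits fails outright
// instead of merely printing a warning.
export default {
  performance: {
    hints: "error",
    maxEntrypointSize: 250_000, // main boot script: max 250 kB after compilation
    maxAssetSize: 100_000,      // any single emitted chunk: max 100 kB
  },
};
```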

If a build is suspiciously large, a developer can inspect it to identify which module contributes the most, along with all its interdependencies. This information allows the programmer to optimise the parts of the website in question.

  • Eliminate unused code

One common cause of excessive build size is dozens of configuration files and code written for scenarios that never occur. Despite never being executed, this code still consumes bandwidth, and thereby extra energy.

Unused code can be found in one’s own source code, but also (and often to a greater extent) in external libraries used as dependencies. Luckily, a technique called ‘tree shaking’ can analyse the code and mark the parts that are not referenced by any other portion of the code.

Modern bundlers perform tree shaking not only to identify unused code but also to exclude it automatically from the build. This allows them to package only the parts of the code that are needed at runtime – but only if the code is modularised.
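The prerequisite is code written as side-effect-free named exports. In the hypothetical utility module below, a tree-shaking bundler can drop `slugify` from the final build entirely if the application only ever imports `truncate`:

```typescript
// Two independent, side-effect-free named exports in one (hypothetical)
// utils module. Because nothing ties them together, a bundler can keep
// one and discard the other at build time.

export function truncate(text: string, max: number): string {
  return text.length <= max ? text : text.slice(0, max - 1) + "…";
}

export function slugify(text: string): string {
  return text.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
}

// Elsewhere in the app only `truncate` is referenced, so only it ships:
// import { truncate } from "./utils";
truncate("A long headline about green web design", 20);
```

A namespace import (`import * as utils from "./utils"`) used dynamically can defeat this analysis, which is why named imports are the greener habit.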

  • Choose external libraries wisely

One common way to speed up development is to use external libraries, which provide ready-to-use utilities written and tested by other people. However, some of these libraries can be unexpectedly heavy and weigh your code down.

One popular example is Moment.js, a very versatile legacy library for handling international date formats and time zones. Unfortunately, it is also quite large. Above all, it fits poorly into the typical TypeScript toolchain and is not modular, so even the best bundlers cannot reduce the weight it adds by means of tree shaking.
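For many common cases, the built-in `Intl` API – which ships with every modern browser and therefore adds zero bytes to the bundle – is a sufficient replacement; modular libraries such as date-fns or Day.js are alternatives when more is needed. A minimal sketch (locale and formatting options are examples):

```typescript
// Formatting a date with the built-in Intl API instead of a heavyweight
// date library. No dependency, nothing added to the bundle.

function formatDate(date: Date, locale: string): string {
  return new Intl.DateTimeFormat(locale, {
    year: "numeric",
    month: "long",
    day: "numeric",
  }).format(date);
}

formatDate(new Date(2024, 0, 15), "en-GB"); // e.g. "15 January 2024"
```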

  • Optimise content

Designs can also be optimised by avoiding excessive use of images and video material. Heavy use of animation gimmicks such as parallax scrolling also has a negative effect. Depending on the implementation, such animations can massively increase the CPU and GPU load on the client. To test this, consider running the website on a five- to ten-year-old computer. If scrolling is not smooth and/or the fans jump to maximum speed, that is a very good indication of optimisation potential.
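Serving appropriately sized images is one of the easiest wins here. The helper below sketches the selection logic; in plain HTML the browser performs this choice itself via the `srcset` and `sizes` attributes, and the widths and function name are illustrative:

```typescript
// Given the image widths a server offers, pick the smallest variant
// that still covers the display width – avoiding the transfer of
// pixels the user will never see.

function pickImageWidth(available: number[], displayWidth: number): number {
  const fitting = available.filter((w) => w >= displayWidth);
  return fitting.length > 0
    ? Math.min(...fitting)    // smallest image that is still sharp
    : Math.max(...available); // otherwise the largest we have
}

pickImageWidth([320, 640, 1280, 2560], 600); // 640
```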

The amount of energy that a website consumes — and thus its carbon footprint — depends, among other factors, on the amount of data that needs to be transmitted to display the requested content to users. By applying the six techniques outlined above, web designers can ‘slim’ their websites and contribute to a more sustainable web, while boosting performance and user experience in the process.


Securing The Future of Cybersecurity

Source: Finance Derivative

Dominik Samociuk, PhD, Head of Security at Future Processing

When more than 6 million users’ ancestry and genetic data records were breached from 23andMe’s secure database, companies were forced to confront and evaluate their own cybersecurity practices and data management. With approximately 2.39 million instances of cybercrime experienced across UK businesses last year, the time to act is now.

If even the most secure and unsuspecting businesses aren’t protected, then every business should consider itself a target, and operate as one. As we roll into 2024, it is unlikely there will be a reduction in cases like these. Instead, an uptick is expected in the methods and levels of sophistication employed by hackers to obtain sensitive data – a commodity whose value continues to rise.

In the next two years, the cost of cyber damage is predicted to grow by 15% yearly, reaching $10.5 trillion in 2025. We won’t be saying goodbye to ransomware in 2024, but rather saying hello to an evolved, automated, adaptable, and more intelligent form of it. But what else is expected to take the security industry by storm in the near future?

Offensive vs. Defensive Use of AI in Cybersecurity

Cybersecurity is a symbiotic cycle for companies: from attack to defence, an organisation’s security experts must constantly defend against malicious attacks. In 2024, the use of Generative AI will rise – with an alarming 70% of workers using ChatGPT without making their employers aware – opening the door to significant security issues, especially for outsourced tasks like coding. And while its uses are groundbreaking, Gen AI’s misuses, especially in cybersecurity, are cause for concern.

Cybersecurity breaches will come from more sophisticated sources this year. As artificial intelligence (AI) continues to surpass development expectations, systems that can analyse and replicate humans are now being employed. With platforms like LOVO AI, and Deepgram making their way into mainstream use – often for hoax or ruse purposes – sinister uses of these platforms are being employed by cybercriminals to trick unsuspecting victims into disclosing sensitive network information from their business or place of work.

Cybercriminals target the weakest part of any security operation – the people – by encouraging them to divulge personal and sensitive information that might be used to breach internal cybersecurity. Further, Generative AI platforms like ChatGPT can be used to automate the production of malicious code introduced internally or externally to the network. On the other hand, AI is being used to strengthen cybersecurity in unlikely ways. Emulating a cinematic cyber-future, AI can detect malware and abnormal system or user activity, alert human operators, and then equip staff with the tools and resources needed to respond.

Inevitably, like any revolutionary platform, AI produces hazards and opportunities for misuse and exploitation. With alarming cases of abuse on the rise, cybersecurity experts must consider their effects before moving forward with an adaptable strategy for the year.

Data Privacy, Passkeys, and Targeting Small Businesses

Cybercriminals using their expertise to target small businesses is expected to increase in 2024. By nature, small businesses are unlikely to operate at a level able to employ the resources needed to combat consistent cybersecurity threats that larger organisations face on a daily basis. Therefore, with areas of cybersecurity unaccounted for, cybercriminals are likely to increasingly exploit vulnerabilities within small business networks.

They may also exploit the embarrassment felt by small business owners in situations like these. If their data is being held for ransom, a small business owner without the legal resources needed to fight (or tidy up) a data breach is more likely to give in to an attacker’s demands to save face, often at a cost of thousands of pounds. Regular custom, loyalty, trust, and reputation make or break a small business. Even the smallest data breach can, in one fell swoop, lay waste to all of these.

Unlikely to have dedicated cybersecurity teams in place, small businesses will often employ less secure, inexpensive data management solutions – making them prime targets. Contrary to expectations, 2024 will not see the end of ransomware. In fact, these tools are likely to become more common against larger, well-insured companies due to the gold rush on data harvesting.

Additionally, changing passwords will become a thing of the past. With companies like Apple beta-testing passkeys in consumer devices and even Google describing them as ‘the beginning of the end of the password’, businesses will no doubt begin to adopt this more secure technology, stored on local devices, for any systems that hold sensitive data. Using passwordless forms of identification mitigates issues associated with cyber criminals’ common method of exploiting personal information for unauthorised access.

Generative AI’s Impact on Information Warfare and Elections

In 2024, more than sixty countries will hold elections, and as politics barrels towards all-out war in many of them, it is more important than ever to safeguard cybersecurity, ensuring a tighter grip on fact-checked information and official government communications. We are likely to see a steep rise in Generative AI-supported propaganda on social media.

In 2016, amidst the heat of a combative and unfriendly US Presidential election, Republican candidate Donald Trump popularised the term ‘Fake News’, which eight years later continues to plague realms of the internet in relation to ongoing global events. An estimated 25% of election-related tweets sampled during this time contained links to intentionally misleading or false news stories in an attempt to boost a viewpoint’s popularity. Online trust comes hand-in-hand with security; without one, the other cannot exist.

While in 2016, the contemporary use of AI was extremely limited in today’s terms, what becomes of striking concern is the access members of the public have to platforms where, at will, they can legitimise a controversial viewpoint, or ‘fake news’ by generating video or audio clips of political figures, or quotes and news articles with a simple request. The ability to generate convincing text and media can significantly influence public opinion and sway electoral processes, destabilising a country’s internal and external cybersecurity.

Of greatest concern is the unsuspecting public’s inability to identify news generated by AI. Cornell University found that people judged false news articles generated by AI to be credible over two-thirds of the time. Further studies found that humans were unable to identify articles written by ChatGPT beyond a level of random chance. As Generative AI grows more sophisticated, it will become ever harder to identify which information is genuine and to safeguard online security. This is critical, as Generative AI can now be used as ammunition in information warfare through the spread of hateful, controversial, and false propaganda during election periods.

In conclusion, the near future, like 2023, will see a great shift in focus toward internal security. A network is at its most vulnerable when the people who run it aren’t aligned in their strategies and values. Advanced technologies like AI and ransomware will continue to be a rising issue for the industry, destabilising networks not only externally but internally too, as employees remain unaware of the effects that using such platforms might have.

Developing a personalised roadmap for implementing best practices in AI governance

Colin Redbond, SVP Product Strategy, SS&C Blue Prism

Whether it’s customer chatbots or digital workers improving workflows, the daily use of Artificial Intelligence (AI) and Intelligent Automation (IA) is under scrutiny.

With automation rising, nations are grappling with the ethical, legal, and societal implications and crafting AI governance laws, requiring business leaders to also prepare for these far-reaching changes.

The EU’s proposed AI Act – considered the world’s first comprehensive law safeguarding the rights of users – is expected to regulate the ever-evolving needs of AI application developers in the EU and beyond.

Transparency and authenticity significantly influence brand perception, particularly among Gen Z, who make up 32% of the global population. With high expectations, they only support brands aligned with their values.

Banks, auditors, insurers, and their supply chains – already adept at meeting legislation like Europe’s GDPR and Sarbanes-Oxley in the U.S. – will need to take a similar approach with AI. Governance will influence everything from governments, robot manufacturing and back-office apps to healthcare teams using AI for data extraction.

The cost of non-compliance could be substantial, with the EU proposing fines of €30 million or six percent of global annual turnover, whichever is higher, so identifying AI integration points, workflows and risks is vital.

SS&C Blue Prism, through industry collaboration, offers guidance on governance roadmaps, ensuring businesses are well-prepared to meet evolving requirements while leveraging AI effectively.

Need for immediate action
The legislation also scrutinises automations, so organisations must ensure compliance as they roll out automated tasks, BPM data analysis, and business-driven automations. IA, with its auditable digital trail, is an ideal vehicle here: it provides transparent insight into actions and decisions, safeguarding record-keeping and documentation – crucial across the AI lifecycle.

Establishing and maintaining AI governance also fosters ethical and transparent practices from executives to employees, ensuring compliance, security, and alignment with organisational values, including:

  • Top-down: Executive sponsorship ensures governance, data quality, security, and management, with accountability. An audit committee oversees data control, supported by a chief data officer.
  • Bottom-up: Individual teams take responsibility for the data security, modelling, and tasks they manage, ensuring standardisation and scalability.
  • Modelling: Effective governance continuously monitors and updates performance to align with organisational goals, prioritising security in granting access.
  • Transparency: Tracking AI performance ensures transparency and aids in risk management, involving stakeholders from across the business.

Frameworks for AI governance
Though standards are evolving, disregarding governance risks data leakage, fraud, and privacy law breaches, so compliance and standardisation must be prioritised.

Governments, companies, and academia are collaborating to establish responsible guidelines and frameworks. There are several real-world examples of AI governance that – while they differ in approach and scope – address the implications of artificial intelligence. Extracts from a few notable examples follow:

The EU’s GDPR – not exclusively focused on AI – includes data protection and privacy provisions related to AI systems. Additionally, the Partnership on AI and Montreal Declaration for Responsible AI – developed at the International Joint Conference on Artificial Intelligence – focus on research, best practices, and open dialogue in AI development.

Tech firms like Google, Microsoft, IBM, and Amazon have created AI ethics guidelines, emphasising social good, harm avoidance, and fairness, while some countries have developed national AI strategies including governance.

Canada’s “Pan-Canadian AI Strategy” prioritises responsible AI development for societal benefit, focusing on ethics, transparency, and accountability.

Establishing governance in your organisation involves processes, policies, and practices for AI’s responsible development, deployment, and use.

Reach governance greatness in 14 Steps
Governments and companies using AI must incorporate risk and bias checks into mandatory system audits. Alongside data security and forecasting, organisations can adopt the following strategic approaches to establish AI governance.

  • Development guidelines: Establish a regulatory framework and best practices for AI model development, including data sources, training, and evaluation techniques. Craft guidelines based on predictions, risks, and use cases.
  • Data management: Ensure that the data used to train and fine-tune AI models is accurate and compliant with privacy and regulatory requirements.
  • Bias mitigation: Incorporate ways to identify and address bias in AI models to ensure fair and equitable outcomes across different demographic groups.
  • Transparency: Require AI models to provide explanations for their decisions, especially in highly regulated sectors such as healthcare, finance and legal systems.
  • Model validation and testing: Conduct thorough validation and testing of AI models to ensure they perform as intended and meet quality benchmarks.
  • Monitoring: Continuously monitor AI model performance metrics, updating to meet changing needs and safety regulations. Due to generative AI’s novelty, maintain human oversight to validate quality and performance.
  • Version control: Keep track of the different versions of your AI models, along with their associated training data, configurations, and performance metrics so you can reproduce or scale them as needed.
  • Risk management: Implement security practices to protect AI models from cybersecurity attacks, data breaches and other security risks.
  • Documentation: Maintain documentation of the entire AI lifecycle, including data sources, testing, and training, hyperparameters and evaluation metrics.
  • Training and Awareness: Provide training to employees about AI ethics, responsible AI practices, and the potential societal impacts of AI technologies. Raise awareness about the importance of AI governance across the organisation.
  • Governance board: Establish a governance board or team overseeing AI model development, deployment and compliance with established guidelines that fit your goals. Crucially, involve all levels of the workforce — from leadership to employees working with AI — to ensure comprehensive and inclusive input.
  • Regular auditing: Conduct audits to assess AI model performance, algorithm regulation compliance and ethical adherence.
  • User feedback: Provide mechanisms for users and stakeholders to provide feedback on AI model behaviour and establish accountability measures in case of model errors or negative impacts.
  • Continuous improvement: Incorporate lessons learned from deploying AI models into the governance process to continuously improve the development and deployment practices.

AI governance demands continuous commitment from leadership, alignment with organisational values, and adaptability to technological and societal changes. A well-planned governance strategy is essential for organisations using automation, ensuring compliance.

Establishing safety regulations and governance policies is vital to maintaining the security, accuracy, and compliance of your data. These steps can help ensure your organisation develops and deploys AI responsibly and ethically.


How Alternative Assets and Data Boost Security in the Time of Market Uncertainty

Source: Finance Derivative

Author: Gediminas Rickevičius, VP Global Partnerships at Oxylabs

We live in interesting times, in both good and bad senses. While innovation drives our enthusiasm for the future, the restless geopolitical climate and the Earth’s changing climate leave room for anxiety.

Interesting times are not always what investors would want. Market uncertainty means insecure positions for portfolios composed of traditional assets like stocks and bonds. This is the time to look for alternative solutions. In 2024, alternative assets and alternative data are both set for another big year. It will be even bigger for those who use these alternatives effectively.

The growth spur of alternative assets

If we follow the conventional understanding of investment centered around stocks, bonds, and cash, alternative assets are inevitably placed in the periphery. As usual, however, the periphery is much broader than the center. Various asset classes, from the oldest ones, like commodities and real estate, to the newest ones, like cryptocurrency, are often deemed alternatives.

In institutional investment, alternatives have always been overshadowed by their more liquid and closely regulated traditional counterparts. However, alternative asset classes have been emerging out of this shadow recently. Comprising only 6% of the global investable market 15 years ago, they are expected to grow their share to 24% by 2025.

Alternative fund managers all over the world are projecting that 2024 will be another year of strong growth, with around 85% expecting an increase in capital raising. Alternative investment has thus not yet peaked, which might make this the perfect time to jump on board.

What drives the growth?

Investor faith in alternatives is reasonably driven by their recent performance. Over the last three years, Blackstone Alternative Asset Management (BAAM) outperformed the traditional 60% stocks/40% bonds portfolio by almost 12%.

These results align with historical trends and encourage investors already looking for ways to outride the stormy markets. According to J.P. Morgan, the main investor concerns in 2024 will be those where alternative assets are known to add support – diversification, hedging inflation, and alpha.

Increased retail investor access to alternative investments is another growth driver. The younger generations are the leading force in this regard. While older generations already have the financial cushion to patiently wait for long-term returns, Gen Z and Millennials seek to improve their current economic situation. That, together with their comfort with new technology and financial instruments like cryptocurrency, makes young investors seek access to alternatives.

For these generations, social media plays a crucial role in investing. It influences them to choose riskier but more rewarding alternatives and serves as an alternative source of knowledge about investing.

Nevertheless, alternative investments are still dominated by the ultra-wealthy, who tend to have alternative assets making up 50% of their portfolios. Given their high stakes in alternative investments, one can be sure that these investors go far beyond social media when sourcing investment intelligence.

Alternative data in investment

Thinking about investing, many of us imagine investors researching stocks and bonds by looking at SEC filings, press releases, and financial statements. This is the traditional picture.

The alternative picture is bigger. Both alternative assets and alternative data have more types than their traditional counterparts. All data types outside the aforementioned official data sources are considered alternative data.

Often, alternative assets and data are also larger in volume. Real estate alone is the world’s largest asset class. Online real estate listings provide an extensive data source for meaningful insights into this investment vehicle. One can scrape these listings for price, location, description, and other crucial data. Another alternative data source, geolocation, provides information on movement patterns around the property. Thus, it is especially handy for investing in commercial real estate that depends on the number of potential customers passing by.
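As a small illustration of the idea – the HTML structure, class name, and regex below are hypothetical, and production scrapers use proper HTML parsers and must respect a site’s terms of service:

```typescript
// A minimal sketch of extracting one data point (the asking price) from
// a scraped real estate listing. Everything about the markup here is
// invented for illustration.

function extractPrice(listingHtml: string): number | null {
  const match = listingHtml.match(/class="price"[^>]*>\s*[£$€]?\s*([\d,]+)/);
  return match ? Number(match[1].replace(/,/g, "")) : null;
}

const sample = '<div class="listing"><span class="price">£425,000</span></div>';
extractPrice(sample); // 425000
```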

Similar ways of utilizing alternative data sources for investment intelligence are applicable to all types of alternative assets. For example, private equity investors can benefit from online job postings data and firmographics. Most importantly, large-scale alternative data collection is paramount for investors with diverse alternative asset portfolios.

Data opens the doors to diversification

Despite the clear advantages of diversification, not all investors and fund managers feel comfortable utilizing alternative assets for it. Some level of insecurity is understandable. To effectively use the versatility of alternative investments, one needs to understand many relatively unfamiliar markets.

Other major concerns related to such assets include higher fees and a tough time getting out of the investment once it is made. Higher initial investment and relative illiquidity are also among the main factors that historically made alternative assets available mostly, if not only, for the rich.

These concerns make access to alternative data a necessity. Only with versatile, accurate, and up-to-date information can investors securely invest in illiquid and costly alternatives. Many ultra-wealthy investors seem to agree since alternative data provider revenue is set to surpass that of traditional data providers before this decade ends.

Thus, access to premium data collection solutions is as important as greater financial capital in enabling the rich to dedicate a significant share of their portfolios to alternatives. The ultra-wealthy could just as well continue making their money from traditional investments. It is the certainty of having the best information available that encourages them to look for diversification in often riskier and less regulated investments when stocks and bonds are underperforming.

However, although cutting-edge data-gathering technology might be more available to high-net-worth and institutional investors, alternative data can also advance the general democratization of investment. Retail investors can benefit from simple web scraping tools that allow a more systematic approach to investment research than adhering to sporadic advice from podcasts and social media posts.

Summing up

Alternative assets are generally only loosely correlated with traditional markets. Thus, in uncertain times such as these, alternative investments can provide a level of security that increases the portfolio’s health.

While still less familiar than mainstream investment, these assets are better understood and monitored with the help of alternative data sources. Despite being dubbed alternatives, many of these sources are, in fact, publicly available. One might consider this crucial investment intelligence a public secret of the rich.

As data extraction technology evolves, increasingly better solutions will be available to all investors, forcing us to reconsider whether it is time to rebrand alternatives as the universal vehicle for financial diversification.
