
Technology

During COP26, Facebook served ads with climate falsehoods, skepticism

Source: Reuters

Nov 18 (Reuters) – Facebook advertisers promoted false and misleading claims about climate change on the platform in recent weeks, just as the COP26 conference was getting under way.

Days after Facebook’s vice president of global affairs, Nick Clegg, touted the company’s efforts to combat climate misinformation in a blog as the Glasgow summit began, conservative media network Newsmax ran an ad on Facebook (FB.O) that called man-made global warming a “hoax.”

The ad, which had multiple versions, garnered more than 200,000 views. In another, conservative commentator Candace Owens said, “apparently we’re just supposed to trust our new authoritarian government” on climate science, while a U.S. libertarian think-tank ran an ad on how “modern doomsayers” had been wrongly predicting climate crises for decades.

Newsmax, Owens and the Daily Wire, which paid for the ad from Owens’s page, did not respond to requests for comment.

Facebook, which recently changed its name to Meta, does not have a specific policy on climate misinformation in ads or unpaid posts. Alphabet’s (GOOGL.O) Google said last month it would no longer allow ads that contradict scientific consensus on climate change on YouTube and its other services, though it would allow content that discusses false claims.

Facebook generally does not remove misinformation in posts unless it determines they pose imminent real-world harm, as it did for falsehoods around COVID-19. The company says it demotes posts ranked as false by its third-party fact-checkers (of which Reuters is one) and prohibits ads with these debunked claims. It says advertisers that repeatedly post false information may face restrictions on their ability to advertise on Facebook. It exempts politicians’ ads from fact-checks.

Asked about ads pushing climate misinformation, a company spokesperson said in a statement: “While ads like these run across many platforms, Facebook offers an extra layer of transparency by requiring them to be available to the public in our Ad Library for up to seven years after publication.”

UK-based think-tank InfluenceMap, which identified misleading Facebook ads run by several media outlets and think-tanks around COP26, also found that fossil fuel companies and lobbying groups spent $574,000 on political and social-issue Facebook ads during the summit. These ads drew more than 22 million impressions and included content promoting the companies’ environmental efforts in what InfluenceMap described as “greenwashing.”

One ad paid for by the American Petroleum Institute panned over a natural landscape as it touted its efforts to tackle climate change, while BP America ran an ad detailing its support for climate-friendly policies in neon green writing.

“Our social media posts represent a small fraction compared to the robust investments our companies make every day,” the API said in a statement, saying the natural gas and oil industry was committed to lowering emissions. BP said in a statement that it was “actively advocating for policies that support net zero, including carbon pricing, through a range of transparent channels, including social media advertising.”

Facebook has started adding informational labels to posts about climate change to direct users to its Climate Science Center, a new hub with facts and quizzes which it says is visited by more than 100,000 people a day.

Asked in an interview aired this week at the Reuters Responsible Business USA 2021 event where he thought Facebook still fell short on climate issues, Chief Technology Officer Mike Schroepfer said, “Obviously, there’s been concern about people sharing misinformation about climate on Facebook.”

“I’m not going to say we have it right at any moment in time,” he said. “We continually reevaluate what the state of the world is and what is our role, which starts with trying to allow people free expression, and then intervening when there are harms happening that we can prevent.”

He did not directly answer why Facebook had not banned all climate misinformation ads but said it “didn’t want people to profit over misinformation.”

EMPLOYEES QUESTION POLICY

The company’s approaches to climate misinformation and skepticism have caused employee debate. Discussions on its internal message board show staff sparring over how it should handle climate misinformation and flagging instances of it on the platform, such as in a January post where an employee said they found “prominent results of apparent misinformation” when they searched for climate change in its video ‘Watch’ section.

The documents were among a cache of disclosures made to the U.S. Securities and Exchange Commission and Congress by whistleblower Frances Haugen, a former Facebook product manager who left in May. Reuters was among a group of news organizations able to view the documents.

In the comments on an April post highlighting Facebook’s commitment to reducing its own environmental impact, including by reaching net zero emissions for its global operations last year, one staff member asked if the company could start classifying and removing climate misinformation and hoaxes from its platforms.

Two external researchers working with Facebook on its climate change efforts told Reuters they would like to see the company approach climate misinformation with the same proactiveness it has for COVID-19, which Facebook cracked down on during the pandemic.

“It does need to be addressed with the same level of urgency,” said John Cook, a postdoctoral research fellow at the Climate Change Communication Research Hub at Monash University who is advising Facebook on its climate misinformation work. “It is arguably more dangerous.”

Reporting by Elizabeth Culliford; Editing by Kenneth Li and Nick Zieminski

Our Standards: The Thomson Reuters Trust Principles.


Business

Driving Business Transformation Through AI Adoption – A Roadmap for 2024

Author: Edward Funnekotter, Chief Architect and AI Officer at Solace

Artificial intelligence (AI) can fundamentally reshape business operations across industries, from the development of new products and services to the establishment of competitive advantages. However, each organisation is unique, and navigating the complexities of AI while applying the technology efficiently and effectively can be a challenge.

To unlock the transformational potential of AI in 2024 and integrate it into business operations in a seamless and productive way, organisations should seek to follow these five essential steps:

  • Prioritise Data Quality and Quantity

The usefulness of AI models is directly correlated with the quantity and quality of the data used to train them, necessitating effective integration solutions and strong data governance practices. Organisations should implement tools that provide a wealth of clean, accessible, high-quality data to power quality AI.

Equally, AI systems cannot be effective if an organisation has data silos. These impede the ability of AI to digest meaningful data and provide the insights needed to drive business transformation. Breaking down data silos needs to be a business priority, with investment in effective data management and the application of effective data integration solutions.
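The data-quality checks described above can be automated before any training data reaches a model. The sketch below is a minimal illustration, not a specific product; the customer fields and the report format are hypothetical examples.

```python
# A minimal data-quality gate: count missing required fields and exact
# duplicate records before data is fed into an AI pipeline.

def quality_report(records, required_fields):
    """Summarise missing fields and duplicate records in a dataset."""
    missing = 0
    seen, duplicates = set(), 0
    for rec in records:
        missing += sum(1 for f in required_fields if rec.get(f) in (None, ""))
        key = tuple(sorted(rec.items()))  # order-independent record fingerprint
        duplicates += key in seen
        seen.add(key)
    return {"records": len(records), "missing_fields": missing, "duplicates": duplicates}

# Hypothetical customer data pulled from two siloed systems.
customers = [
    {"id": 1, "email": "a@example.com", "region": "EU"},
    {"id": 2, "email": "", "region": "US"},               # missing email
    {"id": 1, "email": "a@example.com", "region": "EU"},  # duplicate row
]
report = quality_report(customers, required_fields=["id", "email", "region"])
print(report)  # {'records': 3, 'missing_fields': 1, 'duplicates': 1}
```

In practice such a gate would sit inside a data integration pipeline and block or flag batches that fall below agreed thresholds.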

  • Develop your own unique AI platform

The development of AI applications can be a laborious process, limiting the value businesses gain from them in the immediate term. It can be expedited by platform engineering, which modernises enterprise software delivery to facilitate digital transformation, optimising the developer experience and accelerating product teams’ ability to deliver customer value. Platform engineering offers developers pre-configured tools, pre-built components and automated infrastructure management, freeing them up to tackle their main objective: building innovative AI solutions faster.

While the end goal is AI applications that streamline infrastructure, automate tasks and provide pre-built components for developers, it is only achievable if the ability to design and develop them exists in the first place. Gartner’s prediction that platform engineering will come of age in 2024 is a particularly promising signal.

  • Put business objectives at the heart of AI adoption – can AI deliver?

Any significant business change needs to be managed strategically, with a clear indication of the aims and benefits it will bring. While a degree of experimentation is always necessary to drive business growth, it shouldn’t come at the expense of operational efficiency.

Before onboarding AI technologies, look internally at the key challenges your business is facing and ask, “how can AI help to address this?” You may wish to enhance the customer experience, streamline internal processes or use AI systems to optimise internal decision-making. Be sure the application of AI is going to help, not hinder, you on this journey.

Also remember that AI remains in its infancy and cannot be relied upon as a silver bullet for all operational challenges. Aim to build a sufficient base knowledge of AI capabilities today, and ensure these are contextualised within your own business requirements. This ensures AI investments aren’t made prematurely, incurring unnecessary cost.

  • Don’t be limited by legacy systems

Owing to the complex mix of legacy and/or siloed systems that organisations employ, they may be restricted in their ability to use real-time and AI-driven operations to drive business value. For example, IDC found that only 12% of organisations connect customer data across departments.

Amidst the ‘AI data rush’ there will be a greater need for event-driven integration; however, only an enterprise architecture pattern will ensure new and legacy systems are able to work in tandem. Without this, organisations will be prevented from offering seamless, real-time digital experiences that link events across departments, locations, on-premises systems and IoT devices, whether in a cloud or multi-cloud environment.

  • Leverage real-time technology

Keeping up with the real-time demands of AI can pose a challenge for the legacy data architectures many organisations use. Event mesh technology – an approach to distributed networks that enables real-time data sharing and processing – is a proven way of reducing these issues. By applying event-driven architecture (EDA), organisations can unlock the potential of real-time AI, with informed decision-making and automated actions driven by relevant insights.

By applying AI in this way, businesses can offer stronger, more personalised experiences – including the delivery of specialised offers, real-time recommendations and tailored support based on customer requirements. One example is predictive maintenance, in which AI analyses and anticipates future problems or business-critical failures before they affect operations, and dedicates the correct resources to fix the issue immediately. By implementing EDA as a ‘central nervous system’ for your data, not only is real-time AI possible, but adding new AI agents becomes significantly easier.
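The event-driven pattern described above can be sketched in a few lines: producers publish events to topics, and subscribers react to them in real time. This is a toy in-process broker, not a real event mesh product; the topic name, machine IDs and temperature threshold are illustrative assumptions.

```python
# Minimal event-driven architecture (EDA) sketch: a broker routes events
# from publishers to topic subscribers, here a toy predictive-maintenance check.

from collections import defaultdict

class EventBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered on the topic.
        for handler in self._subscribers[topic]:
            handler(event)

alerts = []

def maintenance_check(event):
    # Flag machines whose temperature suggests an imminent failure.
    if event["temperature_c"] > 90:
        alerts.append(f"schedule maintenance for {event['machine_id']}")

broker = EventBroker()
broker.subscribe("sensor.readings", maintenance_check)

broker.publish("sensor.readings", {"machine_id": "M-17", "temperature_c": 95})
broker.publish("sensor.readings", {"machine_id": "M-04", "temperature_c": 60})

print(alerts)  # ['schedule maintenance for M-17']
```

In a production event mesh the broker would be a distributed service spanning cloud and on-premises systems, and new AI agents would simply subscribe to existing topics – which is why adding them becomes easier.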

Ultimately, AI adoption needs to be strategic, avoiding the chasing of trends and focusing instead on how and where the technology can deliver true business value. By following the steps above, organisations can ensure they are leveraging the full transformative benefit of AI and driving business efficiency and growth in a data-driven era.

AI can be a highly effective tool. However, its success depends on organisations applying it strategically, to meet clearly defined and specific business goals.


Business

Securing The Future of Cybersecurity

Source: Finance Derivative

Dominik Samociuk, PhD, Head of Security at Future Processing

When more than 6 million records of ancestry and genetic data were breached from 23andMe’s secure database, companies were forced to confront and evaluate their own cybersecurity practices and data management. With approximately 2.39 million instances of cybercrime experienced across UK businesses last year, the time to act is now.

If even the most secure and unsuspecting businesses aren’t protected, then every business should consider itself, and operate as, a target. As we roll into 2024, it is unlikely there will be a reduction in cases like these. Instead, expect an uptick in the sophistication of the methods hackers employ to obtain sensitive data – a commodity whose value continues to rise.

In the next two years, it is predicted that the cost of cyber damage will grow by 15% yearly, reaching a peak of $10.5 trillion in 2025. We won’t be saying goodbye to ransomware in 2024, but rather saying hello to an evolved, automated, adaptable, and more intelligent form of it. But what else is expected to take the security industry by storm in the near future?

Offensive vs. Defensive Use of AI in Cybersecurity

Cybersecurity is a symbiotic cycle for companies: from attack to defence, an organisation’s security experts must constantly guard against malicious attacks. In 2024, there will be a rise in the use of generative AI, with an alarming 70% of workers who use ChatGPT not making their employers aware of it – opening the door to significant security issues, especially for outsourced tasks like coding. And while its uses are groundbreaking, generative AI’s misuses, especially in cybersecurity, are cause for concern.

Cybersecurity breaches will come from more sophisticated sources this year. As artificial intelligence (AI) continues to surpass development expectations, systems that can analyse and replicate humans are now being employed. With platforms like LOVO AI and Deepgram making their way into mainstream use – often for hoax or ruse purposes – these tools are also being used by cybercriminals to trick unsuspecting victims into disclosing sensitive network information about their business or place of work.

Cybercriminals target the weakest part of any security operation – the people – by encouraging them to divulge personal and sensitive information that might be used to breach internal cybersecurity. Further, generative AI platforms like ChatGPT can be used to automate the production of malicious code introduced internally or externally to the network. On the other hand, AI is being used to strengthen cybersecurity in unlikely ways. Emulating a cinematic cyber-future, AI can be used to detect malware and abnormal system or user activity and alert human operators. It can then equip staff with the tools and resources needed to respond in these instances.
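The defensive use described above – flagging abnormal user activity for human review – can be illustrated with a simple statistical baseline. Real systems use trained models over many signals; this z-score check on a single hypothetical metric (daily logins per account) just shows the idea.

```python
# Toy anomaly detector: flag behaviour far outside an account's baseline.

from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

# Hypothetical baseline: daily login counts for one account.
logins_per_day = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]

print(is_anomalous(logins_per_day, 5))   # False: within the normal range
print(is_anomalous(logins_per_day, 40))  # True: possible credential abuse
```

An alert like the second case would be routed to a human operator rather than acted on automatically, matching the human-in-the-loop role the article describes.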

Like any revolutionary platform, AI produces hazards and opportunities for misuse and exploitation. Seeing a rise in alarming cases of abuse, cybersecurity experts must consider the effect these might have before moving forward with an adaptable strategy for the year.

Data Privacy, Passkeys, and Targeting Small Businesses

Cybercriminals using their expertise to target small businesses is expected to increase in 2024. By nature, small businesses are unlikely to operate at a level able to employ the resources needed to combat consistent cybersecurity threats that larger organisations face on a daily basis. Therefore, with areas of cybersecurity unaccounted for, cybercriminals are likely to increasingly exploit vulnerabilities within small business networks.

They may also exploit the embarrassment felt by small business owners on occasions like these. If their data is being held for ransom, a small business owner, without the legal resources needed to fight (or tidy up) a data breach is more likely to give in to the demands of an attacker to save face, often setting them back thousands of pounds. Regular custom, loyalty, trust, and reputation makes or breaks a small business. Even the smallest data breaches can, in one fell swoop, lay waste to all of these.

Unlikely to have dedicated cybersecurity teams in place, a small business will often employ less secure and inexpensive data management solutions – making it a prime target. Contrary to expectations, in 2024 we will not say goodbye to ransomware. In fact, these tools are likely to become more common against larger, well-insured companies due to the gold rush on data harvesting.

Additionally, changing passwords will become a thing of the past. With companies like Apple beta-testing passkeys in consumer devices and even Google describing them as ‘the beginning of the end of the password’, businesses will no doubt begin to adopt this more secure technology, stored on local devices, for any systems that hold sensitive data. Using passwordless forms of identification mitigates issues associated with cyber criminals’ common method of exploiting personal information for unauthorised access.

Generative AI’s Impact on Information Warfare and Elections

In 2024, more than sixty countries will see an election take place, and as political tensions escalate in many of them, it is more important than ever to safeguard cybersecurity to ensure a tighter grip on fact-checked information and official government communications. It is likely that we will see a steep rise in generative AI-supported propaganda on social media.

In 2016, amidst the heat of a combative and unfriendly US presidential election, Republican candidate Donald Trump popularised the term ‘fake news’, which eight years later continues to plague realms of the internet in relation to ongoing global events. It was estimated that 25% of election-related tweets sampled during this time contained links to intentionally misleading or false news stories in an attempt to further a viewpoint’s popularity. Online trust comes hand-in-hand with security; without one, the other cannot exist.

While in 2016 the contemporary use of AI was extremely limited by today’s standards, of striking concern now is the access members of the public have to platforms where, at will, they can legitimise a controversial viewpoint or ‘fake news’ by generating video or audio clips of political figures, or quotes and news articles, with a simple request. The ability to generate convincing text and media can significantly influence public opinion and sway electoral processes, destabilising a country’s internal and external cybersecurity.

Of greatest concern is the unsuspecting public’s inability to identify news generated by AI. Cornell University found that people judged false news articles generated by AI to be credible over two-thirds of the time. Further studies found that humans were unable to identify articles written by ChatGPT beyond a level of random chance. As generative AI’s sophistication increases, it will become ever more difficult to identify what information is genuine and to safeguard online security. This is critical, as generative AI can now be used as ammunition in information warfare through the spread of hateful, controversial and false propaganda during election periods.

In conclusion, the near future, like 2023, will see a great shift in focus toward internal security. A network is at its most vulnerable when the people who run it aren’t aligned in their strategies and values. Advanced technologies, like AI and ransomware, will continue to be a rising issue for the industry, and not only destabilise networks externally, but internally, too, as employees are unaware of the effects using such platforms might have.


Business

Developing a personalised roadmap for implementing best practices in AI governance

Colin Redbond, SVP Product Strategy, SS&C Blue Prism

Whether it’s customer chatbots or digital workers improving workflow, daily use of artificial intelligence (AI) and intelligent automation (IA) is under scrutiny.

With automation rising, nations are grappling with the ethical, legal, and societal implications and crafting AI governance laws, requiring business leaders to also prepare for these far-reaching changes.

The EU’s proposed AI Act – considered the world’s first comprehensive law safeguarding the rights of users – is expected to regulate the ever-evolving needs of AI application developers in the EU and beyond.

Transparency and authenticity significantly influence brand perception, particularly among Gen Z – 32% of the global population. With high expectations, they only support brands aligned with their values.

Banks, auditors, insurers and their supply chains – adept at meeting legislation like Europe’s GDPR and Sarbanes-Oxley in the U.S. – will need a similar approach with AI. Governance will influence everything from governments, robot manufacturing and back-office apps to healthcare teams using AI for data extraction.

The cost of non-compliance could be substantial, with the EU suggesting fines of €30 million or six percent of global annual turnover, whichever is higher, so identifying AI integration points, workflows and risks is vital.

SS&C Blue Prism, through industry collaboration, offers guidance on governance roadmaps, ensuring businesses are well-prepared to meet evolving requirements while leveraging AI effectively.

Need for immediate action
The legislation also scrutinises automations, requiring compliance as organisations innovate with automated tasks, BPM data analysis, and business-driven automations. IA, with its auditable digital trail, becomes an ideal vehicle, providing transparent insights into actions and decisions and safeguarding record-keeping and documentation – crucial across the AI lifecycle.

Establishing and maintaining AI governance also fosters ethical and transparent practices from executives to employees, ensuring compliance, security, and alignment with organisational values, including:

  • Top-down: Executive sponsorship ensures governance, data quality, security, and management, with accountability. An audit committee oversees data control, supported by a chief data officer.
  • Bottom-up: Individual teams take responsibility for the data security, modelling, and tasks they manage to ensure standardisation and scalability.
  • Modelling: Effective governance continuously monitors and updates performance to align with organisational goals, prioritising security in granting access.
  • Transparency: Tracking AI performance ensures transparency and aids in risk management, involving stakeholders from across the business.

Frameworks for AI governance
Though standards are evolving, disregarding governance risks data leakage, fraud, and privacy law breaches, so compliance and standardisation must be prioritised.

Governments, companies, and academia are collaborating to establish responsible guidelines and frameworks. There are several real-world examples of AI governance that – while they differ in approach and scope – address the implications of artificial intelligence. A few notable ones follow:

The EU’s GDPR – not exclusively focused on AI – includes data protection and privacy provisions related to AI systems. Additionally, the Partnership on AI and Montreal Declaration for Responsible AI – developed at the International Joint Conference on Artificial Intelligence – focus on research, best practices, and open dialogue in AI development.

Tech firms like Google, Microsoft, IBM, and Amazon have created AI ethics guidelines, emphasising social good, harm avoidance, and fairness, while some countries have developed national AI strategies including governance.

Canada’s “Pan-Canadian AI Strategy” prioritises responsible AI development for societal benefit, focusing on ethics, transparency, and accountability. Establishing governance in your organisation involves processes, policies, and practices for AI’s responsible development, deployment, and use.

Reach governance greatness in 14 steps
Governments and companies using AI must incorporate risk and bias checks in mandatory system audits. Alongside data security and forecasting, organisations can adopt the following strategic approaches to establish AI governance.

  • Development guidelines: Establish a regulatory framework and best practices for AI model development, including data sources, training, and evaluation techniques. Craft guidelines based on predictions, risks, and use cases.
  • Data management: Ensure that the data used to train and fine-tune AI models is accurate and compliant with privacy and regulatory requirements.
  • Bias mitigation: Incorporate ways to identify and address bias in AI models to ensure fair and equitable outcomes across different demographic groups.
  • Transparency: Require AI models to provide explanations for their decisions, especially in highly regulated sectors such as healthcare, finance and legal systems.
  • Model validation and testing: Conduct thorough validation and testing of AI models to ensure they perform as intended and meet quality benchmarks.
  • Monitoring: Continuously monitor AI model performance metrics, updating to meet changing needs and safety regulations. Due to generative AI’s novelty, maintain human oversight to validate quality and performance.
  • Version control: Keep track of the different versions of your AI models, along with their associated training data, configurations, and performance metrics so you can reproduce or scale them as needed.
  • Risk management: Implement security practices to protect AI models from cybersecurity attacks, data breaches and other security risks.
  • Documentation: Maintain documentation of the entire AI lifecycle, including data sources, testing and training, hyperparameters, and evaluation metrics.
  • Training and Awareness: Provide training to employees about AI ethics, responsible AI practices, and the potential societal impacts of AI technologies. Raise awareness about the importance of AI governance across the organisation.
  • Governance board: Establish a governance board or team overseeing AI model development, deployment and compliance with established guidelines that fit your goals. Crucially, involve all levels of the workforce — from leadership to employees working with AI — to ensure comprehensive and inclusive input.
  • Regular auditing: Conduct audits to assess AI model performance, algorithm regulation compliance and ethical adherence.
  • User feedback: Provide mechanisms for users and stakeholders to provide feedback on AI model behaviour and establish accountability measures in case of model errors or negative impacts.
  • Continuous improvement: Incorporate lessons learned from deploying AI models into the governance process to continuously improve the development and deployment practices.
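The version control and documentation steps above can be made concrete with a simple model registry that records each version alongside its training data, configuration and metrics, producing an auditable trail. This is a minimal sketch, not any specific product’s schema; the model name, file name and metric fields are hypothetical.

```python
# Minimal AI model registry: record each version with its provenance so
# results can be reproduced and reviewed by a governance board.

import json
from datetime import datetime, timezone

class ModelRegistry:
    def __init__(self):
        self._versions = []

    def register(self, name, data_source, config, metrics):
        """Store one immutable entry per model version and return its number."""
        entry = {
            "name": name,
            "version": len(self._versions) + 1,
            "registered_at": datetime.now(timezone.utc).isoformat(),
            "data_source": data_source,
            "config": config,
            "metrics": metrics,
        }
        self._versions.append(entry)
        return entry["version"]

    def audit_log(self):
        # A serialisable trail for regular audits and documentation reviews.
        return json.dumps(self._versions, indent=2)

registry = ModelRegistry()
v1 = registry.register(
    name="claims-triage",
    data_source="claims_2023_q4.csv",
    config={"learning_rate": 0.01, "epochs": 10},
    metrics={"accuracy": 0.91},
)
print(v1)  # 1
```

A real deployment would persist this log in a database and feed it into the regular audits and governance-board reviews described above.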

AI governance demands continuous commitment from leadership, alignment with organisational values, and adaptability to technological and societal changes. A well-planned governance strategy is essential for organisations using automation, ensuring compliance.

Establishing safety regulations and governance policies is vital to maintaining the security, accuracy, and compliance of your data. These steps can help ensure your organisation develops and deploys AI responsibly and ethically.


Copyright © 2021 Futures Parity.