AI has moved from the stuff of science fiction to a fixture in our day-to-day lives. To better understand how this technology is shaping the modern world, Robotics & Automation Magazine takes a deep dive into the major regulatory and technological advancements in artificial intelligence in recent months…
Since the arrival of publicly available generative AI in the form of ChatGPT in November 2022, the technology’s capabilities and visibility have both increased dramatically. A report from multinational professional services company EY, released in September 2023, presented an analysis of the trends in global attempts to regulate AI. The findings were based on the approaches of eight jurisdictions (Canada, China, the EU, Japan, Korea, Singapore, the UK and the USA) to implementing AI-centric policies. According to the report, a recurring trend was the adoption of a preventative approach to policy, with a clear overarching focus on respect for human rights, sustainability, transparency and strong risk management.
Notable policy changes made in the past 12 months to reflect this include new international covenants established to prevent the use of AI for harmful practices. However, many of these key regulatory, ethical and technological changes have been implemented reactively, as AI – especially generative AI systems – has advanced in scale and capability with unprecedented rapidity. By making such systems available to the general public, developers have significantly escalated the scale of risk associated with these technologies. These risks include the rise of scams and phishing, deepfakes and potential job losses, as well as more insidious threats stemming from reduced transparency and data privacy. Concerns have also emerged around the use of AI to inform predictive algorithms that map potential crimes in policing, or the use of AI-enhanced facial recognition technology, among many others.
However, alongside all this doom and gloom are stories of how automation is being used to save time and money and to reduce errors in public services. What’s more, sectors such as climate research and financial technology are harnessing AI’s ability to process, manage and model large quantities of data to predict market changes or map weather patterns. This, in turn, could support improvements in preventative action against climate change and natural disasters.
AI is no longer reserved for industry experts or those working in tech and has instead become an enduring focus of everyday headlines and public discourse. Since the technology became available to the wider public, Google searches for the phrase “artificial intelligence” have increased by 350% year-on-year in the UK, according to internal data from the company. It no longer seems plausible for organisations to avoid use or discussion of AI – but, considering its state of constant evolution, what are the key factors to consider for those looking to invest in AI? To help provide some clarity, Robotics & Automation Magazine has outlined some of the major developments concerning AI from the past 12 months…
Legislation
European Parliament votes in favour of ‘world’s first’ AI act
In June last year, the European Parliament, the European Union’s main legislative body, voted on a bill to regulate the use of AI in EU member states. Members of the European Parliament (MEPs) voted 499 in favour and 28 against, with 93 abstentions, ahead of talks with EU member states on the final draft of the legislation. If ratified by member states, the act could become law by 2025. One key focus of the legislation was the banning of harmful uses of AI systems.
UK-hosted summit leads to ‘world first’ AI safety pact
In November 2023, leading nations involved in the development of AI and its associated technologies convened in the UK as part of the first global summit on AI safety. Named the ‘Bletchley Declaration on AI safety’, the pact saw 28 countries from across the globe, including nations in Africa, the Middle East and Asia, as well as the EU, agree on ‘the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community’, according to the UK government. It is hoped this framework will enable international collaboration to support the mitigation of risks associated with rapidly developing AI.
European Commission to publish AI-powered robotics strategy
Last month [January 2024], the European Commission announced plans to publish an EU-wide strategy paper next year to ensure coordination across the continent in the uptake of robotics powered by AI. The strategy will address all aspects of AI development and robotics across the 27 member states of the EU and is intended to ensure that Europe remains an important player in the field. It will be linked to other relevant Commission plans, such as the AI in the workplace initiative and the AI Act.
Circulation of Taylor Swift deepfakes leads to US bill
The rapid spread of deepfake pornographic images of Taylor Swift via social media last month led to renewed calls for legislation criminalising the practice, prompting a bipartisan group of US senators to present a bill. Announced late last month, the bill would make the spread of nonconsensual, sexualised images generated by AI a criminal act. It would also entitle the subjects of nonconsensual, sexualised or nude imagery to pursue civil penalties against both those who produced the material and those who distributed it.
UK government publishes AI whitepaper
In March 2023, the UK Government announced a ‘pro-innovation’ approach to AI regulation through the publication of a whitepaper, which proposed regulating AI largely via existing laws and regulators. The document outlined cross-sectoral principles, such as safety, security, robustness, transparency, fairness, accountability, contestability and redress, for existing regulators to consider. The approach applied to the whole of the UK, although some policy areas were devolved.
Panel of experts working on international AI safety report revealed
Academics and leading industry figures have been announced as part of the expert panel working on a major new report on AI safety. The report was announced by British prime minister Rishi Sunak last November at the Bletchley Park AI summit. Some 32 experts have been named as part of the advisory panel, including representatives from countries in attendance at the summit, plus the EU and the UN.
Tech advancements
Google claims ‘major breakthrough’ as AI solves geometry problems
Last month, Google DeepMind said that its new AI system had made a ‘major breakthrough’ by solving geometry problems with the same skill as top-level high school students. Geometry can be challenging for AI systems due to a lack of available data to help with geometric problem-solving. Some 30 problems drawn from the International Mathematical Olympiad, a competition in which high-performing school students attempt to prove mathematical theorems, were presented to the system, which solved 25 of those assigned to it. Dubbed ‘AlphaGeometry’, the system was built and taught using a new methodology developed by Google: a language model that teaches itself by processing millions of theorems and proofs, combined with a system designed to identify branching points in similar problems.
UK
AI roles comprise almost 30% of UK tech job ads
Research from Thomson Reuters has revealed that more than a quarter (27%) of job adverts in the UK’s tech sector are for AI-centric roles. Generative AI systems like ChatGPT have led to heightened interest and investment in AI by tech firms, with the recruitment of new staff being one means of staying competitive amid the ‘AI boom’. The group analysed more than 6,000 job openings in the UK tech industry, with additional research revealing that 91% of tech executives in the US, UK and Canada now use generative AI or are preparing to do so.
Google report suggests AI could ‘grow UK economy by £400bn’
In July last year, tech giant Google published a report on the rise of AI, naming it the “most profound” technological shift of our lifetime and stating that it has the potential to boost the UK’s economy by £400bn by 2030. The report suggested that it could reverse the UK’s downturn in growth seen in recent years, and in fact enable an annual growth increase of 2.6%. Other predictions in the report were that, though some jobs “will be lost”, more will be created through widespread adoption of the technology and that upskilling workers was key to ensuring the benefits of AI are unlocked in the UK.
Twice as many AI companies in Britain as in any other European country, whitepaper shows
In March 2023, the UK government published a whitepaper on the AI industry, which showed that there were twice as many AI companies registered in Britain as in any other European country. Other headline findings of the report include that the sector employed more than 50,000 people and that it contributed £3.7bn to the economy in 2022.
Big tech
UN head denounces big tech’s ‘reckless’ pursuit of AI profits
At the World Economic Forum meeting in Davos, Switzerland, UN secretary general António Guterres denounced big technology companies’ ‘reckless’ pursuit of profits from the use of AI. He urged action against the risks posed by these organisations’ use of AI, warning that every perceived breakthrough in generative AI brings with it new and unexpected risks.
US trade watchdog launches inquiry into big tech AI deals
The United States trade regulator, the Federal Trade Commission (FTC), has launched an inquiry into recent deals and investments made by major multinational tech companies in generative AI. In a statement, the FTC ordered five companies to provide information on recent transactions: Alphabet (Google’s parent company), Amazon, Anthropic, Microsoft and OpenAI, the creator of ChatGPT. The central motivation of the inquiry is to determine the nature of investment in AI start-ups and how harmful these deals could be to market competition.
Major tech firms to challenge Nvidia by developing own AI chips
Chip manufacturer Nvidia, which currently accounts for between 70% and 90% of AI chip sales, has been challenged by investments from major technology companies looking to make AI chips in-house. Key players such as Amazon, Google, Meta and Microsoft have made substantial investments in chip development in response to global shortages and Nvidia’s market domination. So far, Google has spent between US$2bn and US$3bn (£1.5-2.3bn), while Amazon has committed US$200m. However, analysts believe most rivals are years away from catching up to the chip giant. Patrick Moorhead, CEO of Moor Insights & Strategy, said: “Nvidia, to its credit, started about 15 years ago working with universities to find novel things that you could do with GPUs, aside from gaming and visualisation. What Nvidia does is they help create markets and that puts competitors in a very tough situation out there, because by the time they’ve caught up, Nvidia is on to the next new thing.”
Industry voices
‘Godfather of AI’ quits Google to voice concerns about rapid development
Geoffrey Hinton, a British-Canadian computer scientist who is hailed as the ‘godfather of AI’ due to his work on artificial neural networks and deep learning, left his job at Google in May last year to speak openly about the dangers of rapidly advancing AI. Hinton had worked part-time for the tech giant since March 2013, when his company, DNNresearch Inc., was acquired. In an interview, Hinton said: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have…It is hard to see how you can prevent the bad actors from using it for bad things…The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
BCS open letter claims AI is ‘not an existential threat to humanity’
BCS, the Chartered Institute for IT, penned an open letter in July 2023 calling for AI to be recognised as ‘a transformational force for good not an existential threat to humanity’. It has since received more than 1,300 signatures from industry figures, with prominent signatories including CEO of Stemettes Dr Anne-Marie Imafidon, Professor Luciano Floridi from the University of Oxford and the University of Bologna and Dr Jacqui Taylor, CEO and co-founder of FlyingBinary and the UK smart city tsar.
Letter signed by Elon Musk urges pause on AI research
An open letter signed by Tesla CEO Elon Musk, calling for a pause in AI research, has amassed thousands of signatures. The aim of the letter was to establish a six-month pause on the development of increasingly powerful generative AI systems. Aside from Musk, signatories included cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak, as well as engineers from Amazon, DeepMind, Google, Meta and Microsoft.
Nobel Prize winner encourages youth to pursue creative careers amid rise of AI
Amid growing interest in and reliance on AI, especially generative AI systems like ChatGPT, Nobel Prize winner and London School of Economics professor of economics Christopher Pissarides has urged young people considering their career paths to place value on creative and other “empathetic” skills and vocations. The labour market economist foresees demand for skills outside of science, technology, engineering and mathematics (STEM), with creativity potentially thriving in a world dominated by AI. He added that certain IT jobs risk sowing their “own seeds of self-destruction” through workers developing AI at such a rapid pace that it will replace their own roles in the future.
This article first appeared in the March 2024 issue of Robotics & Automation Magazine. Read it in print.