AI regulation in global focus as EU approaches a deal


The surge in generative AI development has prompted governments around the world to rush toward regulating the emerging technology. The push follows the European Union’s efforts to implement the world’s first comprehensive set of rules for artificial intelligence.

The 27-nation bloc’s Artificial Intelligence (AI) Act is regarded as a pioneering set of regulations. After much delay, reports indicate that negotiators agreed on Dec. 7 to a set of controls for generative AI tools such as OpenAI Inc.’s ChatGPT and Google’s Bard.

Concerns about potential misuse of the technology have also prompted the U.S., the U.K., China and international coalitions such as the Group of Seven (G7) to accelerate work on regulating the swiftly advancing field.

In June, the Australian government announced an eight-week consultation on whether any “high-risk” artificial intelligence tools should be banned. The consultation, which ran until July 26, sought input on strategies to support the “safe and responsible use of AI,” including voluntary measures such as ethical frameworks, the need for specific regulation, or a combination of both approaches.

Meanwhile, China has introduced temporary measures, effective Aug. 15, to oversee the generative AI industry, requiring service providers to undergo security assessments and obtain clearance before bringing AI products to the mass market. After receiving government approval, four Chinese technology companies, including Baidu Inc. and SenseTime Group, launched their AI chatbots to the public on Aug. 31.


According to a report, France’s privacy watchdog, CNIL, said in April that it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules, despite warnings from civil rights groups.

On Nov. 22, the Italian Data Protection Authority, the country’s privacy regulator, announced a “fact-finding” investigation into how data is gathered to train AI algorithms. The inquiry seeks to verify that public and private websites have adequate security measures in place to prevent third parties from “web scraping” personal data for AI training.

The United States, the United Kingdom, Australia and 15 other countries recently released global guidelines to help protect AI models from tampering, urging companies to make their models “secure by design.”
