Japan’s business-first AI strategy aims to attract global investment, raising creative industry concerns

Published 12 Aug 2024


Japan is taking a business-friendly approach to artificial intelligence (AI) regulation in an effort to attract global investment, stirring criticism and fear in the creative industry.

The Japanese government’s AI Strategic Council is set to lead the country’s artificial intelligence policy discussions, favoring industry-led oversight over strict government regulation. By courting AI investors, the government hopes to reverse the country’s fading technological edge.

This strategy contrasts sharply with the more stringent regulatory frameworks being implemented in other regions, particularly the European Union (EU), which recently achieved a milestone with the enforcement of the world’s first AI law.

“Japan is not thinking of implementing strict regulation, and there will be as few regulations as possible,” said Masaaki Taira, chairman of the ruling Liberal Democratic Party’s (LDP) project team on AI. “We are asked that question from overseas tech companies quite often.”

The approach appears to be paying off for the Japanese government and economy. OpenAI, the company behind ChatGPT, recently opened its first Asian office in Tokyo, while Microsoft, Google, and Amazon have all committed to multi-year investments in the country. Amazon alone has pledged 2.26 trillion yen ($14 billion) to Japan’s cloud service sector by 2027.

The Creative Industry’s Unease

However, this hands-off approach is not without its critics. Japan has long been a driving force behind creative industries such as manga and anime, so it is unsurprising that people who make their living from art, in any form, are protesting the possibility of their work being appropriated by AI.

AI technology has shown itself capable of mimicking popular artists’ styles. A recent survey conducted by Arts Workers Japan revealed that 92% of illustrators fear their work is being used to train AI tools without their consent. Additionally, 60% of respondents expressed concern about potential job losses due to AI advancements.

In response to growing concerns, the Agency for Cultural Affairs has taken steps to address the situation. A subcommittee has reviewed AI copyright issues, drawing nearly 25,000 public comments. The agency has also offered free legal consultations to creators and is conducting outreach to better understand the specific problems artists face.

Despite these efforts, many in the creative community feel the government is not moving quickly enough to protect their interests. They argue that while tech companies rapidly develop AI models trained on vast amounts of data, artists are left vulnerable to potential exploitation.

Calls for Greater Oversight and Transparency

The Japanese government remains committed to its pro-innovation stance. It hopes that by creating a favorable environment for AI development, Japan can address its “digital deficit” and reduce its reliance on foreign tech giants for advanced AI services.

“Japan has no OpenAI or Google or Amazon, and it is also facing population decline,” said Taira. He then argued that the country’s shortage of workers makes it more amenable to widespread AI implementation.

Critics of Japan’s current approach argue that the government should demand greater transparency from tech companies about the data used to train their AI models. They also call for clear guidelines on compensating artists whose works are used in AI training.

On the other hand, the government is not entirely ignoring the need for oversight. The AI Strategic Council is considering “soft regulation” measures that could be implemented quickly, aiming to prevent potential public backlash that might lead to calls for more stringent controls.