
Why Are AI Tools Raising Concerns Over Transparency?

Back in March, two Google employees attempted to stop the company from launching an AI chatbot, saying it was far too prone to making inaccurate statements. Google disregarded these concerns, and a few weeks later released Bard.

It marked an aggressive move by a usually risk-averse company—spurred on by the race to control what many experts believe will be the defining innovation of this era of the tech industry: generative AI, the powerful new technology fuelling AI tools like Bard.

Google was already looking a little slow on the uptake in this race: ChatGPT, a chatbot developed by San Francisco startup OpenAI in partnership with Microsoft, had been released three months earlier and now boasts no fewer than 100m monthly users.

The rapid and immense success of ChatGPT made Microsoft's rival Google far more willing to take risks with its ethical guidelines, which had been honed over years of making sure the company's technology didn't harm society.

Fast-forward to March, and tensions were rising between the industry's naysayers and risk-takers. More than 1,000 leaders and researchers, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month pause in the development of AI, warning in a public letter that the technology poses 'profound risks to society and humanity'.

Is Google taking a risk by releasing technology it doesn't fully understand? Perhaps. But despite growing concerns surrounding the development of AI, some Google insiders say the company has already mitigated many of the problems by building sophisticated systems to filter out inaccurate information.

It’s also worth noting the context surrounding Bard: the chatbot was released after years of internal dissent at Google over whether the societal benefits of generative AI outweighed its risks. Moreover, even many AI enthusiasts don’t know that Google had already developed a similar chatbot, Meena, back in 2020, but ultimately deemed it too risky to release. That decision lends some credence to Google’s stated commitment to doing right by society.


Following the release of ChatGPT, research executives at Google were told to fast-track AI projects, even though many felt this could compromise the company’s longstanding safety standards. Some worried that chatbots might produce false information, enable violence through mass online harassment, or even damage users who became emotionally dependent on them. Nonetheless, the company went ahead and released Bard as what it called a ‘limited experiment’, insisting it was safe thanks to the disclaimers surrounding it and the training required to use it.

Microsoft CEO Satya Nadella made a bet on generative AI back in 2019, when his company invested $1bn in OpenAI, its partner in developing ChatGPT. In the meantime, Microsoft’s Office of Responsible AI set about writing policies for the development of the technology, but many insiders feel these weren’t consistently applied or followed. Furthermore, despite Microsoft’s own principle of ‘transparency’, some ethics experts working on ChatGPT felt they were kept in the dark regarding what data OpenAI was using to develop its systems. And when Google announced its plan to integrate AI into its search engine following the release of Bard, insiders at that company came out criticising the idea, citing chatbots’ tendency to disseminate misinformation.

Last autumn, Microsoft began breaking up Ethics and Society, one of its biggest technology ethics teams, which had trained and consulted company product leaders on designing and building responsibly. The few remaining members joined the daily meetings of the Bing team, who were racing to build ChatGPT’s technology into the search engine, and were alarmed at the possibility of users becoming dependent on the tool and being misled by its inaccurate answers. These concerns were compounded by ChatGPT’s first-person voice and use of emojis, which the team worried would lead many users to believe they were interacting with a human.

What remained of Ethics and Society was laid off outright in March. And as Google and Microsoft continue releasing new products week on week, and vie to outdo one another with ever more frantic and ambitious plans to realise their respective leaders’ AI visions, Bard and ChatGPT are clearly only the beginning of an AI race whose middle and end are simply unknowable right now.

About Pixated

Pixated is a performance marketing and web design agency with a proven track record of scaling up some of the most exciting brands.

With a team of experienced in-house specialists on hand, Pixated is the go-to agency for crafting high-converting campaigns geared towards generating the best ROI for ambitious brands around the world, freeing them up to focus on what they do best.