
Google’s Bard is boring because it’s responsible AI


It’s fair to say that since its launch, Google’s Bard hasn’t exactly wowed users or drawn rave reviews. Compared to OpenAI’s ChatGPT, the current experimental Bard doesn’t deliver the same level of excitement. But this is intentional. With so much at stake, Google can’t afford to get it wrong.

Priding itself on being an AI-first company (its Transformer architecture laid the foundations for OpenAI’s ChatGPT), Google now faces a choice: rush to build something that truly competes in the short term, or take it slow. Google appears to have taken the latter approach, opting for the responsible progression of Artificial Intelligence. In the long run, that could win it the race.

Why did Google release this experimental version of Bard with an immature version of LaMDA?

The excitement around OpenAI's ChatGPT forced Google's hand

In a recent interview with the New York Times, Sundar Pichai (Google’s CEO) dismissed the rumours of a ‘Code Red’. While it’s true that Google has expedited its plans to launch a generative AI interface for users (Bard), Pichai contends that the company has been integrating AI into its products for years. The shift lies in the accelerated pace of change, driven by the rapid advances coming out of the OpenAI and Microsoft partnership. This has pushed Google to release a product that can compete on a somewhat similar footing, which is better than having no contender at all. Had Google not introduced Bard, questions would have arisen about its capabilities in this domain, even if it remained confident that its approach to AI within Search was better.

Challengers appear all the time, but interactions with Large Language Models (LLMs) and Generative AI are different. They have democratised access to Artificial Intelligence and made search exciting again!

To date, most of Google’s AI has been kept in a black box, or at least accessible only to a very select few. Generative AI has democratised access through an easy-to-use interface. You might wonder how different this is from the usual search experience – you know, type a question and get an answer. But let’s face it, that experience has been a bit of a mess for a while now. We’ve had to deal with cluttered search results and jumping between sites just to find the answers we’re looking for. It’s far from a smooth experience.

Google has made some amazing progress over the years with AI in Search, especially when it comes to Augmented Reality. But this new wave of conversational AI could take the search experience to a whole new level. We’re talking about having a powerful assistant to help us find answers and even boost our creativity. How cool is that?

Were Google caught sleeping at the AI wheel with OpenAI?

It looks like Google had an eye on what OpenAI was doing, but it clearly didn’t believe it would work as well as it has. Would Google have released the current version of Bard without ChatGPT? Probably not. It reluctantly opened up LaMDA through Bard, a user interface it believes can hold off pressure from investors and users.

Google is clearly convinced its propositions are much stronger than the “gimmicks” created by OpenAI and Microsoft in the new Bing, a point it made very clear at its event in Paris earlier this year.

Ultimately, Google will always adapt to make the search experience more natural and relevant to the user; it might just have been a little slow to this particular party. It has now created new swimlanes to clear the internal blockers that were slowing it down, so it can compete in this space at greater speed.

OpenAI's partnership with Microsoft is a threat to Google's search business

Users are already adapting how they search – I have certainly changed the way I prompt search engines to get more relevant results. Time will tell whether this becomes the new search experience, and Google will follow the user journey. Google has always adapted its search engine to follow users’ changing behaviours. Take video, for example: it is now incorporated throughout the search results page. It’s too much of a risk not to adapt. At least 58% of Alphabet’s revenue comes from Search, and the company won’t do anything to put that at risk.

Top Tip

Using Broad Match in your Search campaigns can help you capture the longer-tail search queries that come with these adapting search behaviours.
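If you manage keywords programmatically rather than through the Google Ads interface, the sketch below shows roughly what this could look like with the official google-ads Python client: it adds a single broad match keyword to an existing ad group. The customer ID, ad group ID and keyword text are placeholders, and this is an illustrative sketch under those assumptions rather than a production script.

```python
# Illustrative sketch only: add one broad match keyword to an existing ad group
# using the official google-ads Python client. All IDs and the keyword text
# below are placeholders, not real account values.
from google.ads.googleads.client import GoogleAdsClient

# Assumes a configured google-ads.yaml with your developer token and OAuth credentials.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")

customer_id = "1234567890"   # placeholder Google Ads customer ID
ad_group_id = "9876543210"   # placeholder ad group ID

ad_group_service = client.get_service("AdGroupService")
criterion_service = client.get_service("AdGroupCriterionService")

# Build the keyword criterion with BROAD match so longer-tail variants can match.
operation = client.get_type("AdGroupCriterionOperation")
criterion = operation.create
criterion.ad_group = ad_group_service.ad_group_path(customer_id, ad_group_id)
criterion.status = client.enums.AdGroupCriterionStatusEnum.ENABLED
criterion.keyword.text = "how do conversational ai search tools work"  # placeholder query
criterion.keyword.match_type = client.enums.KeywordMatchTypeEnum.BROAD

response = criterion_service.mutate_ad_group_criteria(
    customer_id=customer_id, operations=[operation]
)
print(f"Added broad match keyword: {response.results[0].resource_name}")
```

Whether you add keywords by hand or by script, keep an eye on the search terms report so the wider net Broad Match casts doesn’t pull in irrelevant traffic.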

Does the more responsible approach win the AI race?

Credit: Bing Image Creator, powered by OpenAI’s DALL·E

Large Language Models (LLMs) need time, and multiple versions, to be properly trained. Instead of launching something much more powerful (with a bigger potential to get things wrong), Google went with an immature version of LaMDA, a choice that could benefit it in the longer term.

In OpenAI’s opinion, opening ChatGPT to the world at this stage (while the stakes remain relatively low) enables the models to learn, as well as allowing society to shape their development. Moreover, it provides an opportunity for society to gain insight into the advantages and drawbacks of AI, fostering a period of adaptation. OpenAI has acknowledged that previous models could have been steered towards negative outcomes. GPT-4, however, represents an advanced iteration that has incorporated these lessons and is less susceptible to being led astray: the model can intervene at the prompt stage to reject potentially harmful guidance or direction. By launching in the way they did, OpenAI have learned at pace, but they have openly said this didn’t come without its challenges.

Google is very open about the fact that Bard is basic and experimental. The last thing Google wants to do is scare society even more, so it was a smart move to be more cautious. In 2022, Google engineer Blake Lemoine was fired after publicly stating that Google’s LaMDA was sentient. Google will certainly want to better manage the rollout of the model to a wider public that is already fearful of advanced Artificial Intelligence.

This more considered approach could help Google come out on top in the ethical battle over AI.

In a recent open letter, numerous prominent figures have called for a pause in the development of large-scale AI systems more powerful than GPT-4. Google’s CEO was conspicuously absent from this letter, and understandably so. Google has been tactful in their messaging, attempting to portray their rivals as the primary source of concern surrounding AI, while presenting themselves as the responsible party in this conflict.

Google is taking this stance by openly advocating for regulation in the AI field, as well as concentrating their efforts on creating models that prioritize user privacy, transparency, and ethics at their core. The need for proper regulation is simply too crucial to ignore.

Indeed, Google’s past experiences here have been far from smooth. Having faced numerous fines from regulators, often prompted by complaints from competitors, the company must proceed with caution. This responsible approach may prove advantageous in the long run as society grapples with how these AI models should be integrated into the future.

What is clear is that AI races can lead to harm, and there is a need to be responsible. But while Google’s recent actions reaffirm what it is telling us, couldn’t this equally be convenient for the company?

If it successfully slows down the pace of change for everyone, it can catch up.

Not only does Google possess the potential to regain lost ground, but it also stands a strong chance of emerging as a level-headed player in an era rife with commotion about the potential of AI. Google’s extensive experience in managing large language models and AI in Search gives it a significant advantage over newcomers like OpenAI and Microsoft’s new Bing, further solidifying its reputation as a more reliable and trustworthy entity.

It’s a difficult balance, though: Google can’t be too slow.

Perhaps it has already lost too much ground. OpenAI is already on GPT-4, while Bard is still live with a fairly basic version of LaMDA (although we do expect this to improve significantly in the coming weeks). It will be a challenge for Google to know when to push for the sprint and when to hold back to win the marathon.

In conclusion

The race for AI is on, and the stakes are high. Google, Microsoft, and OpenAI are all vying to develop the most powerful AI systems. However, there is a growing concern that the pace of AI development is outpacing our ability to understand and control it.

While Google’s Bard may seem comparatively mundane due to the company’s responsible approach, it is essential to recognize that this measured strategy is aimed at ensuring user privacy, transparency, and ethical considerations remain at the forefront. In a world where the consequences of AI development can have far-reaching implications, perhaps embracing responsibility is the key to unlocking AI’s true potential while maintaining a better balance with the wider society. As we navigate this rapidly evolving landscape, it is crucial to remain vigilant and prioritize the responsible development and deployment of AI, ultimately benefiting all.
