Google Bard First Impressions — Will It Outperform ChatGPT?

Yesterday evening, I received an exciting email from Google: I was among the privileged few invited to be the first testers of Google Bard, the company's upcoming AI chatbot. Designed as a direct competitor to OpenAI's ChatGPT, which has taken the world by storm, Bard aims to shake up the chatbot landscape.

With its release, Google aims to secure its position in the search engine market, where Microsoft's Bing already integrates AI features powered by GPT-4. Now, let's delve into Bard's features, strengths, limitations, and whether it poses a threat to ChatGPT's dominance.

1. The Interface: Simplified for Users

Upon accessing Bard through my exclusive invitation, I was greeted with a clean and uncluttered interface—a hallmark of Google's products. The interface consists of a text window and a prompt input section, without the inclusion of a chat history tab found in ChatGPT.

While ChatGPT's interface caters to a more functional, professional user base, Bard's appears tailored for people seeking a seamless search experience on Google. OpenAI likely treated ChatGPT's interface as a thin layer over its existing models during development, which would explain its comparatively utilitarian design.

2. Lightning-Fast Speed

Bard is remarkably fast. To test its performance, I asked it to "write 500 words about Bichon Frises." In just 9.5 seconds, Bard generated 329 words (large language models notoriously struggle to hit exact word counts). By comparison, ChatGPT Plus, running on GPT-4, took 3 minutes and 2 seconds to produce 428 words. Bard's speed advantage is striking.
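The comparison above is easy to reproduce with a small timing harness. Here is a minimal sketch in Python; `generate` and `fake_model` are illustrative stand-ins, since neither Bard nor ChatGPT exposed a public API for this test at the time:

```python
import time

def benchmark(generate, prompt):
    """Time a text-generation call and count the words in its output.

    `generate` is any callable that takes a prompt string and returns
    the model's response as a string (a hypothetical stand-in here).
    """
    start = time.perf_counter()
    text = generate(prompt)
    elapsed = time.perf_counter() - start
    # Simple whitespace split -- the same rough count LLMs themselves
    # tend to get wrong when asked for an exact word total.
    word_count = len(text.split())
    return elapsed, word_count

# Usage with a stand-in generator that returns a fixed 280-word blurb:
fake_model = lambda p: "Bichon Frises are a small, cheerful breed. " * 40
elapsed, words = benchmark(fake_model, "Write 500 words about Bichon Frises.")
```

Counting words on the client side, rather than trusting the model's self-reported total, is what surfaces discrepancies like Bard's 329 words against a 500-word request.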

However, it remains to be seen if this advantage will persist as Bard's user base expands, placing a heavier load on its servers. Currently, Bard is available only to a select group of Beta testers, while ChatGPT is accessed by millions worldwide.

3. Exploring Bard's Capabilities

While Bard impresses with its interface and speed, its performance in generating content raises questions. Bard's interface comes with several disclaimers about its capabilities. Google emphasizes that "some responses may be inaccurate" and that Bard operates with substantial guardrails, including a lack of support for coding-related inquiries. It also features built-in safety controls to prevent malicious or harmful responses.

Consequently, Bard is more likely to decline requests beyond its capabilities compared to ChatGPT. In my testing, Bard rejected "jailbreaking" prompts, and it even refused to help compose a response to a sales email, unlike ChatGPT. This suggests either limitations in Bard's abilities or Google intentionally constraining its capabilities during the early testing phase.

4. Real World Test — Writing a Blog Post

Asking Bard to write a blog post yielded mixed results. I prompted it to answer the question, "Are Bichon Frises Hypoallergenic?" The response lacked organization, flow, and useful insight, reading like a summary stitched together from top-ranking Google search results. ChatGPT, in contrast, produced a well-structured, informative post that directly addressed the question. ChatGPT also handled follow-up edits, such as adding relevant article links and formatting headings, tasks that Bard either refused or performed inaccurately.

5. Real World Test — Answering a Search Style Query

Next, I assessed Bard's capabilities with a search-style query. Given Google's dominance in search, I expected Bard to outperform its competitors here. I presented it with the following prompt: "What are some fun things to do in Walnut Creek, California?"

Bard responded with a bulleted list. While the information provided was somewhat accurate, upon closer examination, I noticed several inaccuracies. For instance, Bard suggested taking a "bike ride through the Lindsay Wildlife Experience."

Now, it's important to note that Lindsay Wildlife is an indoor museum, so a bike ride through it is not actually possible. Errors like this raised concerns about the accuracy of Bard's responses.

In contrast, ChatGPT's responses were much more specific and engaging. For the same query, ChatGPT provided the following response:

"The Ruth Bancroft Garden: A beautiful and unique 3-acre garden showcasing a diverse collection of succulents, cacti, and drought-tolerant plants from around the world. It's a must-see for plant enthusiasts and photographers."

As we can see, ChatGPT's response offers specific details about the attraction, highlighting its focus on succulents, cacti, and drought-tolerant plants. It goes beyond simply describing the garden as "beautiful" and noting that plants are present.

Surprisingly, ChatGPT outperformed Bard even in search-oriented queries.


6. Final Thoughts

Based on my initial testing, Bard has potential but currently lags behind ChatGPT in overall capability. Its performance is reminiscent of ChatGPT in the GPT-3 era: responses that were roughly accurate but missed the intent of the question and lacked depth of knowledge.

Bard also exhibits a more conservative and safety-conscious approach compared to ChatGPT, even in its initial iteration. While ChatGPT was always willing to attempt a response, Bard seems more inclined to say "no."

This cautious nature of Bard likely reflects Google's emphasis on safety and scientific excellence in AI innovation. While this level of rigor benefits society, it diminishes the usefulness of the tool for individuals seeking productivity enhancements.

From my testing, it's evident that Bard isn't aiming to compete with ChatGPT as a versatile tool for various applications. ChatGPT excels in a wide range of tasks, including poetry composition, software development, and data manipulation.

Bard, on the other hand, appears to be tailored specifically for search. Although it currently falls short in this area, with more training data and time, its responses will likely improve significantly.

Furthermore, Bard possesses certain advantages in the search space:

1. **Fast Response Times:** Bard's swift response times, assuming they remain consistent at scale, align well with search and voice query requirements. Waiting 10 seconds for AI search results is acceptable, but waiting 3+ minutes is not. Bard's speed is optimized for prompt search responses.

2. **Specialized Task Orientation:** Bard's design suggests a focus on search rather than general tasks. It often admits its limitations, responding with "Sorry, I can't do that." This implies that Google prioritizes the search use case and deliberately limits Bard's capabilities in other areas.

3. **User-Friendly Interface:** Bard's interface is clean and user-friendly. It could integrate seamlessly into Google's search engine as an additional tab, alongside existing ones like Images and News.

To summarize, my initial testing indicates that Google isn't striving to compete with OpenAI in creating a multipurpose model. Instead, they seem to concentrate on integrating generative AI into their highly lucrative search business.

Although Bard currently has significant limitations, recent advancements in generative AI models have demonstrated rapid improvement. As Bard evolves, it will undoubtedly generate more comprehensive and valuable responses.

If Google dedicates its time and resources to integrating these enhanced responses into its search engine, it has a high probability of dominating the search space. While I can't provide insights into Google's internal workings, based on my evaluation of Bard, it seems likely that they are willing to concede ground to OpenAI in areas like general productivity and coding, in exchange for focusing their efforts on refining Bard for search. Ultimately, Google aims to surpass both OpenAI and Bing in this specific and limited use case.
