Is artificial intelligence the next search engine?

by Larry Magid
This post first appeared in the Mercury News

I still remember the impact Google had when it launched back in 1998, fundamentally changing the way we search for information.

Even though I still use it multiple times a day, that quarter-century-old search engine is looking pretty anemic compared with ChatGPT from OpenAI and the “new Bing” from Microsoft, which uses technology from OpenAI. But Google isn’t sitting still. It’s launching its own AI chatbot, called Bard, which is currently available only to a limited number of people in the U.S. and U.K.

These chatbots are built on very large language models, which are trained on online books, articles, social media posts, blogs and pretty much anything else their creators can find on the internet. With that vast amount of data at their disposal, they can answer questions or even carry out tasks, such as writing computer code or, soon, drawing pictures based on what the user asks for.

I’ve played around with ChatGPT, its close cousin Microsoft Bing, and Google Bard. “Played” is the operative word. I have mostly been having fun using these tools, but I’ve also used them in a couple of productive ways. It was fun asking ChatGPT to do things like write poems. I even asked it to write a poem about me, and it complied in ways that were both humorous and flattering.

As for being productive, I used ChatGPT to help me write a script for one of my weekly ConnectSafely segments for CBS News Radio, and the advice it gave was basic but spot on. For the record, I disclosed the use of this AI tool in my segment and don’t plan to make a habit of it. But it does illustrate the promise and the peril of using tools like this to create content for publication and broadcasting. In my case, it was a publicly disclosed experiment, but I wonder, and to some extent worry, about it being used to take the place of human creativity.

Some educators worry about students using it, and there’s been a lot of discussion and consternation, including bans in some school districts. But not all educators think it’s necessarily bad. Kerry Gallagher, assistant principal for teaching and learning at St. John’s Prep in Danvers, Massachusetts, and education director for my non-profit, ConnectSafely.org, said she thinks “it’s totally appropriate for teachers to teach students how to use open AI responsibly and what its limits are and what its capabilities are so that they know how to use it to both assist in the learning process and also how they need to edit and improve on what it produces to make it really and truly their own work and not the work of someone else that they’re submitting.”

I asked her if she was worried about students using it to deceive teachers into thinking the work is their own. She said it is a concern but added that she could tell immediately when one of her students turned in an assignment created with AI because it differed greatly from the student’s writing style and contained information that hadn’t yet been covered in class.

I think about other tools that have made life easier for students. In my youth I was horrible at math, but in graduate school I got my hands on a four-function calculator and, later, access to a computer. What I discovered was that I was bad at arithmetic, not necessarily mathematics. The university mainframe enabled me to do the complex statistical analysis that was essential for my doctoral dissertation, which I could never have done if I were relying on pencil and paper or an old slide rule. I still have trouble with long division, but, thanks to technology, it didn’t hold me back.

But unlike even a basic four-function calculator, AI-generated responses aren’t always correct. Computers will give you the wrong answer if you put in the wrong data, but AI sometimes makes things up, even if you ask the question correctly. There are numerous examples of mistakes made by ChatGPT. When I asked it about myself, it got things mostly right but also said I had written for the Wall Street Journal, which is not true. Google Bard erroneously said that I started my career at the Associated Press. Those are harmless errors, likely triggered because I’ve written for newspapers, but the mere fact that they’re wrong is troubling. I’d hate for that to happen with information you needed to rely on to make an important decision.

Speaking of decisions, I would never rely on AI or even Google for medical advice about a serious condition, but as an experiment, I did ask ChatGPT what to do if I experience abdominal pain and forwarded its response to Dr. Daniel Rengstorff, a Redwood City-based gastroenterologist, who agreed that it gave “a very reasonable answer and to some degree it is how we think as doctors when assessing a patient.” But he added, “The part of medicine that is hard for a computer to replicate is that general intuition one develops after years of practice.” He said that “ChatGPT can be used as a valuable resource for patients, but I am not ready to hand over the reins just yet.”

Personally, I would never take medicine or any other action based on online research without first consulting a health professional because, among other reasons, I want some perspective. Many symptoms, for example, can be signs of life-threatening diseases or, more likely, something minor, and doctors are a lot better than computers at figuring that out.

The bottom line about AI is that it’s here to stay. Even social media apps like Snapchat are using a version of OpenAI’s technology, and Meta CEO Mark Zuckerberg posted, “We’re creating a new top-level product group at Meta focused on generative AI to turbocharge our work in this area.”

We can also expect to see and hear more sophisticated AI on websites and even on the phone, helping people get answers to questions that might otherwise require human interaction. In the near term, these AI bots will get things wrong, fail to answer some of our questions and no doubt frustrate people. But, over time, they will get quite good at what they do.

In the meantime, we need to use our critical-thinking skills as we consume AI-generated information. It can be useful, but it has a long way to go before it’s fully reliable.

Larry Magid is a tech journalist and internet safety activist.