5 ways ChatGPT could shape enterprise search in 2023

It’s been an exciting few months since OpenAI released ChatGPT, which now has everyone talking about it, many talking to it and all eyes on what’s next.

It’s not surprising. ChatGPT raised the bar for what computers are capable of and is a window into what’s possible with AI. And with tech giants Microsoft, Google and now Meta joining the race, we should all buckle up for an exciting but potentially bumpy ride.

Core to these capabilities are large language models (LLMs), specifically the generative LLM that makes ChatGPT possible. LLMs are not new, but the pace of innovation and the scope of what they can do are evolving at mind-blowing speed.

A peek behind the AI curtain

There’s also a lot going on “behind the curtain” that has led to confusion: some have mistakenly characterized ChatGPT as a Google killer, or claimed that generative AI will replace search. Quite the contrary.

First, it’s important to distinguish between search and generative AI. The purpose of search is information retrieval: Surfacing something that already exists. Generative AI applications like ChatGPT, by contrast, create something new based on what the LLM has been trained on.

ChatGPT feels a bit like search because you engage with it through conversational questions in natural language and it responds with well-written prose and a very confident answer. But unlike search, ChatGPT is not retrieving information or content; instead, it creates an imperfect reflection of the material it already knows (what it has been trained on). It really is nothing more than a mishmash of words created based on probabilities. 
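
To make the “probabilities” point concrete, here is a toy sketch in Python. It is illustrative only: real LLMs use neural networks trained over vast corpora, not a hand-built lookup table, but the core idea of sampling the next word from learned probabilities is the same.

```python
import random

# A toy "language model": for each word, the probabilities of the next word.
# Real LLMs learn these statistical relationships from vast training corpora.
next_word_probs = {
    "search": {"is": 0.5, "finds": 0.3, "engines": 0.2},
    "is": {"retrieval": 0.6, "generative": 0.4},
    "finds": {"documents": 1.0},
}

def generate(start, max_words=4):
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("search"))  # e.g. "search is generative"
```

The output is fluent-looking text chosen by probability, which is exactly why confident-sounding generations can still be wrong.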

While LLMs won’t replace search, they can complement a search experience. The real power of applying generative LLMs to search is convenience: To summarize the results into a concise, easy-to-read format. Bundling generative LLMs with search will open the door for new possibilities.
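
As a rough sketch of what that bundling could look like, the snippet below pairs a retrieval step with a generative summarization step. The search_top_results function is a hypothetical stand-in for any enterprise search API, and the OpenAI call is just one example of a generative backend; nothing here reflects a specific vendor’s implementation.

```python
import openai  # pip install openai; assumes openai.api_key is set elsewhere

def search_top_results(query):
    # Hypothetical stand-in for an enterprise search API; a real system
    # would query an index here. Canned snippets keep the sketch runnable.
    return [f"Snippet about {query} from document A.",
            f"Snippet about {query} from document B."]

def summarize_results(query):
    snippets = search_top_results(query)       # retrieval: surface what exists
    prompt = ("Summarize these search results to answer the question.\n"
              f"Question: {query}\n\nResults:\n" + "\n---\n".join(snippets))
    response = openai.ChatCompletion.create(   # generation: synthesize a summary
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The important property is that generation is grounded in retrieved content, rather than the model answering from memory alone.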

Search: a proving ground for AI and LLMs

Generative models based on LLMs are here to stay and will revolutionize how we do many things. Today’s low-hanging fruit is synthesis — compiling lists and writing summaries for common topics. Most of those capabilities are not categorized as search. But the search experience will be transformed and splintered with specialized LLMs that serve specific needs. 

So, amid the excitement of generative AI, LLMs and ChatGPT, there’s one prevailing point: Search will be a proving ground for AI and LLMs. This is especially true with enterprise search. Unlike B2C applications, B2B and in-business applications will have a much lower tolerance for inaccuracy and a much higher need for the protection of proprietary information. The adoption of generative AI in enterprise search will lag that of internet search and will require creative approaches to meet the special challenges of business.  

To that end, what does 2023 hold for enterprise search? Here are five themes that will shape enterprise search in the year ahead.

LLMs enhance the search experience

Until recently, applying LLMs to search was a costly and cumbersome affair. That changed last year when the first companies started incorporating LLMs into enterprise search. This produced the first major leap forward in search technology in decades, resulting in search that is faster, more focused and more forgiving. Yet we’re only at the beginning.

As better LLMs become available, and as existing LLMs are fine-tuned for specific tasks, we can expect rapid improvement in the power and ability of these models this year. No longer will search be about finding a document; we’ll be able to find a specific answer within a document. No longer will we need to guess exactly the right keyword; information will be retrieved based on meaning.
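
Here is a minimal sketch of meaning-based retrieval using the open-source sentence-transformers library. The model name is just a common public example, and the documents are made up; no particular product works exactly this way.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # one common embedding model

docs = [
    "Our refund policy allows returns within 30 days.",
    "The quarterly sales report is due Friday.",
    "Employees accrue vacation at 1.5 days per month.",
]
query = "How much paid time off do I earn?"  # shares no keywords with doc 3

doc_vecs = model.encode(docs)                  # embed documents as vectors
query_vec = model.encode(query)                # embed the query the same way
scores = util.cos_sim(query_vec, doc_vecs)[0]  # cosine similarity per doc

best = scores.argmax().item()
print(docs[best])  # finds the vacation policy despite different wording
```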

LLMs will do a better job surfacing the most relevant content, bringing us more focused results, and will do so in natural language. And generative LLMs hold promise for synthesizing search results into easily digestible and readily understood summaries.

Search helps fight knowledge loss

Organizational knowledge loss is one of the most serious yet underreported issues facing businesses today. High employee turnover, whether from voluntary attrition, layoffs, M&A restructuring or downsizing, often leaves knowledge stranded on information islands. This, combined with the shift to remote and hybrid work, dramatic changes in customer and employee perceptions and an explosion of unstructured data and digital content, has put immense strain on knowledge management.

In a recent survey of 1,000 IT managers at large enterprises, 67% said they were concerned about the loss of knowledge and expertise when people leave the company. And the cost of knowledge loss and inefficient knowledge sharing is steep. IDC estimates that Fortune 500 companies lose roughly $31.5 billion a year by failing to share knowledge, an alarming figure in today’s uncertain economy. Improving information search and retrieval tools for a Fortune 500 company with 4,000 employees could recover roughly $2 million a month in lost productivity.

Intelligent enterprise search prevents information islands and enables organizations to easily find, surface and share information, including the corporate knowledge of their best employees. Finding knowledge and expertise within the digital workplace should be seamless and effortless. The right enterprise search platform connects workers to knowledge and expertise, and even bridges disparate information silos to facilitate discovery, innovation and productivity.

Search solves application splintering and digital friction

Employees today are drowning in tools. According to a recent study by Forrester, organizations use an average of 367 different software tools, creating data silos and disrupting processes between teams. As a result, employees spend 25% of their time searching for information instead of focusing on their jobs.

Not only does this directly impact employee productivity; it also has implications for revenue and customer outcomes. This “app splintering” exacerbates information silos and creates digital friction as employees constantly switch apps, moving from one tool to another to get work done.

According to a recent Gartner survey, 44% of users made a wrong decision because they were unaware of information that could have helped, and 43% of users reported failing to notice important information because it got lost amid too many apps.

Intelligent enterprise search unifies employees’ experiences so they can access all corporate knowledge seamlessly and accurately from a single interface. This greatly reduces app switching, as well as frustration for an already fatigued workforce, while streamlining productivity and collaboration.

Search gets more relevant

How often do you find what you’re looking for when you search for something in your organization? Fully one-third of employees report that, all or most of the time, they never find the information they’re looking for. What are they doing, then? Guessing? Making it up? Charging forward in ignorance?

Search relevance is the secret sauce that enables scientists, engineers, decision-makers, knowledge workers and others to discover the knowledge, expertise and insights needed to make informed decisions and do more, faster. It measures how closely the results of a search relate to the user’s query.
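
Relevance can also be quantified. One standard metric is normalized discounted cumulative gain (nDCG), which rewards a ranking for placing highly relevant results near the top. A small sketch, using made-up relevance judgments:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: each result's relevance is discounted
    # by its rank, so relevant items near the top count for more.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded judgments for four results (3 = perfect match, 0 = irrelevant)
good_ranking = [3, 2, 1, 0]   # best results first
poor_ranking = [0, 1, 2, 3]   # same results, worst order
print(round(ndcg(good_ranking), 3))  # 1.0
print(round(ndcg(poor_ranking), 3))  # about 0.614
```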

Results that better match what the user hopes to find are more relevant and should appear higher on the results page. But many enterprise search platforms today lack the ability to understand the user’s intent and deliver relevant results. Why? Because developing and tuning relevance is hard. So we live with the consequences.

Intelligent enterprise search tools do much better, with results that are far more relevant than in-app search. Yet even they can struggle with hard scenarios, and the desired result may not land at the top of the list. The advent of LLMs, however, has opened the door to vector search: retrieving information based on meaning.

Advances in neural search bring LLM technology into deep neural networks: Models that use context to deliver excellent relevance through semantic search. Better yet, combining semantic and vector search with statistical keyword search delivers relevance across a wide range of enterprise scenarios. Neural search is the first step change in relevance in decades, letting computers learn to work with humans rather than the other way around.
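
A simplified sketch of that combination appears below, blending BM25 keyword scores with embedding-based semantic scores. The libraries, documents and 0.5 blend weight are illustrative assumptions, not a description of any vendor’s neural search.

```python
# pip install rank_bm25 sentence-transformers
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = ["reset your password in account settings",
        "quarterly revenue grew 12 percent",
        "how to recover a locked login"]
query = "forgot my password"

# Statistical keyword scores (BM25 over whitespace-tokenized text)
bm25 = BM25Okapi([d.split() for d in docs])
keyword_scores = list(bm25.get_scores(query.split()))

# Semantic scores (cosine similarity between embeddings)
model = SentenceTransformer("all-MiniLM-L6-v2")
semantic_scores = util.cos_sim(model.encode(query), model.encode(docs))[0].tolist()

def normalize(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

alpha = 0.5  # arbitrary blend weight between keyword and semantic relevance
hybrid = [alpha * k + (1 - alpha) * s
          for k, s in zip(normalize(keyword_scores), normalize(semantic_scores))]
print(docs[hybrid.index(max(hybrid))])
```

Keyword matching catches exact terms like “password,” while the semantic side also surfaces the related “locked login” document; blending the two covers both kinds of query.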

Question-answering methods get a neural boost

Have you ever wished your company had search that worked like Google? Where you could get an answer right away, rather than first locating the right document, then finding the right section, then scanning paragraphs to find the information nugget you needed? For simple questions, wouldn’t it be nice to just get a direct answer?

With LLMs and the ability to work semantically (based on meaning), question-answering (QA) capability is now available in the enterprise. Neural search is giving QA a boost: Users can extract answers to straightforward questions when those answers are present in the search corpus. This shortens the time to insight, allowing an employee to get a quick answer and continue their workflow without getting sidetracked on a lengthy information quest.
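
A minimal sketch of extractive QA using a public Hugging Face pipeline appears below. In an enterprise setting the context would come from documents surfaced by search rather than a hard-coded string, and the policy text here is invented for illustration.

```python
# pip install transformers
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default extractive QA model

context = (
    "Expense reports must be submitted within 30 days of travel. "
    "Reports submitted late require written approval from a manager."
)
result = qa(question="When are expense reports due?", context=context)
print(result["answer"])  # e.g. "within 30 days of travel"
```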

In this way, question-answering capabilities will expand the usefulness and value of intelligent enterprise search, making it easier than ever for employees to find what they need. QA in the enterprise is still in its infancy, but the technology is moving fast. We will see broader adoption of AI technologies that answer questions, find similar documents and otherwise shorten the time to knowledge, freeing employees to focus on their work.

Looking ahead

Innovation relies on knowledge and its connections. These come from the ability to interact with content and with each other, derive meaning from those interactions and create new value. Enterprise search facilitates these connections across information silos and is therefore a key enabler of innovation.

Thanks to advances in AI such as neural networks and LLMs, enterprise search is entering a whole new realm of accuracy and ability.

Jeff Evernham is VP of product strategy at enterprise search provider Sinequa.
