New Transformative Features for Enterprise Generative AI
The latest release of ChatAible further accelerates generative AI responses and strengthens the guardrails around them, paving the way for business-user-led innovation in the enterprise.

Generative AI Dashboards

Aible GenAI Dashboards combine Natural Language Query (NLQ) with dashboards to address the inflexibility of traditional dashboards. Users start with a familiar dashboard that includes multiple examples of NLQ questions and responses, and can simply copy and edit those questions to ask their own. Aible also addresses the key problem with NLQ solutions – what questions should we ask? Knowing the right question to ask is often the hardest part of the problem.

The Aible GenAI dashboard starts the user off with several good questions to riff on. The starting questions can be selected manually by central analytics/IT teams, drawn from popular questions asked by peers, or recommended automatically by Aible analysis.
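
As a rough sketch of how starter questions might be drawn from popular peer questions (the question_log structure and top_starter_questions helper below are hypothetical illustrations, not Aible's API):

```python
from collections import Counter

def top_starter_questions(question_log, n=5):
    """Pick the n most frequently asked question themes as dashboard starters.

    question_log: list of (theme, question_text) pairs, where the theme groups
    different phrasings of essentially the same question.
    """
    theme_counts = Counter(theme for theme, _ in question_log)
    representative = {}
    for theme, text in question_log:
        representative.setdefault(theme, text)  # keep one example phrasing per theme
    return [representative[theme] for theme, _ in theme_counts.most_common(n)]

# Hypothetical logged questions from peers
log = [
    ("revenue_by_country", "What was revenue by country last quarter?"),
    ("revenue_by_country", "Show revenue split by country for the last quarter"),
    ("churn_drivers", "Which factors drive customer churn?"),
]
print(top_starter_questions(log, n=2))
```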

Hallucination Checks for Natural Language Queries (NLQ)

Aible addresses a key problem with GenAI NLQ – making sure it answers the right questions accurately. Other solutions use a single model to translate the user's question into Structured Query Language (SQL) and then execute that SQL to produce the answer. They helpfully provide the SQL to the user so that the user can check whether it looks correct.

The problem is that most business users don't know how to read SQL and have no way to know whether it is correct. Given the high hallucination rates of GenAI models, there is a significant chance that the model misunderstands the question, translates it into the wrong SQL, and returns the wrong answer. As long as the answer is plausible, the end user has no way to check it short of learning SQL.

Aible instead uses two models – one to understand the user's question and answer it exactly, and another to generate a SQL query that is slightly broader than the user's question and returns additional context that lets the user gut-check the response. In our user testing we found, for example, that if a user asks for revenue from Germany for a given year and the AI gives a single-number answer, it is hard for the user to detect errors. However, if the AI gives the answer but also includes Germany's revenue for a few more years, or the revenue of other countries, for context, the user can spot AI errors much more effectively. In essence, context is key to helping business users detect GenAI hallucinations.
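
To make the idea concrete, here is a minimal sketch of the dual-query pattern described above (not Aible's actual implementation; generate_sql is a hypothetical stand-in for an LLM call):

```python
# One prompt asks for SQL that answers the question exactly; a second asks for
# slightly broader SQL that returns neighboring values so the user can
# gut-check the answer.

def generate_sql(instruction: str) -> str:
    # Hypothetical stand-in: a real system would call an LLM here.
    return f"-- SQL generated for: {instruction}"

def build_queries(question: str) -> dict:
    exact_sql = generate_sql(f"Answer exactly: {question}")
    context_sql = generate_sql(
        f"Answer: {question}, but also include adjacent years and "
        "other countries so the user can compare."
    )
    return {"exact": exact_sql, "context": context_sql}

print(build_queries("What was revenue from Germany in 2023?"))
```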

Caching for Natural Language Queries (NLQ)

In our analysis of NLQ questions at larger organizations, we found that there was significant overlap in the user questions. Once we looked at the underlying themes of the questions – because there are many ways to ask essentially the same question – there was even more overlap. In traditional approaches, when such redundant questions are asked, you incur the GenAI model response cost as well as the cost of querying the underlying data.

Aible, however, detects that essentially the same question has been asked recently and responds from a cache without incurring the unnecessary cost. The average response time is also much faster as a result. Of course, users can always bypass the cache if they want more up-to-date answers; the refreshed answer is then cached, so others benefit when they ask similar questions.
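
A minimal sketch of the caching idea (illustrative only; Aible's actual matching is more sophisticated, and a production system would typically compare embeddings rather than the simple word overlap used here):

```python
def similarity(q1: str, q2: str) -> float:
    # Crude word-overlap score as a stand-in for semantic similarity
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

class NLQCache:
    def __init__(self, threshold: float = 0.6):
        self.threshold = threshold
        self.entries = []  # list of (question, answer) pairs

    def lookup(self, question: str):
        for cached_q, cached_answer in self.entries:
            if similarity(question, cached_q) >= self.threshold:
                return cached_answer  # no model or database cost incurred
        return None

    def store(self, question: str, answer: str):
        self.entries.append((question, answer))

cache = NLQCache()
cache.store("what was revenue in germany in 2023", "answer from a prior run (placeholder)")
print(cache.lookup("revenue in germany 2023"))  # similar enough: served from cache
```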

What’s Changed Analysis

Business users regularly want to understand how and why their business has changed between two time periods - last month to this month, last year to this year, etc. Now Aible automatically performs What’s Changed Analysis to swiftly pinpoint and analyze the cause of significant changes in Key Performance Indicators (KPIs) and presents them via a simple chat interface.

Aible automatically detects shifts in the behavior and frequency of millions of variables to determine why the KPI shifted between the two time periods, and explains the insights in a narrative customizable for individual business users. Users can then ask follow-on questions to explore the insights further.
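
As a simplified illustration of the underlying idea (not Aible's actual algorithm), one could compare each variable between the two periods and rank the largest shifts:

```python
from statistics import mean

def rank_shifts(period_a: dict[str, list[float]], period_b: dict[str, list[float]]):
    """period_a / period_b map variable name -> observed values in that period."""
    shifts = {}
    for var in period_a.keys() & period_b.keys():
        shifts[var] = mean(period_b[var]) - mean(period_a[var])
    # Largest absolute shifts first: candidates for explaining the KPI change
    return sorted(shifts.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical data for two periods
last_month = {"discount_rate": [0.05, 0.06], "avg_order_size": [120, 130]}
this_month = {"discount_rate": [0.12, 0.11], "avg_order_size": [121, 129]}
print(rank_shifts(last_month, this_month))  # discount_rate shifted the most
```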

Automated Few Shot Learning

User feedback is extremely important for improving GenAI models. Unfortunately, most business users are not used to providing feedback and can offer contradictory or even misleading feedback. Data scientists often incorporate thousands of pieces of feedback over months to retrain the models, and thus can't trace the results of the retrained model back to individual user feedback. As a result, there is no immediate feedback loop in generative AI today.

Aible immediately leverages an end user's feedback to improve the model for them via a technique called Few Shot Learning. The user can immediately see the impact of their feedback and adjust it as appropriate to get better results. They can also immediately share their Few Shot Learning improvements with other users. Finally, experts can eventually aggregate the best-performing feedback across users to retrain the model for all users.
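
A minimal sketch of how user feedback can act as few-shot examples at prompt time (the prompt format below is illustrative, not Aible's):

```python
# Approved question/answer corrections from a user are prepended to the prompt
# as examples, so the model immediately reflects that user's feedback without
# any retraining.

def build_prompt(question: str, feedback_examples: list[tuple[str, str]]) -> str:
    shots = "\n\n".join(
        f"Question: {q}\nPreferred answer: {a}" for q, a in feedback_examples
    )
    return f"{shots}\n\nQuestion: {question}\nPreferred answer:"

# Hypothetical feedback previously approved by this user
user_feedback = [
    ("What is ARR?", "Annual Recurring Revenue, reported in USD."),
]
print(build_prompt("What is NRR?", user_feedback))
```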

Unstructured Data Hallucination Double Check

Aible’s “If It’s Blue, It’s True” automated hallucination double-checking for structured data has consistently been one of our most popular GenAI features. We have now brought the same capability to unstructured data. Aible automatically parses the GenAI output to detect which sections were based on enterprise documents and which were ‘made up’ by the GenAI.

Users can simply hover over the highlighted sections to see exactly where the GenAI got the relevant information.
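
The following is a simplified sketch of the grounding idea (not Aible's actual method): each sentence of the answer is matched against the retrieved document snippets, and only well-supported sentences would be highlighted.

```python
def word_overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa) if wa else 0.0

def tag_sentences(answer: str, snippets: list[str], threshold: float = 0.6):
    tagged = []
    for sentence in answer.split(". "):
        best = max(snippets, key=lambda s: word_overlap(sentence, s))
        supported = word_overlap(sentence, best) >= threshold
        # Supported sentences would be highlighted, with the source shown on hover;
        # unsupported ones are flagged as potentially made up.
        tagged.append((sentence, "grounded" if supported else "unverified",
                       best if supported else None))
    return tagged
```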

Blended Datasets - HTML/PDF, etc.

Aible can now be leveraged for GenAI use cases that span insights across multiple unstructured document sets in different formats. Multiple documents of different types - for example PDF, CSV, HTML, URL, Markdown and more - can be included in the same document set. Users can then ask questions that span all of the disparate document types.

Aible automatically detects the document snippets most relevant to the user's question, whatever the type of the original document, and answers the question based on the most relevant content.
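
A rough sketch of how documents of different types could be reduced to a single snippet index (the extractors here are deliberately trivial; real parsers, especially for PDF, are assumed):

```python
import csv, io

def extract_text(doc: bytes, doc_type: str) -> str:
    if doc_type == "csv":
        rows = csv.reader(io.StringIO(doc.decode()))
        return "\n".join(", ".join(r) for r in rows)
    if doc_type in ("html", "markdown", "txt"):
        return doc.decode()  # a real system would strip markup properly
    raise ValueError(f"{doc_type} would need a dedicated parser (e.g., PDF)")

def build_snippets(documents: list[tuple[bytes, str]], size: int = 500) -> list[str]:
    # All formats end up as plain-text snippets in one index, so retrieval
    # can span disparate document types.
    snippets = []
    for doc, doc_type in documents:
        text = extract_text(doc, doc_type)
        snippets += [text[i:i + size] for i in range(0, len(text), size)]
    return snippets
```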

Vector DB Settings per Dataset

VectorDBs are a key technology for most unstructured GenAI use cases. Based on a user's question, a VectorDB retrieves the most relevant document snippets, which the GenAI then uses to answer the question. The problem is that most VectorDBs are configured independently of the actual use case. Examples of such settings include the length of individual snippets and the number of snippets returned by the VectorDB.

The total amount of information sent by the VectorDB (the average length of the snippets multiplied by the number of snippets) is constrained by the GenAI's context length, or context window, which is the total amount of information that can be sent to the GenAI from the VectorDB while leaving room for the user prompt and the response.

For example, if you are trying to answer questions based on a directory, you would want the VectorDB to return short snippets, because adjacent entries in a directory (think old-school yellow pages) do not contain related information. At the same time, because a directory might contain many entries relevant to the question, you want to return many individual snippets. The settings would be very different if you are trying to answer questions based on a news article. Here the snippets should be longer, because adjacent sections of text typically contain related information, but there is only room for fewer snippets so as not to exceed the context window.
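
A back-of-the-envelope sketch of this budget (the token numbers below are illustrative, not recommended settings):

```python
def fits_context(snippet_tokens: int, num_snippets: int,
                 context_window: int, prompt_reserve: int, response_reserve: int) -> bool:
    # Snippet length times snippet count must fit in the context window after
    # reserving room for the user prompt and the response.
    return snippet_tokens * num_snippets <= context_window - prompt_reserve - response_reserve

# Directory-style dataset: many short snippets
print(fits_context(snippet_tokens=50, num_snippets=40,
                   context_window=4096, prompt_reserve=500, response_reserve=1000))  # True

# Article-style dataset: fewer, longer snippets
print(fits_context(snippet_tokens=600, num_snippets=4,
                   context_window=4096, prompt_reserve=500, response_reserve=1000))  # True
```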

Aible runs a completely serverless VectorDB, so there can be different available-on-demand VectorDBs for individual use cases. This allows Aible users to apply the right settings for each use case without affecting other use cases. Aible chat templates incorporate best-practice VectorDB settings for common use cases.

Chat Analytics & Anomaly Detection

Aible automatically monitors every chat interaction across clouds, models, users, use cases, etc. in a consistent way. All monitoring data is stored in the customer’s own Virtual Private Cloud (VPC). Aible automatically detects the most popular use cases, datasets, etc. and highlights underlying patterns of positive and negative feedback. Aible also auto-detects ‘anomalous prompts’ - cases where a user's prompts significantly differ from those of other users.

These can indicate a case where the relevant user is doing something problematic, but they can also simply mean the user has adopted a new best practice that others have not yet learned. Organizations can review user-specific anomalous prompt patterns to flag problematic use, recommend training, or promote best practices.
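
A deliberately simplified illustration of the idea (not Aible's method): compare a per-user prompt statistic against the population and flag large deviations; a real system would compare prompt embeddings rather than a single scalar feature.

```python
from statistics import mean, stdev

def flag_anomalous_users(prompts_by_user: dict[str, list[str]], z_threshold: float = 3.0):
    # Per-user average prompt length as a toy stand-in for richer prompt features
    avg_len = {u: mean(len(p.split()) for p in ps) for u, ps in prompts_by_user.items()}
    mu, sigma = mean(avg_len.values()), stdev(avg_len.values())
    # Flag users whose prompts deviate far from the population norm
    return [u for u, v in avg_len.items() if sigma and abs(v - mu) / sigma > z_threshold]
```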

Chat Template Lineage

Aible Chat Templates encode best practices such as Large Language Model (LLM) settings, VectorDB settings, Prompt Augmentation, grounding instructions, etc. Aible includes default chat templates for common use cases such as document summarization, analytics, NLQ, etc. But organizations often want to customize the chat templates for the unique needs of their use cases. For example, we may create a Salesforce Lead Analytics chat template, derived from the primary analytics chat template, that understands how Salesforce tags custom variables with _c. This enables a better user experience because the LLM becomes aware of which variables are custom for the organization. A specific Salesforce customer may then derive further from the general Salesforce analytics chat template to incorporate information specific to their organization, such as the definition of their fiscal quarter.

Quite soon you may end up with multiple chat templates derived from each other, and you need a mechanism by which changes made in a parent template can percolate down to child templates. Aible Chat Template Lineage provides that mechanism. It also makes it easy to see the actual lineage of any template, just as you can easily see traditional data lineage. This is essential for proper governance of this crucial GenAI capability.

When underlying technology such as LLMs change, we need to update the chat templates to compensate for the change so that the use case works better than before on the updated technology. The Prompt Augmentation and recommended settings for GPT-3.5 are significantly different from those for GPT-4, for example, so when GPT-4 was released we had to update each of the relevant chat templates to work with GPT-4 as well. With the Aible Chat Template Lineage mechanism, we typically need to make such changes only once in the original parent template; all use-case-specific child templates derived from the parent immediately inherit the improvements and can start benefiting from the new technology. Without this lineage capability, IT organizations would find it very difficult to manually keep use cases updated as the underlying technology changes.
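
As a rough sketch of how parent/child template inheritance can work (not Aible's data model; the ChatTemplate class below is hypothetical):

```python
class ChatTemplate:
    def __init__(self, name, parent=None, **overrides):
        self.name, self.parent, self.overrides = name, parent, overrides

    def resolved(self) -> dict:
        # A child stores only its overrides and inherits everything else from
        # its parent, so a change to the parent flows to all derived templates.
        base = self.parent.resolved() if self.parent else {}
        return {**base, **self.overrides}

analytics = ChatTemplate("analytics", llm="gpt-3.5",
                         prompt_prefix="You are a data analyst.")
salesforce = ChatTemplate("salesforce_analytics", parent=analytics,
                          prompt_prefix="You are a data analyst. Columns ending in _c are custom Salesforce fields.")

analytics.overrides["llm"] = "gpt-4"   # update the parent once...
print(salesforce.resolved()["llm"])    # ...and the child inherits it: gpt-4
```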