Large language models (LLMs) can use very long contexts, sometimes hundreds of thousands of tokens. OpenAI's GPT-4 accepts inputs of up to 32K tokens, while Anthropic's Claude can handle a 100K-token context. This lets LLMs process very large documents, which is very useful for question answering and information retrieval.

A recently released paper from Stanford University examines how large language models use context, particularly long contexts, on two key tasks: multi-document question answering and key-value retrieval. The findings show that performance is typically best when the relevant information appears at the beginning or end of the input context, and that it declines significantly when models must access relevant information in the middle of a long context. One possible explanation lies in the way humans write: the opening and concluding segments of a text tend to carry the most crucial information.
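To make the key-value retrieval task concrete, here is a minimal sketch of how such a probe can be built: a context of random key-value pairs in which the position of the one relevant pair is varied from the beginning to the middle to the end. The function name and prompt wording are illustrative assumptions, not the exact setup used in the study.

```python
import json
import random


def make_kv_prompt(num_pairs, target_index, seed=0):
    """Build a synthetic key-value retrieval prompt: a JSON object of
    random hex key-value pairs, where the model must return the value
    for one designated key.

    target_index controls where the relevant pair sits in the context
    (0 = beginning, num_pairs - 1 = end), which is the variable the
    Stanford study manipulates.
    """
    rng = random.Random(seed)
    pairs = [
        (f"key-{rng.getrandbits(64):016x}", f"val-{rng.getrandbits(64):016x}")
        for _ in range(num_pairs)
    ]
    target_key, target_value = pairs[target_index]
    context = json.dumps(dict(pairs), indent=1)
    prompt = (
        "JSON data:\n" + context +
        f'\n\nWhat is the value of "{target_key}"?'
    )
    return prompt, target_value


# Place the relevant pair at the start, middle and end of a 75-pair
# context; each prompt would then be sent to the model under test.
for pos in (0, 37, 74):
    prompt, expected_value = make_kv_prompt(75, pos)
```

Scoring is then simply whether the model's answer contains `expected_value`, averaged over many random contexts per position.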

These findings show that care is needed when using LLMs for search and information retrieval over long documents: information buried in the middle of the context may be overlooked by the LLM, producing wrong or less accurate responses.
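One common mitigation in retrieval pipelines, not discussed in the article itself, is to reorder retrieved passages so that the highest-scoring ones land at the edges of the prompt and the weakest ones fall in the middle. A minimal sketch of that reordering, with an assumed input of `(passage, score)` tuples:

```python
def reorder_for_long_context(passages_with_scores):
    """Reorder retrieved passages so the most relevant sit at the
    beginning and end of the prompt, where long-context models
    attend best.

    Highest-scoring passages alternate between the front and the
    back of the list, leaving the weakest in the middle.

    passages_with_scores: list of (passage, score) tuples,
    higher score = more relevant.
    """
    ranked = sorted(passages_with_scores, key=lambda p: p[1], reverse=True)
    front, back = [], []
    for i, (passage, _) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(passage)
    # Reverse the back half so relevance increases again toward the end.
    return front + back[::-1]
```

The concatenation of the reordered passages then forms the context, with the two strongest passages occupying the first and last positions.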

Lingua Custodia has over 10 years of experience in language technologies for financial document processing, so we are very aware of the importance of context for search and information retrieval, sentiment analysis, content summarisation and extraction. We continuously study the impact of context size in these language models.

Our expert team consists of scientists, engineers and developers, so we are well placed to create, customise and design secure LLMs which are perfectly tailored to meet your business needs.
