Overcoming the logical limits of LLMs
At POMA we see LLMs as one of the most impressive applications of large-scale mathematics in decades. That is also why we understand that some of their limits are inherent to that mathematics and cannot be overcome by solutions that ignore it. Hallucinations in particular – contrary to some sadly proliferating beliefs – cannot be eliminated by re-training and fine-tuning alone; they require sophisticated Retrieval Augmented Generation (RAG). This is where we come in. While there are many RAG approaches and solutions out there, only we ensure that only the truly relevant parts of your data are used as context for your LLM – saving you up to 99% of context window size, and therefore cost.
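To illustrate the core idea behind RAG context reduction – not POMA's actual retrieval method, just a minimal sketch with toy bag-of-words embeddings – the following selects only the chunks most similar to a query instead of sending the whole corpus to the LLM:

```python
# Minimal sketch of RAG-style context selection: only the top-k chunks
# most similar to the query are kept as LLM context. The "embeddings"
# here are toy word-count vectors purely for illustration; production
# systems use learned dense embeddings.
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Invoices are due within 30 days of receipt.",
    "Our office cat is named Milo.",
    "Late payments incur a 2% monthly fee.",
    "The cafeteria serves lunch from noon.",
]
# Only 2 of 4 chunks are sent as context - the context window shrinks
# while the relevant facts are preserved.
context = retrieve("When are invoices due and what happens if I pay late?", chunks)
```

Scaled up from four toy chunks to a large document base, this is how retrieval can cut the context actually sent to the model by orders of magnitude.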
Our products to put your data to use in LLMs
POMAindex
A ready-to-use alternative to RAG libraries and tools such as LlamaIndex, harnessing the combined power of Unstructured.io, pgVector, Pinecone, Cohere and OpenAI. No further API keys needed – just your POMA account. Simply ingest your data with us – as documents or even just URLs – and we take care of the rest. Alternatively, bring your own Unstructured.io results and your own LLMs, and let us manage the chunking and context retrieval for your own RAG.
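For readers unfamiliar with the chunking step mentioned above, here is a deliberately naive sketch. Real pipelines (including, per the text above, POMAindex) use structure-aware chunking; this fixed-size split with overlap only shows the basic idea:

```python
# Naive fixed-size chunking with overlap, as a baseline illustration of
# the chunking step in a RAG ingestion pipeline. Overlap ensures that
# facts straddling a chunk boundary appear whole in at least one chunk.
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Structure-aware chunking instead splits along document elements (sections, paragraphs, table rows), which is one reason tools like Unstructured.io are used for ingestion.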
POMAnonym
For projects requiring pseudonymization of sensitive PII – especially in the context of the GDPR and transatlantic data transfers – POMAnonym lets you benefit from the best LLMs available worldwide without compromising your own or your customers’ data protection levels. Our PII abstraction layer runs on per-customer clusters in your desired location and under your sole control. Your sensitive data never leaves your safe haven, while the LLM responses are ready to use after being translated back through POMAnonym.
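The round-trip described above can be sketched as follows. This is an illustration of the general pseudonymization pattern, not POMAnonym's implementation: detection here is a toy email regex, whereas a real abstraction layer covers many PII categories.

```python
# Sketch of a pseudonymization round-trip: PII is swapped for
# placeholders before the prompt leaves the trusted environment, and
# the mapping (which never leaves) restores real values in the answer.
# Only email addresses are detected here, purely for illustration.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    mapping: dict[str, str] = {}
    def repl(m: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = m.group(0)  # mapping stays local, never sent out
        return token
    return EMAIL.sub(repl, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

original = "Write a reminder to anna@example.com about her invoice."
prompt, mapping = pseudonymize(original)
# 'prompt' now contains "<PII_0>" instead of the address; only this
# pseudonymized prompt would be sent to the external LLM. The LLM's
# response is then passed through restore() before being used.
```

The key property is that the placeholder mapping is held only inside your own environment, so the external LLM never sees the real values.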
POMAcustom
Domain-specific custom builds using POMAindex and, optionally, POMAnonym – with ingestion and querying fully managed by us.
Already live:
BOPA.bot (Andorran law)
…more to come soon!
Get in touch to learn more!