
Solving chunking.
We believe in LLMs and RAG
At POMA we regard LLMs as perhaps the most brilliant application of large-scale mathematics in a long time. This is also why we understand that some of their limits are logical and cannot be tackled by solutions that fail to appreciate that logic. Hallucinations in particular, contrary to some sadly proliferating beliefs, cannot logically be solved merely by enlarging, re-training, or fine-tuning the LLMs; they require sophisticated Retrieval-Augmented Generation (RAG). While there are many RAG approaches and solutions out there, our algorithm ensures that only the truly relevant parts of a knowledge base are used as context for LLMs, saving up to 99% of input context (and with it, energy and cost).
We are still in stealth, but we can keep you posted.
Made with 🍏 by TIGON S.L.U., Carrer Bellavista 16, 1°-4°, AD200 Encamp (Andorra)