The better setup generates "semantic embeddings" that map how the stored data relate to each other (by encoding those relationships in the model's own weights and biases). That, plus knowledge-graph lookups, in which the links between different pieces of data are evaluated in the same way.
In that setup, the very expensive LLM portion really just produces rough natural-language approximations of the information.
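To make the embedding side concrete, here is a toy sketch of how semantic similarity between stored items can be evaluated. The store, the vectors, and the query are all made up for illustration; real embeddings would come from an embedding model with hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How closely two embedding vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding store: in practice these vectors are produced
# by an embedding model, not hand-written.
store = {
    "article_a": np.array([0.9, 0.1, 0.0]),
    "article_b": np.array([0.1, 0.9, 0.2]),
    "article_c": np.array([0.5, 0.5, 0.0]),
}

# A query embedded into the same vector space.
query = np.array([0.85, 0.15, 0.05])

# Rank stored items by how semantically related they are to the query.
ranked = sorted(store, key=lambda k: cosine_similarity(store[k], query),
                reverse=True)
print(ranked[0])
```

The knowledge-graph lookup works on the same principle, except the relationships are explicit edges between items rather than distances in a vector space.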