We use Mastra with memory, and it correctly creates the memory_messages table. However, there is no way to specify which index type or distance metric to use: it always defaults to ivfflat with cosine distance, even though the underlying vector store code accepts these options.
The main problem stems from the fact that Mastra calls setupIndex regularly and tries to rebuild the index with a different list size. Unfortunately, we have so many messages that this rebuild times out, and the chat regularly breaks because of it. It would be great if we could configure which index the memory should use. That would let us pick hnsw, and also inner product distance, which performs better with OpenAI embeddings because they are normalized.
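To illustrate what a configurable index could look like, here is a small sketch of a helper that builds the pgvector DDL for a given index type and metric. The table and column names (`memory_messages`, `embedding`) and the helper itself are assumptions for illustration, not Mastra's actual API; only the pgvector index methods and operator classes are real.

```typescript
// Hypothetical helper sketching the pgvector index DDL we would like to be
// able to configure. Only the pgvector index methods (ivfflat, hnsw) and
// operator classes (vector_*_ops) are real; everything else is illustrative.
type IndexType = "ivfflat" | "hnsw";
type Metric = "cosine" | "ip" | "l2";

const opClass: Record<Metric, string> = {
  cosine: "vector_cosine_ops",
  // Inner product is cheaper than cosine when embeddings are normalized,
  // as OpenAI embeddings are.
  ip: "vector_ip_ops",
  l2: "vector_l2_ops",
};

function buildIndexSql(
  table: string,
  column: string,
  index: IndexType,
  metric: Metric,
): string {
  return `CREATE INDEX IF NOT EXISTS ${table}_${column}_idx ` +
    `ON ${table} USING ${index} (${column} ${opClass[metric]});`;
}

// What Mastra effectively does today (hardcoded):
console.log(buildIndexSql("memory_messages", "embedding", "ivfflat", "cosine"));

// What we would like to be able to configure:
console.log(buildIndexSql("memory_messages", "embedding", "hnsw", "ip"));
```

An hnsw index is also built incrementally and has no list count tied to the row count, so it would avoid the periodic full rebuilds that currently time out.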