Show HN: HelixDB – Open-source vector-graph database for AI applications (Rust) (github.com)
169 points by GeorgeCurtis 16 hours ago | 74 comments
quantike 4 hours ago [-]
I spent a bit of time reading up on the internals and had a question about a small design choice (I am new to DB internals, specifically as they relate to vector DBs).

I notice that in your core vector type (`HVector`), you choose to store the vector data as a `Vec<f64>`. Given what I have seen from most embedding endpoints, they return `f32`s. Is there a particular reason for picking `f64` vs `f32` here? Is the additional precision a way to avoid headaches down the line or is it something I am missing context for?

Really cool project, gonna keep reading the code.

xavcochran 55 minutes ago [-]
Thanks for the question! We chose f64 as a default for now just to cover all cases, and we believed that basic vector operations would not be our bottleneck initially. As we optimize our HNSW implementation, we are going to add support for f32 and binary vectors, and drop Vec<f64/f32> in favour of [f64/f32; {num_dimensions}] to avoid unnecessary heap allocation!
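To make the trade-off concrete, here is a rough sketch of the two layouts (illustrative only, not our actual HVector code): a Vec<f64> always puts its elements on the heap, while a const-generic array keeps them inline.

```rust
// Illustrative sketch only, not the real HVector implementation.

// Today: heap-allocated f64 storage, dimension only known at runtime.
struct HVectorHeap {
    data: Vec<f64>,
}

// Planned direction: element type generic and dimension fixed at compile
// time, so the data lives inline with no separate heap allocation.
struct HVectorInline<T, const N: usize> {
    data: [T; N],
}

fn main() {
    let heap = HVectorHeap { data: vec![0.1, 0.2, 0.3] };
    let inline: HVectorInline<f32, 3> = HVectorInline { data: [0.1, 0.2, 0.3] };
    println!("{} dims on the heap, {} dims inline", heap.data.len(), inline.data.len());
}
```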
rohanrao123 12 hours ago [-]
Congrats on the launch! I'm one of the authors of that paper you cited, glad it was useful and inspiring for building this :) Let me know if we can support in any way!
GeorgeCurtis 12 hours ago [-]
Wow! I enjoyed reading it a lot and it was definitely inspiring for this project!

Would love to talk to you about it and make sure we capture all of the pain points if you're open to it? :)

rohanrao123 11 hours ago [-]
Absolutely, will DM you on X!
srameshc 9 hours ago [-]
I was thinking about intertwining vector and graph, because I have one specific use case that requires this combination. But I am not courageous or competent enough to build such a DB, so I am very excited to see this project and will certainly use it. One question: what kind of hardware do you think this would require? I ask because, from what I understand, graph database performance is directly proportional to the amount of RAM available, and vectors also need persistence and computational resources.
GeorgeCurtis 6 hours ago [-]
The fortunate thing about our vector DB, as I mentioned in the post, is that we store the HNSW on disk, so it is much less intense on your memory. Similar to what turbopuffer has done.

With regard to the graph db, we mostly use our laptops to test it and haven't run into an issue with performance yet on any size dataset.

If you wanna chat DM me on X :)

UltraSane 7 hours ago [-]
Neo4j supports vector indexes
GeorgeCurtis 6 hours ago [-]
First of all, Neo4j is very slow for vectors, so if performance is something that matters for your user experience they definitely aren't a viable option. This is probably why Neo4j themselves have released guides on how to build that middleman software I mentioned with Qdrant for viable performance.

Furthermore, their vectors are capped at 4k dimensions which, although that may be enough most of the time, is a problem for some of the users we've spoken to. Also, they don't allow pre-filtering, which is a problem for a few people we've spoken to, including Zep AI. They are on the right track, but there are a lot of holes that we are hoping to fill :)

Edit: AND, it is super memory intensive. People have had problems with extremely small datasets and have had memory overflows.

hbcondo714 15 hours ago [-]
Congrats! Any chance Helixdb can be run in the browser too, maybe via WASM? I'm looking for a vector db that can be pre-populated on the server and then be searched on the client so user queries (chat) stay on-device for privacy / compliance reasons.
xavcochran 51 minutes ago [-]
To add to George's reply: for Helix to run in the browser with WASM, the storage engine has to be completely in memory. At the moment we use LMDB, which uses file-based storage, so that doesn't work in the browser. As George said, we plan on making our own storage engine, and as part of that we aim to have an in-memory implementation.
GeorgeCurtis 15 hours ago [-]
Interesting, we've had a few people ask about this. So essentially you'd call the server to retrieve the HNSW and then store it in the browser and use WASM to query it?

Currently the roadblock for that is the LMDB storage engine. Our own storage engine is on our roadmap, and we want to include WASM support with it. If you wanna talk about it, reach out to me on Twitter: https://x.com/georgecurtiss

tmpfs 12 hours ago [-]
This is very interesting. Are there any examples of interacting with LLMs? If the queries are compiled and loaded into the database ahead of time, the pattern of asking an LLM to generate a query from a natural-language request seems difficult: current LLMs aren't going to know your query language yet, and compiling each query for each prompt would add unnecessary overhead.
GeorgeCurtis 11 hours ago [-]
This is definitely a problem we want to work on fixing quickly. We're currently planning an MCP tool that can traverse the graph and decide for itself at each step where to go next, as opposed to having to generate actual written queries.

I mentioned in another comment that you can provide a grammar with constrained decoding to force the LLM to generate tokens that comply with the grammar. This ensures that only valid syntactic constructs are produced.
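As a toy illustration of constrained decoding (nothing to do with llama.cpp's actual grammar machinery, and the query strings are made up): at each step you simply discard candidate tokens that can no longer lead to a syntactically valid query.

```rust
// Toy sketch of grammar-constrained decoding: keep only candidate tokens
// whose continuation is still a prefix of some valid query.
// (Hypothetical strings for illustration; not real HQL or llama.cpp code.)
fn allowed(prefix: &str, candidates: &[&str], valid_queries: &[&str]) -> Vec<String> {
    candidates
        .iter()
        .filter(|tok| {
            let next = format!("{prefix}{tok}");
            valid_queries.iter().any(|q| q.starts_with(next.as_str()))
        })
        .map(|tok| tok.to_string())
        .collect()
}

fn main() {
    let valid = ["QUERY getUser() =>", "QUERY addUser() =>"];
    let candidates = ["QUERY ", "SELECT ", "DROP "];
    // Only "QUERY " survives the mask, so the model cannot emit invalid syntax.
    println!("{:?}", allowed("", &candidates, &valid));
}
```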

sitkack 7 hours ago [-]
Excellent work. Very excited to test this out. What are the limits or gotchas we should be aware of, or how do you want it pushed?

What other papers did you get inspiration from?

xavcochran 2 hours ago [-]
Thanks for the kind words! At the moment the query language transpilation is quite unstable, but we are in the process of a large remodel which we aim to finish in the next day or so. This will make the query language compilation far more robust, and it will return helpful error messages (like the Rust compiler). The other thing is that the core traversals are currently single-threaded, so aggregating huge lists of graph items can take a bit of a hit. Note, however, that we are also implementing parallel LMDB iterators with the help of the Meilisearch guys to make aggregation of large results much faster.
huevosabio 14 hours ago [-]
Can I run this as an embedded DB like SQLite?

Can I sidestep the DSL? I want my LLMs to generate queries, and using a new language is going to make that hard or expensive.

GeorgeCurtis 13 hours ago [-]
Currently you can't run us embedded, and I'm not sure how you could sidestep the DSL :/

We're working on putting our grammar into llama.cpp so that it only outputs grammatically correct HQL. But even without that, it shouldn't be hard or expensive to do. I wrote a Claude wrapper that had our docs in its context window, and it did a good job of writing queries most of the time.

rationably 5 hours ago [-]
The fact that it's "backed by NVIDIA" and licensed under AGPL-3.0 makes me wonder about the cost(s) of using it in production.

Could you share any information on the pricing model?

GeorgeCurtis 2 hours ago [-]
We are open-source, so you can use and self-host us for free. Our plan is to create a managed service (so long as all goes well), which shouldn't be priced any differently from other databases in the space.

We chose AGPL to make sure someone can't make a cloud-hosted version of our product; think MongoDB on AWS a few years back.

anonymousDan 5 hours ago [-]
What would be a typical/recommended server setup for using this for RAG? Would you typically have a separate server for the GPUs and the DB itself?
xavcochran 1 hours ago [-]
Assuming you are using GPUs for model inference, the best way to set it up would be to have the DB on one server and a separate server to send inference requests to. Note that we plan on supporting custom model endpoints on the database side, so you probably won't need the inference server in the future!
youdont 11 hours ago [-]
Looks very interesting, but I've seen these kinds of multi-paradigm databases like Gel, Helix and Surreal, and I'm not sure that any of them quite hit the graph spot.

Does Helix support much of the graph algorithm world? For things like GraphRAG.

Either way, I'd be all over it if there was a Python SDK which worked with the generated types!

GeorgeCurtis 1 hours ago [-]
We started as a graph database, so that's definitely the main thing we want to get right, and we want to prioritise capturing all that functionality.

We have a Python SDK already! What do you mean by generated types though?

Onawa 7 hours ago [-]
I have been happily using Gel (formerly EdgeDB) for a few projects. I'm curious what you think it is missing with regard to hitting the "graph spot"?
GeorgeCurtis 6 hours ago [-]
Gel is a relational database; have you been building with it under a graph-type philosophy?
dietr1ch 11 hours ago [-]
Graph DB OOMing 101. Can it do Erdős/Bacon numbers?

Graph DBs have been plagued by exploding query complexity, as doing things like allowing recursion or counting paths isn't as trivial as it may sound. Do you have benchmarks and comparisons against other engines and query languages?

GeorgeCurtis 2 hours ago [-]
No, we are in the process of writing up some proper benchmarks. Our first user used us to build MuskMap and TrumpMap, which went viral on Twitter. Not sure how it compared to other graph DBs at the time (bear in mind this was v1 and very bare bones), but it took latency that was >5s with Postgres down to 50ms with us.
esafak 15 hours ago [-]
How does it compare with https://kuzudb.com/ ?
GeorgeCurtis 15 hours ago [-]
Kuzu doesn't support incremental indexing on the vectors. The vector index is completely separate and decoupled from the graph.

I.e. you have to re-index all of the vectors when you make an update to them.

Attummm 13 hours ago [-]
It sounds very intriguing indeed. However, the README makes some claims. Are there any benchmarks to support them?

> Built for performance we're currently 1000x faster than Neo4j, 100x faster than TigerGraph

GeorgeCurtis 13 hours ago [-]
Those were actual benchmarks that we ran; we didn't get a chance to write them out properly before posting. I'll get on it now and will notify you by replying to this comment when they're in the README :)
carlhjerpe 15 hours ago [-]
Nice "I'll have this name" when there's already the helix editor :)
GeorgeCurtis 15 hours ago [-]
First I'm hearing of it. The Beatles must've been super pissed when Apple took their name :(
carlhjerpe 14 hours ago [-]
https://crates.io/search?q=Helix

I'm surprised no one on the team searched crates.io once before picking the name. Good luck!

itishappy 13 hours ago [-]
I don't think `helix-editor` is even on crates.io, just placeholders.

https://github.com/helix-editor/helix/discussions/7038

That being said, when I saw `helix-db` I was thrown too. "What's a text editor doing writing a vector-graph database, I thought they were working on plugins?"

GeorgeCurtis 14 hours ago [-]
We just started off as a side project and thought the name fit well, with the strands, graph-type structure, connections...

We didn't think of getting people to use it until we found it was solving a real pain point for people, so we weren't worried about trademarks or names. There was no other Helix DB, so that was good enough for us at the time.

tavianator 13 hours ago [-]
> There was no other helix db

https://en.wikipedia.org/wiki/Helix_(database)

GeorgeCurtis 13 hours ago [-]
There was no active one. We saw this and thought it would be a nice nod to history. We've actually spoken to some developers at Apple who thought this was really neat :)
carlhjerpe 14 hours ago [-]
It's not the end of the world, just me being a bit grumpy. I mean it when I say good luck! :)
GeorgeCurtis 13 hours ago [-]
Thank you :)
bbatsell 14 hours ago [-]
I can't tell if this is droll sarcasm, but just in case not...

https://en.wikipedia.org/wiki/Apple_Corps_v_Apple_Computer

cormullion 14 hours ago [-]
perhaps it’s a homage to the famous Helix database (see Wikipedia)
GeorgeCurtis 13 hours ago [-]
well noted
J_Shelby_J 15 hours ago [-]
How do you think about building the graph relationships? Any special approaches you use?
GeorgeCurtis 15 hours ago [-]
Pretty much the same way you would with any graph DB, with the added benefit of being able to treat a vector as a node by creating those explicit relationships between them.
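Very roughly, the mental model is something like this (a hypothetical sketch, not HelixDB's actual API): the embedding is just another node, so edges can connect it to ordinary graph nodes and traversals can hop between the two.

```rust
// Hypothetical sketch of the mental model, not HelixDB's actual API.
use std::collections::HashMap;

#[derive(Debug)]
enum Node {
    Entity { label: String },
    Vector { embedding: Vec<f32> },
}

#[derive(Debug)]
struct Edge {
    from: u64,
    to: u64,
    kind: String,
}

#[derive(Default, Debug)]
struct Graph {
    nodes: HashMap<u64, Node>,
    edges: Vec<Edge>,
}

fn main() {
    let mut g = Graph::default();
    g.nodes.insert(1, Node::Entity { label: "Document".into() });
    g.nodes.insert(2, Node::Vector { embedding: vec![0.1, 0.2, 0.3] });
    // The vector is a first-class node, so the relationship is explicit.
    g.edges.push(Edge { from: 1, to: 2, kind: "HAS_EMBEDDING".into() });
    println!("{} nodes, {} edges", g.nodes.len(), g.edges.len());
}
```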

Does that answer your question properly?

wiradikusuma 6 hours ago [-]
"faster than Neo4j" How does it compare to Dgraph?
GeorgeCurtis 5 hours ago [-]
We don't have any benchmarks against them, but from what I've just read about their benchmarks, we should be just as good as them.

That is just hearsay though; I am interested myself now and will run some proper benchmarks.

SchwKatze 15 hours ago [-]
Super cool!!! I'll try it this week and come back to give feedback.
GeorgeCurtis 15 hours ago [-]
I look forward to it :)
javierluraschi 15 hours ago [-]
What is the max number of dimensions supported for a vector?
GeorgeCurtis 14 hours ago [-]
There is currently no cap. We will probably impose a cap similar to Qdrant's or Pinecone's (~64k) some time soon. There's obviously a performance trade-off as you go up, but we hope to massively offset this by doing binary quantisation within the next couple of months.
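For anyone unfamiliar, binary quantisation keeps one bit per dimension (the sign) and compares vectors by Hamming distance, which is why it offsets the cost of high-dimensional vectors so well. A minimal sketch of the idea (illustrative only, not our implementation):

```rust
// Toy sketch of binary quantisation: keep one bit per dimension (the sign),
// then compare vectors by Hamming distance instead of float maths.
fn quantise(v: &[f32]) -> Vec<u64> {
    let mut bits = vec![0u64; (v.len() + 63) / 64];
    for (i, &x) in v.iter().enumerate() {
        if x > 0.0 {
            bits[i / 64] |= 1u64 << (i % 64);
        }
    }
    bits
}

fn hamming(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let a = quantise(&[0.3, -0.1, 0.7, -0.9]);
    let b = quantise(&[0.2, 0.4, -0.5, -0.8]);
    // A 1024-dim f32 vector (4 KiB) shrinks to 128 bytes this way.
    println!("hamming distance = {}", hamming(&a, &b));
}
```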
elpalek 14 hours ago [-]
What method/model are you using for sparse search?
GeorgeCurtis 13 hours ago [-]
We're going to use BM25. Currently it is just dense search. Coming very soon
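For reference, BM25 scores a document against each query term using an IDF-weighted, length-normalised term frequency. A minimal sketch (illustrative only, not our implementation; k1 and b are the usual defaults):

```rust
// Minimal BM25 scoring sketch (illustrative only, not HelixDB's code).
// score(D, Q) = sum over q in Q of IDF(q) * tf*(k1+1) / (tf + k1*(1 - b + b*|D|/avgdl))
fn bm25_score(
    query_terms: &[&str],
    doc_terms: &[&str],
    avg_doc_len: f64,
    n_docs: f64,
    doc_freq: impl Fn(&str) -> f64, // number of documents containing the term
) -> f64 {
    let (k1, b) = (1.2, 0.75); // common defaults
    let doc_len = doc_terms.len() as f64;
    query_terms
        .iter()
        .map(|&q| {
            let tf = doc_terms.iter().filter(|&&t| t == q).count() as f64;
            let df = doc_freq(q);
            let idf = ((n_docs - df + 0.5) / (df + 0.5) + 1.0).ln();
            idf * (tf * (k1 + 1.0)) / (tf + k1 * (1.0 - b + b * doc_len / avg_doc_len))
        })
        .sum()
}

fn main() {
    // Tiny hypothetical corpus stats just to exercise the function.
    let score = bm25_score(
        &["graph", "database"],
        &["a", "graph", "database", "for", "graph", "queries"],
        8.0,
        1000.0,
        |term: &str| if term == "graph" { 120.0 } else { 40.0 },
    );
    println!("bm25 = {score:.3}");
}
```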
elpalek 13 hours ago [-]
Have you thought about SPLADE models? E.g.: https://arxiv.org/abs/2109.10086
GeorgeCurtis 13 hours ago [-]
Looks really interesting, I'll have a proper read. What would be your reasoning for incorporating this if we already have vector functionality and semantic search?
elpalek 11 hours ago [-]
My project deals with non-English text, and BM25 performance is middling. A language-specific sparse model helps.
xavcochran 49 minutes ago [-]
We will definitely look into it. The SPLADE models look promising!
raufakdemir 12 hours ago [-]
How can I migrate from Neo4j to this?
GeorgeCurtis 12 hours ago [-]
We can build an ingestion engine for you :)

We've built SQL and PGVector ones already, just waiting for someone who could make use of other ones before we build them.

Let us know! Twitter in my bio

michaelsbradley 6 hours ago [-]
Can you do a compare/contrast with CozoDB?

https://github.com/cozodb/cozo

xavcochran 58 minutes ago [-]
Apart from the fact that Cozo seems to be pretty dead, we use a different storage engine which makes our reads much faster. Based on their benchmarks, I estimate most of our reads to be 10x faster. I also think our query language is much simpler and easier to understand than Datalog, which is what they use.
lennertjansen 12 hours ago [-]
How did you get it 3 OOMs faster than Neo4j?
GeorgeCurtis 11 hours ago [-]
Partly because they're working with a monolith that I imagine is difficult to iterate on, and it's written in Java. We've had the benefit of working on this in Rust, which lets us get really into the nitty-gritty of different optimisations.

The friend I worked on this with is putting together a technical blog post on those graph optimisations, so I'll link it here when he's done.

xpe 10 hours ago [-]
On comparable benchmarks with comparable guarantees? Comparable persistence levels? I’m very skeptical.
GeorgeCurtis 1 hours ago [-]
Looking forward to putting you at ease :) Working on some proper benchmarks over the next few days.
riku_iki 11 hours ago [-]
How scalable is your DB in your tests? Could it be performant on graphs with 1B/10B/100B connections?
GeorgeCurtis 5 hours ago [-]
So far, we've tested it for up to ~10B connections and 50-odd million nodes. We haven't run into any problems with it yet.
sync 16 hours ago [-]
Looks nice! Are you looking to compete with https://www.falkordb.com or do something a bit different?
GeorgeCurtis 15 hours ago [-]
Pretty much; our biggest focus is on graph and hybrid RAG. They seem to have really homed in on Graph RAG since the last time I checked their website.

One of the problems I know people experience with them is that they're super slow at bulk reading.

Oh also, they aren't built in Rust haha

mdaniel 14 hours ago [-]
> so much easier that it’s worth a bit of a learning curve

I think you misspelled "vendor lock in"

GeorgeCurtis 13 hours ago [-]
You can literally use us for free haha. There's not a language that properly encapsulates graph and vector functionality, so we needed to make our own. Also, we thought it was dumb that query languages weren't type-safe... So we changed that
basonjourne 13 hours ago [-]
Why not SurrealDB?
GeorgeCurtis 12 hours ago [-]
The general consensus is that it's really slow; I like the concept of Surreal though. Our first, and extremely bare-bones, version of the graph DB was 1-2 orders of magnitude faster than Surreal (we haven't run benchmarks against Surreal recently, but I'll put them here when we're done).