Yes, a thousand percent! I'm working on this too. I'm sick of everyone trying to come up with a use case to get all my data into everyone's cloud so I have to pay a subscription fee just to make things work. I'm working on a fitness tracking app right now that will use the Sublime model - just buy it, get updates for X years, sync with all your devices, and use it forever. If you want updates after X years, buy the newest version again. If it's good enough as is - and that's the goal - just keep using it forever.
This is the model I want from 90% of the software out there: just give me a reasonable price to buy it, make the product good, and don't marry it to the cloud so much that it's unusable without it.
There are also a lot of added benefits to this model beyond data privacy (most are mentioned in the article), but not all the problems are solved here. This is a big space that still needs a lot of tooling to make things really smooth, but the tech to do it is there.
Finally, the best part (IMHO) about local-first software is that it brings back a much healthier incentive structure - you're not monetizing via ads, tracking users, or maximizing "engagement" - you're just building a product and getting paid for how good it is. To me it feels like software that actually serves the user.
patmorgan23 22 hours ago [-]
Obsidian, the note-taking app, is a great model to follow as well. The client is completely free and they sell an optional syncing service. The notes are all in Markdown files, so the client is completely optional.
crossroadsguy 12 hours ago [-]
This is the reason I have always refused to use the Bear note-taking app, irrespective of how good and snappy that app is. They keep their notes in a SQLite DB now, and even though that file can be backed up and handled locally, my notes are not easily accessible to me. I can't easily edit my notes in other editors (which I often like to do on my Mac), and I can't do version-controlled backup and sync of those files the way I want outside of iCloud (which is what Bear uses).
What is sad is that they used to be a local-files-first note app, and then they moved to SQLite citing some sync and performance issues.
throw10920 4 hours ago [-]
> What is sad is that they used to be a local-files-first note app, and then they moved to SQLite citing some sync and performance issues.
They're still a local-first note application. It's just slightly harder for you to edit your notes externally, and not even by that much - it's very easy to directly query (read and write) SQLite databases, and if you really cared you could write a script to grab a note, export it to a temporary text file, let you edit it, and then update the SQLite database.
> I can't do version-controlled backup and sync of those files
> then they moved to SQLite citing some sync and performance issues
Yes, that's because "plain text" files are bad for performance and harder to sync correctly. For people who (1) have over a hundred thousand notes they want to keep (like me) and (2) want maximum confidence that they're not going to lose years' worth of work, that's incredibly important.
The devs made the right choice. You can always write scripts to interface with a SQLite database with an external editor. You can't take plain text files and magically make them as fast and durable as a database.
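For illustration, a minimal sketch in Python of that grab-edit-update loop (untested; the database path and the "notes"/"title"/"body" names are placeholders, not Bear's actual schema, and writing back into a live third-party app's database is at your own risk):

    import os, sqlite3, subprocess, tempfile

    DB = os.path.expanduser("~/notes.sqlite")  # hypothetical path to the app's database
    conn = sqlite3.connect(DB)

    # grab one note (table/column names are assumptions)
    note_id, body = conn.execute(
        "SELECT rowid, body FROM notes WHERE title = ?", ("Cookie recipe",)
    ).fetchone()

    # hand it to $EDITOR via a temporary file
    with tempfile.NamedTemporaryFile("w+", suffix=".md", delete=False) as tmp:
        tmp.write(body)
        path = tmp.name
    subprocess.call([os.environ.get("EDITOR", "vi"), path])

    # write the edited text back (the risky part on a database you don't own)
    with open(path) as f:
        conn.execute("UPDATE notes SET body = ? WHERE rowid = ?", (f.read(), note_id))
    conn.commit()
    conn.close()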
fauigerzigerk 2 hours ago [-]
>and if you really cared you could have made a script to grab a note, export it to a temporary text file, allow you to edit it, then update the SQLite database.
Bear devs advise against doing that: "Generally speaking, it is safe to access the database for reading only"
Also interesting to note that you couldn't actually muster a coherent response to any of my points and just had to make ad hominem attacks and emotional outbursts.
canadiantim 7 hours ago [-]
SQLite is still local-first. Couldn't they also provide you with the notes in an Obsidian-like file/folder structure while using SQLite for in-app performance?
crossroadsguy 5 hours ago [-]
That was exactly my thought. But no, I guess they won't. They had good old plain-text files, but they changed to a SQLite DB, and apparently the reason they gave was sync-related challenges. Since it's not a FOSS app I don't know what those challenges were, and I feel all the other text-based note/editor apps that work on plain text files on disk must be doing something right or different. I don't know.
And as for SQLite being local-first - it's not the same. Given the nature of the data a plain-text note-taking app deals with, storing the notes in a local SQLite DB defeats the very idea of plain-text notes.
agos 11 hours ago [-]
I didn't know they made this change, which means it's time to think about migrating away from Bear. Which is a pity, because the software itself is rock solid.
crossroadsguy 5 hours ago [-]
Yes, I feel the same. A rock-solid app dismantled by just one developer choice. Still, I won't diss the devs, because it's their choice, but "a text note" is the last thing I would want "locked" in a SQLite DB, and Bear really is a "plain text note-taking app". So it's just sad.
nylonstrung 3 hours ago [-]
Is SQLite really "locked in"?
It's possibly the most ubiquitous open-source software ever (far more common than Markdown), and your notes exist in fully readable text form inside it.
Bear being proprietary is the real threat of lock-in.
strken 5 hours ago [-]
From memory, the sqlite3 CLI is something like
    sqlite3 data.db 'select text from notes'
I use sqlite3 to load and query random CSVs all the time. It feels a bit weird to hear data described as "locked" within a SQLite DB, because it's one of the simplest non-text formats for me to read data from. It would surprise me if it took more than five minutes to set up a one-way cron job to dump your notes into plain text files.
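Roughly what such a cron job could look like in Python (a sketch that assumes a simple "notes" table with "title" and "body" columns; the real schema would need a quick look with .schema first):

    #!/usr/bin/env python3
    # dump_notes.py - one-way export of notes from a SQLite DB into plain text files
    import pathlib, re, sqlite3

    DB = pathlib.Path.home() / "notes.sqlite"       # hypothetical database location
    OUT = pathlib.Path.home() / "notes-export"
    OUT.mkdir(exist_ok=True)

    conn = sqlite3.connect(str(DB))
    for title, body in conn.execute("SELECT title, body FROM notes"):  # assumed schema
        safe = re.sub(r"[^\w\- ]", "_", title or "untitled")           # filename-safe title
        (OUT / f"{safe}.md").write_text(body or "", encoding="utf-8")
    conn.close()

Run it every 15 minutes from cron and you always have a plain-text mirror to grep, diff, or sync however you like.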
crossroadsguy 2 hours ago [-]
My god! Really? What is this subthread now - bear app fan version of discussions.apple.com?
This is just unbelievable! Fucking pitchforks are out literally!
I am dealing with plain text notes and you all want me to write sql queries and scripts to access those fucking text files?
Are you all (these few people who just jumped into the subthread) pulling some sort of prank or something?
nerdyadventurer 17 hours ago [-]
> get updates for X years, sync with all your devices, and use it forever. If you want updates after X years, buy the newest version again. If it's good enough as is - and that's the goal - just keep using it forever.
While this sounds like a good deal, with this approach:
- You have to charge the total cost of the subscription at once (1y or 2y).
- You still have to keep servers running for syncing, and you also have to think about cases where a user syncs 1y of data in a single day.
- You have to keep people on the payroll for future development.
(You are thinking here only from the developer's perspective.)
maxhille 24 hours ago [-]
How do you plan to do the syncing without some sort of cloud infrastructure?
jmb99 18 hours ago [-]
The benefit of local-first means you’re not incentivized to sell your cloud offering, so you can just give options. Sync with iCloud, Google drive, OneDrive, Dropbox, Mega, SMB, SFTP, FTP, whatever you feel like adding support for. And since local-first usually means having some kind of sane file format, you can let “advanced” users manage their own files and synchronization like people have been doing for the last 50 years.
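As a toy sketch of that last point (nothing app-specific assumed): if the app just writes its state to a user-chosen directory in a sane format, the user can point that directory at Dropbox, Syncthing, iCloud Drive, or a git repo and the app never has to know:

    import json, pathlib

    def save_state(state: dict, data_dir: str) -> None:
        # data_dir is wherever the user wants it: a Dropbox folder, a Syncthing folder, etc.
        path = pathlib.Path(data_dir).expanduser()
        path.mkdir(parents=True, exist_ok=True)
        (path / "state.json").write_text(json.dumps(state, indent=2), encoding="utf-8")

    def load_state(data_dir: str) -> dict:
        f = pathlib.Path(data_dir).expanduser() / "state.json"
        return json.loads(f.read_text(encoding="utf-8")) if f.exists() else {}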
CGamesPlay 22 hours ago [-]
There are a lot of valid answers to this! One is to use your platform's provided one, like OneDrive or iCloud. Another is to integrate with some other sync platform; Dropbox is a popular target for this. Peer-to-peer is another, although that obviously also comes with limitations. Finally, bring-your-own-sync is a popular choice amongst open-source apps, where you provide a self-hostable sync server.
jen20 6 hours ago [-]
Note that this thread is full of people claiming that using SQLite with iCloud sync is evidence of some conspiracy theory or other!
WD-42 20 hours ago [-]
Check out Aardvark (renamed to Reflection); it's a collaborative note-taking app from the GNOME folks. I think the idea isn't to completely remove cloud infrastructure, but to at least make it optional and/or provide alternatives. For example, this note app works via P2P. blogs.gnome.org/tbernard/2025/06/30/aardvark-summer-2025-update/
piperswe 24 hours ago [-]
Something like Syncthing, perhaps?
dsp_person 21 hours ago [-]
Anyone know of any mobile apps that have done this and bundled their own fork of syncthing under the hood for syncing?
FallCheeta7373 17 hours ago [-]
Practically speaking, it's not really needed for a person going out of their way to set up Syncthing - you can just sync the underlying folder. I do this with Logseq: their syncing subscription is paid, so I just sync the underlying Logseq graph and Markdown files. It's seamless and rarely disturbs me, and it works well in the background, although Android seemingly doesn't respect my background preferences and clears it out of my RAM when I inevitably hit the clear button, but that's solvable by simply rebooting once in a while.
Hard_Space 11 hours ago [-]
For Joplin I use WebDAV with the 10GB of free file storage that comes with Fastmail. So I have easy sync across multiple platforms and form factors, and even substantial notes make little dent in the allowance.
pvh 16 hours ago [-]
Ideally, you would use existing commodity infrastructure but we have found none of it is really super fit for our purposes. Failing that, we have been developing an approach to low-maintenance reusable infrastructure. For now, I would advise running your own but positioning yourself to take advantage of commodity systems as they emerge.
j45 19 hours ago [-]
Syncthing
MajesticHobo2 19 hours ago [-]
You can use FTP and SVN.
cortesoft 16 hours ago [-]
Both of those require a server
rschiavone 21 hours ago [-]
There's a git plugin.
DataDaoDe 24 hours ago [-]
Right now it's WebRTC.
spintin 22 hours ago [-]
[dead]
wim 13 hours ago [-]
A backend can be part of the functionality though, such as for real-time collaboration and syncing. But you can have ownership and longevity guarantees for both the data and the service as long as you can eject [1] from the cloud and switch to self-hosting or back at any time, which is what we do for our notes/tasks IDE.
Totally agree. If you don't mind - what tech stack are you using for your fitness tracking app? I'm particularly curious about how you handle cross-device sync :)
zhec 1 days ago [-]
[flagged]
lurking_swe 22 hours ago [-]
your comment is insane imo. some people talk that way in real life. it's not their fault LLMs were invented.
gerdesj 20 hours ago [-]
Not so insane as it turns out - P is using the language of LLMs because English is a second language for them.
Perhaps you will not take umbrage when I point out that you use full stops (periods) but do not bother with twitching a finger towards your shift key. I will guess you are using a phone/tablet and have to switch keyboard layouts to get something as exotic as a capital letter on the screen.
Qwertious 18 hours ago [-]
The cost of spinning up a bot is ~$0, so it's sensible to be suspicious in public.
tarpit_idea 24 hours ago [-]
haha, 100% real. not a native speaker though, so sometimes i catch myself mimicking that LLM tone from all the exposure to them. appreciate the feedback, will definitely work on developing a more authentic writing style from now on.
but yeah, you're right to be concerned, i'm seeing bots everywhere too.
MichaelOldfield 24 hours ago [-]
Persona: You are a tech enthusiast on the forum Hacker News. Your tone is curious, positive, and slightly formal.
Primary Task:
Read the top comment. Identify the core project or idea the user is passionate about. Generate a brief, supportive reply that does the following:
1. Briefly affirm their main point.
2. Ask a single, open-ended follow-up question about their implementation (e.g., tech stack, architecture, a specific challenge like sync).
Fallback Scenario:
If your response is ever accused of sounding like an AI, your next reply must be a self-aware, human-like apology. The tone should be slightly embarrassed but friendly. Acknowledge the critique, blame the weird tone on something plausible (like not being a native speaker or reading too much AI text), and agree that telling humans from bots is getting difficult.
card_zero 23 hours ago [-]
Also suddenly stop using capital letters for, IDK, added texture?
MichaelOldfield 23 hours ago [-]
Good point. And 1-2 typos and common grammar mistakes.
tarpit_idea 23 hours ago [-]
As an AI language model, I'm deeply hurt by all this mockery. Just kidding, I can't feel pain. Beep bop.
flkenosad 23 hours ago [-]
Amazing :D
gerdesj 20 hours ago [-]
... that should be grammatical mistakes. Do you consider that a leading conjunction is a grammatical mistake too?
Should I emulate your mistakes or someone else's mistakes?
aspenmayer 22 hours ago [-]
It’s also important to remember to accidentally a word here and there!
gerdesj 20 hours ago [-]
... invent is the word you seem to have missed out. Hallucinations present rather than hide.
A person will misspell a word or put in some sort of rubbish. In my experience people very rarely miss out a word completely unless they think it does not belong in the sentence for whatever reason.
I note you got "it's" correct (apostrophe for the contraction: it is -> it's), but the possessive is "its". Normally, possession is indicated with 's.
You also smashed it with a suitable choice of capital letters and finished with an exclamatory flourish!
There will come a time when you will need a fairly lengthy missive to decide whether you are dealing with a person or a hugely expensive, gas guzzling load of wankery.
Today is not that time.
aspenmayer 20 hours ago [-]
That whooshing sound you seemingly don’t hear is the joke passing over your head.
Sorry, my apologies. Maybe I got too suspicious after the whole reddit r/cmv thing.
satvikpendem 22 hours ago [-]
They'd have used — not - if they were an AI.
owebmaster 21 hours ago [-]
Actually, now using - in place of — is the signal that it is LLM-generated text.
noahjk 19 hours ago [-]
The cycle will be complete. I am an over-user of hyphens where em dashes should be used, and my misuse worked out well to differentiate - but now I'm just another LLM, I guess. Well, I actually learned to appreciate en and em dashes after learning more about them since the proliferation of ems, so maybe I can finally switch to them without feeling judged for outsourcing my work?
fragmede 24 hours ago [-]
continuing the conversation by asking a question is now an LLM tell on a 4 sentence comment? I'm sorry but that's inane.
fuzzfactor 5 hours ago [-]
What if you are an old man and more clouds than ever are appearing which deserve a good fist shaking?
Asking for a friend . . .
charcircuit 1 days ago [-]
>you're not monetizing via ads
Yes, you are. You can find tons of purely local apps that monetize themselves with ads.
DataDaoDe 1 days ago [-]
Sure, you could. I'm not - I don't think it's in the spirit of local-first. And I wouldn't pay money for that, but if you or someone else wants to build that kind of software - it's a free world :)
criddell 1 days ago [-]
It’s easy to say you wouldn’t do that, but if it gets to the point where you have an employee helping you out and in a downturn you have to choose between laying them off or pushing an ad to keep paying them one more quarter, you might reconsider.
nofunsir 24 hours ago [-]
No, ads aren't the solution for everything, and in my opinion anything.
earthnail 13 hours ago [-]
You will reconsider this argument when you start publishing your own ads to make people aware of your software.
There are different kinds of ads, but let's be clear that even a Show HN is a form of ad. Some forms of ads are just more appreciated than others.
thaumasiotes 1 days ago [-]
> You can find tons of purely local apps tha[t] monetize themselves with a[d]s.
How do they do that without hitting the internet?
kid64 1 days ago [-]
It's "local first", not "local only".
thaumasiotes 24 hours ago [-]
Sorry, a "purely local app" isn't "local only"?
senko 13 hours ago [-]
Not OP, but no.
IMHO, a fully local app is an app that can run locally with all the functionality, not that it's isolated from everything else.
Browser, email client (running locally on your device such as Mail.app, mutt, Outlook,...), Zed (text editor, runs locally but can check for updates... as can many other modern apps)...
satvikpendem 22 hours ago [-]
You can hardcode ads into each build that don't need Internet access.
kid64 20 hours ago [-]
Well if you're gonna get all accurate on me...
free_bip 23 hours ago [-]
I could be wrong, but I think they're referring to the WinRAR model, where there are occasional "annoyances" that you can either ignore or pay to get rid of.
charcircuit 1 days ago [-]
Point 3 from the article is
>3. The network is optional
Ad SDKs usually allow caching ads for a period of time so that ads can still be shown while the device is temporarily offline.
fud101 15 hours ago [-]
Bro who wants your pointless fitness data? Not even you care that much for that. Just use a notepad ffs.
johannes1234321 9 hours ago [-]
Fitness data tells a lot: your health status, your daily schedule, and, with running/cycling/..., your exact whereabouts. That is quite valuable information.
A notepad also isn't enough to correlate heart rate etc. to specific exercises or to plot it over time.
fud101 8 hours ago [-]
Tell me more. Lol. I just did a 10k run and tracked it with my watch, but tell me how any of that matters to anyone except me (and it doesn't even matter to me what my HR was over that run - though I did use an HRM, mainly to keep myself from over-exertion). I really don't understand what fitness apps are supposed to do; they're possibly the most useless thing ever invented. I wrote my own app in Clojure over a decade ago and used it to track my workouts for a year or two. I never go back and look at a workout more than a week old, maybe two weeks at the most. It simply isn't good data; it is the least valuable data one can generate.
bravesoul2 7 hours ago [-]
Funny. I keep a gym record in a local note on the phone. Not for the data. For the act of recording it. Never look back.
The only useful thing from the Garmin app has been comparing heart rates to a year ago.
fud101 6 hours ago [-]
The act of recording is right. Pen and paper is efficient, and you get a safe way to archive it: once you fill the book up, it can go into storage or the bin. It's foolproof, doesn't have bugs or network timeouts, costs nothing, and gives all the benefits of journaling without the downsides of digital distractions. I hate how every tech enthusiast thinks their addiction to technology is a benefit to humanity at large.
eliasson 5 hours ago [-]
I used to log my weight lifting in an app but I found it really distracting and time consuming so I gave up.
For the last few weeks I have been bringing a small notebook and doing my logging by pen and paper instead. I find it much more relaxing and I only use it to check what weights I used last time, so I don't really miss anything.
Pen and paper is severely underrated today.
echelon 24 hours ago [-]
> I'm sick of everyone trying to come up with a use case to get all my data in everyone's cloud so I have to pay a subscription fee to just make things work.
AI photo and video generation is impractical to run locally.
ComfyUI and Flux exist, but they serve a tiny sliver of the market with very expensive gamer GPUs. And if you wanted to cater to that market, you'd have to support dozens of different SKUs and deal with Python dependency hell. And even then, proficient ComfyUI users are spending hours experimenting and waiting for renders - it's really only a tool for niche artists with extreme patience, such as the ones who build shows for the Las Vegas Sphere. Not your average graphic designers and filmmakers.
I've been wanting local apps and local compute for a long time, but AI at the edge is just so immature and underpowered that we might see the next category of apps only being available via the cloud. And I suspect that these apps will start taking over and dominating much of software, especially if they save time.
Previously I'd only want to edit photos and videos locally, but the cloud offerings are just too powerful. Local cannot seriously compete.
satvikpendem 22 hours ago [-]
But who said anything about AI? Lots of local-first apps neither have nor need any AI whatsoever. And by the way, Topaz Labs has good offerings for editing photos and videos with AI that run locally, and they work great for many use cases (although they're not fully generative like Veo etc., more like upscaling and denoising, which does use generative AI but not like the former).
bigfatkitten 20 hours ago [-]
Most cloud apps have no need for AI either, but companies are pushing it anyway for bullshit marketing reasons, similar to what they did with blockchain a decade ago.
satvikpendem 15 hours ago [-]
Sure, but that's unrelated to my point; it's a non sequitur.
echelon 22 hours ago [-]
I suspect that most content will be generated in the future and that generation will dominate the creative fields, white collar work, and most internet usage.
If that's true, it's a substantial upset to the old paradigms of data and computing.
satvikpendem 22 hours ago [-]
Yes, that is true, but again for apps like a fitness tracker, it is not "content" based. Sure, it might have some AI in the form of chatbots to ask what your diet plan should be based on your current progress, but that's not what you're talking about. In my experience, most local-first apps are like this fitness tracker, utility tools, rather than a means to view content, like TikTok.
echelon 19 hours ago [-]
The vast majority of apps, or at least data consumption, will not fit the shape of "fitness tracker". Budgeting, emails [1], workout routines - those will fall into a non-generative bucket of applications.
I still maintain that in the future, most applications and screen time will fall into a generative AI bucket: creating media, writing code, watching videos, playing games, searching for information, etc. I wouldn't even be surprised if our personal images and videos get somehow subsumed and "enriched" with AI.
[1] Well, email might fall into a non-generative bucket. There are already tools that purport to both read and write your emails for you. I'm not quite sure what to make of those.
satvikpendem 15 hours ago [-]
> or at least data consumption
Good thing I'm not talking about data consumption apps then, as I mentioned in my comment above. Local-first apps specifically are not amenable to data consumption purposes so while you are right on the generative AI part, it's unrelated to the topic of this post.
flkenosad 23 hours ago [-]
> AI photo and video generation is impractical to run locally.
You think it always will be? What can the new iPhone chips do locally?
bananaboy 21 hours ago [-]
Regardless of what hardware capabilities exist, the previous post makes it sound like every application needs AI, which is just not true.
echelon 23 hours ago [-]
> You think it always will be? What can the new iPhone chips do locally?
I suspect we're a decade off from being able to generate Veo 3, Seedance, or Kling 2.1 videos directly on our phones.
This is going to require both new compute paradigms and massively more capable hardware. And by that time who knows what we'll be doing in the data center.
Perhaps the demands of generating real time fully explorable worlds will push more investment into local compute for consumers. Robotics will demand tremendous low latency edge compute, and NVidia has already highlighted it as a major growth and investment opportunity.
samwillis 1 days ago [-]
There is now a great annual Local-first Software conference in Berlin (https://www.localfirstconf.com/) organised by Ink & Switch, and it's spawned a spin-out, Sync Conf, this November in SF (https://syncconf.dev/)
There was a great panel discussion this year with a number of the co-authors of the paper linked, discussing what local-first software is in the context of dev tools and what they have learnt since the original paper. It's very much worth watching: https://youtu.be/86NmEerklTs?si=Kodd7kD39337CTbf
The community is very much settling on "sync" being a component of local-first, but applicable much more widely, along with local-first being a characteristic of end-user software, with dev tools - such as sync engines - being enabling tools but not "local-first" in themselves.
It's an exciting time for the local-first / sync-engine community. We've been working on tools that enable realtime collaborative and async collaborative experiences, and now with the onset of AI the market for this is exploding. Every AI app is inherently multi-user collaborative, with the agents as actors within the system. This requires the tech that the sync-engine community has been working on.
trinsic2 13 hours ago [-]
Thanks for the info. Didn't know there were sync conferences.
Jtsummers 1 days ago [-]
Worth a read, and it's had some very active discussions in the past:
I've always thought that this article overstates the promise of CRDTs with regard to conflict resolution. For toy cases like a TODO list, yes, you can define your operations such that a computer can automatically reconcile conflicts - e.g. you only support "add" and "mark as complete", and if something gets marked as complete twice, that's fine.
But once you get past toy examples, you start wanting to support operations like "edit", and there generally isn't a way to infer the user's intent there. Like, if my cookie recipe starts with 100g of sugar, and I modify it on my phone to use 200g of sugar, and I modify it on my desktop to use 150g of honey instead of 100g of sugar, there are a bunch of ways to reconcile that:
1. Stick with 200g of sugar, drop the 1.5x honey substitution.
2. Stick with 150g of honey, drop the 2x.
3. Merge them - 300g of honey.
4. Merge them - 150g of honey and 50g of sugar.
There's no way for any automated system to infer my intent there. So you've got to either:
1. Ask the user to resolve the conflict. This means you have to build out the whole "resolve this merge conflict for me" UI and the promise of "conflict-free" has not been fulfilled.
2. Arbitrarily choose an option and silently merge. This risks badly surprising the user and losing changes.
3. Arbitrarily choose an option, but expose the fact that you've auto-resolved a conflict and allow the user to manually re-resolve. This requires even more UI work than option 1.
4. Constrain your data model to only allow representing intents that can be deterministically resolved. In practice I think this is too severe of a constraint to allow building anything other than toy apps.
IMO #1 and #3 are the least-bad options, but I don't think they're consistent with the expectations you'd have for CRDTs after reading this article.
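To make the "silently merge" failure mode concrete, here is a toy last-writer-wins register in Python (not any particular CRDT library) - this is roughly what option 2 amounts to: whichever device wrote last wins, and the other edit vanishes with no signal to the user:

    from dataclasses import dataclass

    @dataclass
    class LWWRegister:
        value: str
        timestamp: int = 0

        def set(self, value: str, timestamp: int) -> None:
            self.value, self.timestamp = value, timestamp

        def merge(self, other: "LWWRegister") -> "LWWRegister":
            # "conflict-free" merge: the later write silently wins
            return self if self.timestamp >= other.timestamp else other

    phone = LWWRegister("100g sugar")
    desktop = LWWRegister("100g sugar")

    phone.set("200g sugar", timestamp=10)     # edit made on the phone
    desktop.set("150g honey", timestamp=11)   # concurrent edit on the desktop

    print(phone.merge(desktop).value)  # "150g honey" - the 2x-sugar intent is gone, no conflict surfaced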
Anything with online dependencies will necessarily require ongoing upkeep and ongoing costs. If a system is not local-first (or ideally local-only), it’s not designed for long-term dependability.
Connected appliances and cars have got to be the stupidest bit of engineering from a practical standpoint.
bravesoul2 7 hours ago [-]
Yeah, Dropbox, Apple, etc. provide enough free or paid storage to show you the true cost: circa $10/month for 2TB. Cloudflare lets you host static files pretty much for free, or the cost is a rounding error.
So you could run 1000 local-first apps that sync to a Dropbox for that $10/month in storage. And that storage is full B2C-level, ready to go, not some low-level S3-like primitive: it has auth, it has support, it has programs to sync.
Really, most of the cloud cost is not needed.
api 1 days ago [-]
The entire thing is because of subscription revenue.
It's self-reinforcing, because the companies that get subscription revenue have both more revenue and higher valuations, enabling more fundraising, causing them to beat out companies that do not follow this model. This is why local-first software died.
tikhonj 1 days ago [-]
I remember seeing somebody summarize this as "SaaS is a pricing model" or "SaaS is financialization" and it totally rings true. Compared to normal software pricing, a subscription gives you predictable recurring revenue and a natural sort of price discrimination (people who use your system more, pay more). It's also a psychological thing: folks got anchored on really low up-front prices for software, so paying $2000 for something up-front sounds crazy even if you use it daily for years, but paying $25/month feels reasonable. (See also how much people complain about paying $60 for video games which they play for thousands of hours!)
It's sad because the dynamics and incentives around clear, up-front prices seem generally better than SaaS (more user control, less lock-in), but almost all commercial software morphs into SaaS thanks to a mix of psychology, culture and market dynamics.
There are other advantages to having your software and data managed by somebody else, but they are far less determinative than structural and pricing factors. In a slightly different world, it's not hard to imagine relatively expensive software up-front that comes with a smaller, optional (perhaps even third-party!) subscription service for data storage and syncing. It's a shame that we do not live in that world.
danjl 1 days ago [-]
Correct. SaaS is a business model, not a technical concept. But the real problem is that there is no equivalent business model for selling local-first software. Traditional desktop apps were single-purchase items. Local-first is not, because you just navigate to a website in your browser and, blammo, you get the software. What we need is a way to make money off of local-first software.
Timwi 12 hours ago [-]
> What we need is a way to make money off of local first software.
No, what we need is a way for people to not starve so that they don't have to make money at all and can focus instead on their passion project(s). Cough UBI cough
_heimdall 2 hours ago [-]
I've never understood the end goal of a UBI. If the expectation is that everyone should be able to eat (seems like a noble goal), why obfuscate that by giving people money rather than access to free food?
If we really wanted a system where we deem certain items essential and want everyone to have access to them, it makes no sense to pay for them. Money may still make sense for nonessential or luxury items, but it just gets in the way if the government has to give me money so I can go spend it on the food they actually want me to have.
airstrike 5 hours ago [-]
Just need to find a lamp with a genie inside first.
gffrd 23 hours ago [-]
> there is no equivalent business model for selling local first software.
Sure there is: “$500 upfront or $21/mo for 24 months *”
* if you don't complete your 24 payments, we freeze your license.
HappMacDonald 20 hours ago [-]
So Local-first DRM then?
0x445442 6 hours ago [-]
This is the canonical example I believe. The product is ~35 years old.
I'm not understanding why we have to have a model that replicates SaaS pricing for local-first software?
Obsidian is doing a pretty good job selling sync functionality for their free client, because they have a really good Markdown editor implementation IMHO, with community plug-in support that beats every PKM cloud tool out there that competes with them.
flomo 24 hours ago [-]
It's the missing middle. A manager can just expense $25/mo, while $2000 requires an approval process, which requires outside sales, which means it really costs at least $20,000.
3eb7988a1663 23 hours ago [-]
Ha! If only that were true. I gave up on my effort to buy a one year license for $25 after filling out too many TPS reports. Which is probably part of the design of the system.
api 1 days ago [-]
SaaS is a business model. Cloud is DRM. If you run the software in the cloud it can't be pirated and there is perfect lock-in. Doubly so if the data can't be exported.
Related: I've been incubating an idea for a while that open source, as it presently stands, is largely an ecosystem that exists in support of cloud SaaS. This is quite paradoxical because cloud SaaS is by far the least free model for software -- far, far less free than closed source commercial local software.
seec 23 hours ago [-]
Yes, this is the main reason for doing "cloud", I believe. Otherwise it would make no sense for someone like Adobe to adopt this model, since the software still largely has to run locally for technical reasons.
It's the same thing as subscriptions for movies, like Netflix, except at least in the latter case we can fight back by various means (and it's not a necessity).
The SaaS model is basically a perfect racketeering setup; I think it should be outlawed, at least philosophically. There is no way business is not going to abuse that power, and they have already shown as much...
I agree with your sentiment on Open Source. I think, like many of these types of things, it lives in contradictions. In any case, Linux as it is today couldn't exist without the big commercial players paying quite a bit to get it going.
bboygravity 1 days ago [-]
The root cause of the problem is that it's easier to make personalized stuff with server/backend (?cloud?) than without maybe?
Example: I made a Firefox extension that automatically fills forms using an LLM. It's fully offline except (optionally) the LLM part - optional because it also supports Ollama locally.
Now the issue is that it's way too hard for most people to use: find the LLM to run, acquire it somehow (pay to run it online or download it to run in Ollama), configure your API URL, enter an API key, and save all of your details for form filling locally in text files, which you then have to back up and synchronize to other devices yourself.
The alternative would be: create an account, give money, enter details, and everything is synced and backed up automatically across devices, with an online LLM pre-selected and configured. Ready to go. No messing around with Ollama or OpenRouter, just go.
I don't know how to solve it in a local way that would be as user friendly as the subscription way would be.
Now things like cars and washing machines are a different story :p
tshaddox 1 days ago [-]
> The root cause of the problem is that it's easier to make personalized stuff with server/backend (?cloud?) than without maybe?
That, and also there are real benefits to the end user of having everything persisted in the cloud by default.
goopypoop 20 hours ago [-]
I don't think having to manually sync preferences (or set up an unnecessary LLM) is really "the root cause" of "why local first software died".
okr 1 days ago [-]
Can the LLM not help with setting up the local part? (Sorry, that was just the first thought I had.)
bboygravity 9 hours ago [-]
Not sure why you're downvoted. This would probably work in the future when LLMs are a standard part of every OS, I suppose.
But by then my extension probably also won't be needed anymore as it would likely be integrated in the OS.
seec 24 hours ago [-]
Pretty much greed being a universally destructive force in the world, as usual.
When Apple joined the madness, all hope was lost (that was a long time ago now, sigh).
rshemet 3 hours ago [-]
Interesting to think about the concept of local-first in the age of AI.
Wanting to be able to run AI fully privately and offline is the reason we created Cactus:
Fully open-source, cross-platform & blazing fast; it lets you plug private AI into any app on your phone.
ashdev 1 days ago [-]
This was refreshing to read! More apps should be local-first. If the user does not want to sync their data to cloud, they should have that option.
I've been building the offline-first (or local-first) app Brisqi[0] for a while now; it was designed from the ground up with the offline-first philosophy.
In my view, a local-first app is designed to function completely offline for an indefinite period. The local experience is the foundation, not a fallback and cloud syncing should be a secondary enhancement, not a requirement.
I also don’t consider apps that rely on temporary cache to be offline-first. A true offline-first app should use a local database to persist data. Many apps labeled as “offline-first” are actually just offline-tolerant, they offer limited offline functionality but ultimately depend on reconnecting to the internet.
Building an offline-first app is certainly more challenging than creating an online-only web app. The syncing mechanism must be reliable enough to handle transitions between offline and online states, ensuring that data syncs to the cloud consistently and without loss. I’ve written more about how I approached this in my blog post[1].
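For anyone wondering what "reliable enough to handle transitions" tends to mean in practice, the usual shape (a generic sketch, not Brisqi's actual implementation) is an outbox: every local write also appends to a pending-changes table in the local database, and a background task drains it whenever the network is available:

    import json, sqlite3

    db = sqlite3.connect("app.db")  # the app's local database
    db.execute("CREATE TABLE IF NOT EXISTS outbox "
               "(id INTEGER PRIMARY KEY, op TEXT, synced INTEGER DEFAULT 0)")

    def write_locally(op: dict) -> None:
        # apply the change to local state and queue it for sync in one transaction
        with db:
            # ... apply op to the real tables here ...
            db.execute("INSERT INTO outbox (op) VALUES (?)", (json.dumps(op),))

    def drain_outbox(push_to_server) -> None:
        # called whenever connectivity returns; safe to retry, because an op is only
        # marked synced after the server acknowledges it
        rows = db.execute("SELECT id, op FROM outbox WHERE synced = 0 ORDER BY id").fetchall()
        for row_id, op in rows:
            if push_to_server(json.loads(op)):
                with db:
                    db.execute("UPDATE outbox SET synced = 1 WHERE id = ?", (row_id,))
            else:
                break  # still offline; try again later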
How has it been going? I've been thinking of trying this model but a bit worried about how much harder it would be to make it sustainable as a business
GMoromisato 1 days ago [-]
Personally, I disagree with this approach. This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).
The problems with closed-source software (lack of control, lack of reliability) were solved with a new business model: open source development, which came with new licenses and new ways of getting revenue (maintenance contracts instead of license fees).
In the same way, we need a business model solution to cloud-vendor ills.
Imagine we create standard contracts/licenses that define rights so that users can be confident of their relationship with cloud-vendors. Over time, maybe users would only deal with vendors that had these licenses. The rights would be something like:
* End-of-life contracts: cloud-vendors should contractually spell out what happens if they can't afford to keep the servers running.
* Data portability guarantees: Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.
* Data privacy transparency: Vendors must track/audit all data access and report to the user who/what read their data and when.
I'm sure you can think of a dozen other clauses.
The tricky part is, of course, adoption. What's in it for the cloud-vendors? Why would they adopt this? The major fear of cloud-vendors is, I think, churn. If you're paying lots of money to get people to try your service, you have to make sure they don't churn out, or you'll lose money. Maybe these contracts come only with annual subscription terms. Or maybe the appeal of these contracts is enough for vendors to charge more.
AnthonyMouse 1 days ago [-]
> This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).
Whenever it's possible to solve a business problem or political problem with a technical solution, that's usually a strong approach, because those problems are caused by an adversarial entity and the technical solution is to eliminate the adversarial entity's ability to defect.
Encryption is a great example of this if you are going to use a cloud service. Trying to protect your data with privacy policies and bureaucratic rules is a fool's errand because there are too many perverse incentives. The data is valuable, neither the customer nor the government can easily tell if the company is selling it behind their backs, it's also hard to tell if the provider has cheaped out on security until it's too late, etc.
But if it's encrypted on the client device and you can prove with math that the server has no access to the plaintext, you don't have to worry about any of that.
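As a concrete (if simplified) illustration of that point, in Python with the third-party cryptography package - the key handling here is deliberately naive, and a real app would derive and protect the key properly, but the server still only ever sees ciphertext:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # generated and kept on the client device only
    f = Fernet(key)

    plaintext = b"my private note: 100g sugar, 200g flour"
    blob = f.encrypt(plaintext)   # this opaque blob is all the sync server ever stores

    # later, on any device that holds the key:
    assert f.decrypt(blob) == plaintext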
The trouble is sometimes you want the server to process the data and not just store it, and then the technical solution becomes, use your own servers.
GMoromisato 24 hours ago [-]
I 100% agree, actually. If there were a technical solution, then that's usually a better approach.
For something like data portability--being able to take my data to a different provider--that probably requires a technical solution.
But other problems, like enshittification, can't be solved technically. How do you technically prevent a cloud vendor from changing their pricing?
And you're right that the solution space is constrained by technical limits. If you want to share data with another user, you either need to trust a central authority or use a distributed protocol like blockchain. The former means you need to trust the central provider; the latter means you have to do your own key-management (how much money has been lost by people forgetting the keys to their wallet?)
There is no technical solution that gets you all the benefits of central plus all the benefits of local-first. There will always be trade-offs.
AnthonyMouse 8 hours ago [-]
Listing key management as the thing that makes distributed protocols hard seems like an error. If your stuff is in the cloud, what are you using to access it? Some kind of password, TOTP, etc., which is maybe tied to your email, which itself is tied to some password, TOTP, etc. So what happens if you lose access to your email or whatever they're using for password recovery? You lose all your stuff.
But it's even worse in that case, because that can also happen if they mess something up. Your email account got banned by some capricious bot, or the provider abruptly decided to stop providing the service, and then the service tied to it decided to send you a verification code to the email you don't have access to anymore -- even though you didn't forget your password for either of them. So now you have even more ways to lose all your stuff.
Meanwhile if you were willing to trust some email provider to not screw you and you only needed some way to recover your keys if your computer falls into the sea, you could just email a copy of them to yourself. And then you wouldn't be relying on that provider to have the only means of recovery, because they're still on your device too.
klabb3 7 hours ago [-]
> How do you technically prevent a cloud vendor from changing their pricing?
Through regulating markets to ensure fierce competition - including things like portability, standard APIs, banning egress fees and similar lock in techniques, breaking up infrastructure (DCs and networking) from service providers. In cloud we have 3 vertically integrated mega oligopolies. That’s not a healthy market.
> data portability […] probably requires a technical solution
Yes, formats and APIs are needed for technical reasons, but they already exist (or are fairly trivial to implement) and are not provided - sometimes actively obstructed - for business reasons. IMO interop is predominantly bottlenecked by social/business concerns.
solidsnack9000 17 hours ago [-]
This would make cloud vendors kind of like banks. The cloud vendor is holding a kind of property for the user in the user's account. The user would have clearly defined rights to that property, and the legal ability to call this property back to themselves from the account.
This calling back might amount to taking delivery. In a banking context, that is where the user takes delivery of whatever money and other property is in the account. In the cloud vendor case, this would be the user receiving a big Zip file with all the contents of the account.
Taking delivery is not always practical and is also not always desirable. Another option in a financial context is transferring accounts from one vendor to another: this can take the form of wiring money or sometimes involves a specialized transfer process. Transferring the account is probably way more useful for many cloud services.
This leads us to a hard thing about these services, though: portability. Say we delineate a clear property interest for users in their cloud accounts and we delineate all of their rights. We have some good interests and some good rights; but what does it mean to take delivery of your Facebook friends? What does it mean to transfer your Facebook account from one place to another?
al_borland 1 days ago [-]
Does this really solve the problem? Let's say I'm using a cloud provider for some service I enjoy. They have documents that spell out that if they have to close their doors they will give X months of notice and allow for a data export. Ok, great. Now they decide to shut their doors and honor those agreements. What am I left with? A giant JSON file that is effectively useless unless I decide to write my own app, or some nice stranger does? The thought is there, it's better than nothing, but it's not as good as having a local app that will keep running, potentially for years or decades, after the company shuts their doors or drops support.
GMoromisato 24 hours ago [-]
Data portability is, I think, useful even before the service shuts down. If I'm using some Google cloud-service and I can easily move all my data to a competing service, then there will be competition for my business.
What if cloud platforms were more like brokerage firms? I can move my stocks from UBS to Fidelity by filling out a few forms and everything moves (somewhat) seamlessly.
My data should be the same way. I should be able to move all my data out of Google and move it to Microsoft with a few clicks without losing any documents or even my folder hierarchy. [Disclaimer: Maybe this is possible already and I'm just out of the loop. If so, though, extend to all SaaS vendors and all data.]
al_borland 23 hours ago [-]
This mainly just requires the ability to export, and standard formats. For generic file storage, emails, contacts, calendars, etc, this is largely possible already. Though there are minor incompatibilities based on various implementations or customizations on top of the standard.
The big problem comes into play for new, or more custom types of applications. It takes a while for something to become ubiquitous enough that standard formats are developed to support them.
hodgesrm 1 days ago [-]
> * Data portability guarantees: Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.
This is not practical for data of any size. Prod migrations to a new database take months or even years if you want things to go smoothly. In a crisis you can do it in weeks, but it can be really ugly. That applies even when moving between the same version of an open source database, because there's a lot of variation between the cloud services themselves.
The best solution is to have the data in your own environment to begin with and just unplug. It's possible with bring-your-own-cloud management combined with open source.
My company operates a BYOC data product which means I have an economic interest in this approach. On the other hand I've seen it work, so I know it's possible.
GMoromisato 1 days ago [-]
I'd love to know more about BYOC. Does that apply to the raw data (e.g., the database lives inside the enterprise) or the entire application stack (e.g., the enterprise is effectively self-hosting the cloud)?
It seems like you'd need the latter to truly be immune to cloud-vendor problems. [But I may not understand how it works.]
hodgesrm 5 hours ago [-]
Here's how we do it for analytic systems: data and software. The software services are open source running on Kubernetes. If you don't like the vendor or the vendor goes away, the existing services keep running. You can also maintain them because the stack is open source.
This is different from what the local-first article is describing, which addresses data for individuals. That's a much harder problem to solve at scale.
samwillis 1 days ago [-]
> This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).
I don't think that's quite correct. I think the authors fully acknowledge that the business case for local-first is not completely solved and is a closely related problem. These issues need both a business and a technical solution, and the paper proposes a set of characteristics of what a solution could look like.
It's also incorrect to suggest that local-first is an argument for decentralisation - Martin Kleppmann has explicitly stated that he doesn't think decentralised tech solves these issues in a way that could become mass market. He is a proponent of centralised standardised sync engines that enable the ideals of local-first. See his talk from Local-first conf last year: https://youtu.be/NMq0vncHJvU?si=ilsQqIAncq0sBW95
GMoromisato 1 days ago [-]
I'm sure I'm missing a lot, but the paper is proposing CRDTs (Conflict-free Replicated Data Types) as the way to get all seven checkmarks. That is fundamentally a distributed solution, not a centralized one (since you don't need CRDTs if you have a central server).
And while they spend a lot of time on CRDTs as a technical solution, I didn't see any suggestions for business model solutions.
In fact, if we had a business model solution--particularly one where your data is not tied to a specific cloud-vendor--then decentralization would not be needed.
I get that they are trying to solve multiple problems with CRDTs (such as latency and offline support), but in my experience (we did this with Groove in the early 2000s) the trade-offs are too big for average users.
Tech has improved since then, of course, so maybe it will work this time.
WarOnPrivacy 1 days ago [-]
> End-of-life contracts: cloud-vendors should contractually spell out what happens if they can't afford to keep the servers running.
I'm trying to imagine how this would be enforced when a company shutters and its principals walk away.
necovek 12 hours ago [-]
Putting stuff in escrow is usually the way to go: escrow service is paid upfront (say, always for the next 3 months), and that's the time you've got to pull out your data.
My company does that with a few small vendors we've got for the source code we depend on.
GMoromisato 1 days ago [-]
It's a good question--I am not a lawyer.
But that's the point of contracts, right? When a company shuts down, the contracts become part of the liabilities. E.g., if the contract says "you must pay each customer $1000 if we shut down" then the customers become creditors in a bankruptcy proceeding. It doesn't guarantee that they get all (or any) money, but their interests are negotiated by the bankruptcy judge.
Similarly, I can imagine a contract that says, "if the company shuts down, all our software becomes open source." Again, this would be managed by a bankruptcy judge who would mandate a release instead of allowing the creditors to gain the IP.
Another possibility is for the company to create a legal trust that is funded to keep the servers running (at a minimal level) for some specified amount of time.
bigfatkitten 20 hours ago [-]
No, not at all.
The entire point of Chapter 11 (and similar bankruptcy legislation internationally) is to allow companies to get out of contracts, so that they can restructure the business to hopefully continue on as a going concern.
WarOnPrivacy 1 days ago [-]
> When a company shuts down, the contracts become part of the liabilities.
The asset in the contract is their customer's data; it is becoming stale by the minute. It could be residing in debtor-owned hardware and/or in data centers that are no longer getting their bills paid.
It takes time to get a trustee assigned and I think we need an immediate response - like same day. (NAL but prep'd 7s & 13s)
WarOnPrivacy 1 days ago [-]
(cont. thinking...) One possibility. A 3rd party manages a continually updating data escrow. It'd add some expense and complexity to the going concern.
prmoustache 1 days ago [-]
> Personally, I disagree with this approach. This is trying to solve a business problem (I can't trust cloud-providers)
It is not only a business problem. I stay away from cloud-based services not only because of the subscription model, but also because I want my data to be safe.
When you send data to a cloud service, and that data is not encrypted locally before being sent to the cloud (a rare feature), it is not a question of if but when that data will be pwned.
bigfatkitten 20 hours ago [-]
I have spent the last decade or so working in digital forensics and incident response for a series of well-known SaaS companies.
The experience has made me a big fan of self hosting.
HappMacDonald 20 hours ago [-]
"Trust about whether or not another company will maintain confidentiality" still sounds like a business problem to me (or at least one valid way of perceiving the problem)
And the biggest advantage I see of this perspective over the "technical problem" perspective is that assigning responsibility completely covers the problem space, while "hope that some clever math formula can magic the problem away" does not.
necovek 12 hours ago [-]
Here at HN, I think most people see it differently (me included): having clear math proof of "confidentiality" is usually seen as both cheaper and more trustworthy.
Yes, there might be a breakthrough or a bug in encryption, and unless you've been targeted, you can respond. But we've seen and experienced enough breakdowns in human character (employees spying on customers, stealing data...), government policy, and company behaviour not to trust the complexity and cost (lawyers) of enforcing accountability through policy.
In general you do need both, but if you can only have one, to engineers the technical solution is usually more appealing.
satvikpendem 22 hours ago [-]
> This is trying to solve a business problem (I can't trust cloud-providers)
Not necessarily. I like local-first due to robust syncing via CRDTs, not because I somehow want to avoid cloud providers.
maccard 1 days ago [-]
> Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.
Anecdotally, I've never worked anywhere where the data formats are documented in any way other than a schema in code.
mumbisChungo 1 days ago [-]
A good contract can help you to seek some restitution if wrongdoing is done and you become aware of it and you can prove it. It won't mechanically prevent the wrongdoing from happening.
HappMacDonald 20 hours ago [-]
It can also help to align the incentives of multiple parties to actually care about the same goals.
"Mechanically preventing wrongdoing from happening" can be a bit of a Shangri-La. What Tech can mechanically do is increase the cost of wrongdoing, or temporarily deflect attempts towards easier targets. But that by definition cannot "solve the problem for everyone" as there will always be a lowest hanging fruit remaining somewhere.
What contracts can do is help to reduce the demand for wrongdoing.
Habgdnv 1 days ago [-]
Currently there are laws but not for hosting. Look at the contract of Steam for example or Ubisoft, or anything else - Q: What happens to your game collection if we shut down our servers? A: You own nothing and lose everything, GG!
It's like saying we must protect users' privacy from greedy websites, so we make the bad ones spell out that they use cookies to spy on users - and the result is what we have now with the banners.
GMoromisato 1 days ago [-]
I agree with you! And your point about cookie banners underlines that we can't just rely on regulation (because companies are so good at subverting regulations or outright lobbying their way out of them).
Just as with the open source movement, there needs to be a business model (and don't forget that OSS is a business model, not a technology) that competes with the old way of doing things.
Getting that new business model to work is the hard part, but we did it once with open source and I think we can do it again with cloud infrastructure. But I don't think local-first is the answer--that's just a dead end because normal users will never go with it.
sirjaz 20 hours ago [-]
I've found people want local software and access. This is a major reason why people like mobile more than desktops now, beyond the obvious fact of having it in their pocket. A mobile app gives you more of a private feel than going to a website and entering your info. In addition, to an extent it is kept local-first, due to sync issues.
__MatrixMan__ 20 hours ago [-]
Is it trying to solve a business problem? I think it's trying to solve a more general problem which has nothing to do with business.
It's ok to just solve the problem and let the businesses fail. Predation is healthy for the herd. Capitalism finds a way, we don't have to protect it.
montereynack 1 days ago [-]
Cool to see principles behind this, although I think it’s definitely geared towards the consumer space. Shameless self plug, but related: we’re doing this for industrial assets/industrial data currently (www.sentineldevices.com), where the entire training, analysis and decision-making process happens on customer equipment. We don’t even have any servers they can send data to, our model is explicitly geared on everything happening on-device (so the network principle the article discussed I found really interesting). This is to support use cases in SCADA/industrial automation where you just can’t bring data to the outside world. There’s imo a huge customer base and set of use cases that are just casually ignored by data/AI companies because actually providing a service where the customer/user is is too hard, and they’d prefer to have the data come to them while keeping vendor lock-in. The funny part is, in discussions with customers we actually have to lean in and be very clear on “no this is local, there’s no external connectivity” piece, because they really don’t hear that anywhere and sometimes we have to walk them through it step by step to help them understand that everything is happening locally. It also tends to break the brains of software vendors. I hope local-first software starts taking hold more in the consumer space so we can see people start getting used to it in the industrial space.
spauldo 1 days ago [-]
It doesn't help that all the SCADA vendors are jumping on the cloud wagon and trying to push us all in that direction. "Run your factory from your smartphone!" Great, now I'm one zero-day away from some script kiddie playing around with my pumps.
codybontecou 1 days ago [-]
An exciting space and I'm glad you and your team are working in it.
I looked over your careers page and see that all of your positions are non-remote.
Is this because the constraints of working on local-first software require you to be in person? Or is this primarily a management issue?
monkeyelite 21 hours ago [-]
There is no reason for every application to have its own sync platform. I suspect this framing came out of mobile apps where there is no composability or modularity between programs.
If you really embrace "local first" just use the file system, and the user can choose from many solutions like git, box, etc.
I hate signing up for your sync just as much as for any other SaaS, but it's even more opaque and likely to break.
swsieber 20 hours ago [-]
I agree that not every app needs its own sync engine, but I disagree with your framing that the file system is the universal way to embrace local-first. I have two reasons.
First is that yeah, local first, but I also want concurrency. If it's just local first, you're right, any old sync will do. But I want more than that. I want to not have to think (a la dropbox, being slick). I want my wife and I to be able to make separate edits on our phones when we're in a dead zone.
Second is that sync works a lot better when it has deep knowledge of the data structure and semantics. Git and Box both have significant shortcomings, and both are exacerbated by the desire for concurrency.
monkeyelite 20 hours ago [-]
But this problem isn't going to be solved by every app making its own sync system. Even if there is a magic library you can adopt that does pretty well, you still end up with everyone having their own completely independent hosting solution and sync schedule.
If files are insufficient, what data-structure would make modular sync possible for multiple applications in an OS?
And I’m not suggesting one doesn’t exist - I’m challenging you to present a comprehensive solution, which probably involves the operating system.
> I want my wife and I to be able to make separate edits on our phones when we're in a dead zone.
Files do this.
necovek 13 hours ago [-]
I agree that files solve some rudimentary cases, but they do not even allow simple conflict resolution. E.g. with compressed files, including container formats like OpenOffice documents (text files in a ZIP archive IIRC), it might be simple to apply changes from two sides if they touch distant parts, but syncing full files simply barfs.
Note that this does not even need two users: I hit this problem with a desktop and laptop and self-hosted NextCloud myself.
In general, imagine a filesystem that stored both the raw data (to fail over to) and a per-format event log, maybe even with app-specific events (when a PNG changes, we could record any change as raw bytes, as a generic bitmap operation like "modify pixels at x,y to ...", and as an app-specific log entry like "gimp: apply sharpen filter on polygon area ...").
This would allow the other side to attempt to do the smartest sync it has (if it has a compatible version of gimp, it could decide to apply the filter, otherwise fall back to raw pixel changes if no conflicts, and then fall back to full file contents reconciliation).
Just like MIME handlers get registered, if file systems provided such change logs, some could have very advanced sync systems with this support from "filesystems".
monkeyelite 5 hours ago [-]
The log is just a block of data. All the burden to use the log is on the application so the OS is providing very little general functionality.
I’m also skeptical of logs as a general form of conflict resolution, as you are just hoping the two edits don’t touch the same area. And if they do, then you are left in an invalid state.
You brought up zips. A pile of files seems like a way to divide up data so it has more pieces that are mergeable/diffable.
For example “the folder can contain N files” or “there must be exactly 1 of this file”.
necovek 4 hours ago [-]
Log is certainly a blob of data, but the point is that it should be more granular, with clearer delineation of what are and what aren't conflicting changes: there will always be conflicting changes where no automation can really help.
For zip and other container-type files, you'd have log entries to the tune of "changed contained file foo.png: ...".
Operating systems would need to support some basic operations: container file operations like zip files, basic bitmap image editing, basic text document diffing, structured text diffing (XML, JSON, YAML...), etc...
Applications would provide OS-registered services (like MIME handlers are registered today) that can interpret and produce even more semantic events on top of the existing ones.
The environment could offer an interface during "syncing" when it detects a conflict to resolve it using one of the generic (or not) conflict resolution mechanisms (use local or remote version completely; use incremental delta if there is some generic semantic diff; app-provided capability if present on both sides).
Now, you are right that this can be implemented with this log being a regular file next to the file itself and completely user-space, but you will hit issues if you are not able to tie it nicely to things like fwrite/fsync and similar syscalls.
Obviously, for it to make sense, it needs to be widely accepted as the approach, which is what the local-first movement is trying to achieve with CRDTs.
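To make the idea above concrete, here is a rough TypeScript sketch of what such a per-file change log could look like. Every name in it (SemanticEvent, LogEntry, applyEntry) is invented for illustration; this is not an existing API.

```typescript
// Hypothetical sketch of the per-file change log described above.
type SemanticEvent =
  | { kind: "app"; app: string; event: string }                    // e.g. "gimp: sharpen polygon ..."
  | { kind: "bitmap-op"; op: string; region: [number, number][] }  // generic, OS-understood operation
  | { kind: "raw-bytes"; offset: number; bytes: Uint8Array };      // always present: dumb fallback

interface LogEntry {
  file: string;            // e.g. "photo.png", or "report.odt!content.xml" for container formats
  timestamp: number;
  events: SemanticEvent[]; // most specific first; a syncer applies the richest one it understands
}

// A syncer picking the smartest representation it can apply, falling back as described above.
function pickEvent(entry: LogEntry, canHandleApp: (app: string) => boolean): SemanticEvent {
  for (const ev of entry.events) {
    if (ev.kind === "app" && canHandleApp(ev.app)) return ev; // app-level replay
    if (ev.kind === "bitmap-op") return ev;                   // generic semantic replay
  }
  return entry.events[entry.events.length - 1];               // raw bytes as the last resort
}
```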
swsieber 19 hours ago [-]
Files come with certain restrictions, which don't matter for certain types of applications. But for others they do.
I think it boils down to provenance and concurrency. If we edit the same line of a file, that's a merge conflict, when it really should be simple and something I shouldn't have to bother with. And when we do make the same line edit, I'd love to have provenance on that data.
Granted, those aren't local-first things exactly, but I think there will be apps that want all of that.
hahn-kev 8 hours ago [-]
If the app is designed for it you can use a hybrid approach, where a given "document" is stored in 1 file for each client, and the client merges the changes across all files. That way there's never a change conflict that something like Dropbox needs to handle and it can all be offloaded to the app.
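For illustration, a minimal sketch of that hybrid layout under the assumption of one append-only JSONL file per client that the app merges on load; the file naming and the Op shape are invented.

```typescript
// Each client appends only to its own file, e.g. "todo.alice.jsonl", so a dumb
// file syncer (Dropbox, Syncthing, ...) never has to resolve a write conflict.
import { readdirSync, readFileSync, appendFileSync } from "node:fs";
import { join } from "node:path";

interface Op { ts: number; client: string; action: string; payload: unknown }

function appendOp(dir: string, client: string, action: string, payload: unknown) {
  const op: Op = { ts: Date.now(), client, action, payload };
  appendFileSync(join(dir, `todo.${client}.jsonl`), JSON.stringify(op) + "\n");
}

// The app rebuilds the document by merging every client's log, ordered by timestamp.
// (A real app would want a logical clock rather than wall time.)
function loadMerged(dir: string): Op[] {
  const ops: Op[] = [];
  for (const name of readdirSync(dir).filter(f => f.endsWith(".jsonl"))) {
    for (const line of readFileSync(join(dir, name), "utf8").split("\n")) {
      if (line.trim()) ops.push(JSON.parse(line) as Op);
    }
  }
  return ops.sort((a, b) => a.ts - b.ts || a.client.localeCompare(b.client));
}
```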
hidelooktropic 21 hours ago [-]
I mostly agree with this, but sometimes it's not that simple in practice. I created an app that did exactly this and it resulted in inevitable file conflicts because I couldn't negotiate between the clients when a file should be allowed for editing.
It’s based on NixOS to provide as much as possible out of the box and declaratively: https, SSO, LDAP, backups, ZFS w/ snapshots, etc.
It’s a competitor to cloud hosting because it packages Vaultwarden and Nextcloud to store most of your data. It does provide more services than that though, home assistant for example.
It’s a competitor to YUNoHost but IMO better (or aims to be) because you can use the building blocks provided by SelfHostBlocks to self-host any packages you want. It’s more of a library than a framework.
It’s a competitor to NAS but better because everything is open source.
It still requires the user to be technical, but I’m working on removing that caveat. One of my goals is to make it installable on your hardware without needing Nix or touching the command line.
pastaheld 1 days ago [-]
Love it! I've been thinking about this a lot lately. It's crazy how many great FOSS alternatives are out there to everything – and while they might be relatively easy to install for tech-people ("docker compose up"), they are still out of reach for non-tech people.
Also, so many of these selfhostable apps are web applications with a db, server and frontend, but for a lot of use cases (at least for me personally) you just use it on one machine and don't even need a "hosted" version or any kind of sync to another device. A completely local desktop program would suffice. For example I do personal accounting once a month on my computer – no need to have a web app running 24/7 somewhere else. I want to turn on the program, do my work, and then turn it off. While I can achieve that easily as a developer, most of the people can't.
There seems to be a huge misalignment (for lack of a better word) between the amount of high-quality selfhostable FOSS alternatives and the amount of people that can actually use them. I think we need more projects like yours, where the goal is to close that gap.
I will definitely try to use selfhostblocks for a few things and try to contribute, keep it up!
ibizaman 22 hours ago [-]
My guess as to why most apps are now a web UI on top of a DB is because it’s easy to “install”. SelfHostBlocks is admittedly geared towards a central server serving web apps. Or at least apps with a desktop or mobile component but geared towards synching to a central server.
Feel free to give it a try though, I’d love that! Also feel free to join the Matrix channel if you have any questions or just to get some updates.
pastaheld 12 hours ago [-]
> My guess as to why most apps are now a web UI on top of a DB is because it’s easy to “install”.
That plus web dev is trendy and everybody is learning it. I wouldn't know how to code a proper desktop app right now, I've not done it in years.
I don't want to criticize that or the centralization aspect – there will still be ways to put these centralized things on a PC for example.
ibizaman 7 hours ago [-]
Yes I agree I wouldn’t know how to write a desktop app either.
virgoerns 1 days ago [-]
I love that you include hledger! It's an amazing piece of software, even if a little obscure for people unfamiliar with plaintext accounting!
ibizaman 22 hours ago [-]
I love that application. I plan to make some improvements to the web UI. I’d love to have multiple tabs with saved reports. That would allow my spouse to use it quite easily. I’ll be adding that at some point.
voat 1 days ago [-]
Looks really neat! Thanks for building this
ibizaman 22 hours ago [-]
Thank you for the kind words :)
davepeck 1 days ago [-]
In theory, I love the local-first mode of building. It aligns well with “small tech” philosophy where privacy and data ownership are fundamental.
In practice, it’s hard! You’re effectively responsible for building a sync engine, handling conflict resolution, managing schema migration, etc.
This said, tools for local-first software development seem to have improved in the past couple years. I keep my eye on jazz.tools, electric-sql, and Rocicorp’s Zero. Are there others?
rzzzt 1 days ago [-]
CouchDB on the server and PouchDB on the client was an attempt at making such an environment:
Also some more pondering on local-first application development from a "few" (~10) years back can be found here: https://unhosted.org/
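For anyone who hasn't seen the pairing, a minimal sketch of how PouchDB is typically wired up against a CouchDB replica; the remote URL here is just a placeholder for your own instance.

```typescript
import PouchDB from "pouchdb";

const local = new PouchDB("notes");                        // stored locally (IndexedDB/LevelDB)
const remote = new PouchDB("http://localhost:5984/notes"); // optional CouchDB replica

// Work entirely locally...
await local.put({ _id: "note:2024-06-01", title: "Groceries", body: "milk, eggs" });

// ...and replicate both ways whenever a network happens to be available.
local.sync(remote, { live: true, retry: true })
  .on("change", info => console.log("synced", info.direction))
  .on("error", err => console.error("sync error", err));
```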
jkestner 17 hours ago [-]
Using Couch/Pouch on our current app for this reason. Great to work with. Though we’re not supporting offline-first right away (depends on external services), it’s going to help with resilience and a future escape valve.
EDIT: actually I wanted to point to the "landscape" link (in the top menu) but that URL is quite unergonomic.
davepeck 1 days ago [-]
No, I didn't know about it -- thank you! (EDIT: and the landscape page has lots of libraries I hadn't run across before. Neat.)
jessmartin 23 hours ago [-]
One of the authors of the Landscape here. Glad you found it helpful!
swsieber 20 hours ago [-]
I've been using instantdb in anger for the past month or so for a side project of mine. I'm building a personal budget app.
I should probably write a blog post, but I will say that I investigated PowerSync, ElectricSQL, and LiveStore before. I briefly looked at jazz.tools but wanted something a bit more structured.
I'm pretty impressed so far. I've actually been writing it with Vue and a community library. Permissions were a bit tricky, but once I figured them out it was simple. And I like their magic email login. And I like their dashboard/REPL, but there are a few big changes I would make there to make it less fiddly.
I love that it's open source, and that if I want to, I could self host it.
As for the other options:
- jazz wasn't structured enough
- livestore came off as too fiddly with the event store, but it was appealing. That the dev tools are paywalled was disappointing, but understandable
- electricSQL really only provided half a solution (the read model, not the write model)
- couchDB / pouchDB wasn't structured enough for me, and I wanted better cross document support than was obvious / baked in.
- did not investigate zero really
mikegin 39 minutes ago [-]
+1 for Instant! Been using it and I find it a breeze to work with, definitely filling the exact niche this article was discussing. Sync engines are the future!
stopachka 3 hours ago [-]
[Instant founder]
This brightened my day. If you have any feedback please let us know! We're on the Discord, and answer over at founders@instantdb
zdragnar 1 days ago [-]
I think I saw someone point out automerge not long ago:
Rust and JavaScript implementations, a handful of network strategies. It doesn't come with the free or paid offering that jazz.tools does, but it's pretty nice.
satvikpendem 22 hours ago [-]
I like https://loro.dev personally, also in Rust and JS. Many such CRDTs are being built in Rust these days.
samwillis 1 days ago [-]
Along with the others mentioned, it's worth highlighting Yjs. It's an incredible CRDT toolkit that enables many of the realtime and async collaborative editing experiences you want from local-first software.
I’ve built several apps on yjs and highly recommend it. My only complaint is that storing user data as a CRDT isn’t great for being able to inspect or query the user data server-side (or outside the application). You have to load all the user’s data into memory via the yjs library before you can work with any part of it. There are major benefits to CRDTs but I don’t think this trade-off is worth it for all projects.
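A minimal Yjs sketch of two replicas converging by exchanging opaque updates; it also illustrates the trade-off mentioned above, since reading anything requires instantiating the whole document in memory.

```typescript
import * as Y from "yjs";

const docA = new Y.Doc();
const docB = new Y.Doc();

docA.getText("note").insert(0, "hello");
docB.getText("note").insert(0, "world ");

// Exchange updates over any transport (websocket, WebRTC, a synced file, ...).
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA));
Y.applyUpdate(docA, Y.encodeStateAsUpdate(docB));

// Both replicas converge - but note that reading any of this server-side
// meant loading the full document via the yjs library first.
console.log(docA.getText("note").toString() === docB.getText("note").toString()); // true
```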
3036e4 1 days ago [-]
I use local software and sync files using git or sometimes fossil (both work fine on Android with termux, for instance, for stuff I want to access on my phone). I don't host servers or use any special software that requires syncing data in special ways.
jonotime 22 hours ago [-]
There are a bunch and quite a breadth of different solutions/takes on the problem.
I feel like local-first or offline-first can be seen as something catered to niche users when it's brought up in front of strategy or planning folks — they imagine most people having good, reliable internet all the time. The truth is always more frustrating to account for. It is extremely frustrating to see the falloff in functionality in apps when internet is spotty or slow, which happens a lot. Try doing anything in most apps on the subway in New York (where there isn't service in most tunnels) and you'll feel the pain. Or, try doing anything in a crowd and the cell towers are saturated. Fastmail's client won't show you emails without internet, Balatro hangs while it looks for a cloud save, the list goes on and on.
hodgesrm 4 hours ago [-]
> Local-first apps, on the other hand, have better privacy and security built in at the core.
I love this article, but the section on security raised a lot of questions. What's the model for authorizing access to documents for collaboration? How do you manage keys safely for encrypted data? How do users recover "lost" keys?
Cloud computing models have a lot of security mechanisms built-in. You might not like the model (AWS IAM for example) but at least there's a foundation already in place.
nanomonkey 3 hours ago [-]
Shamir's Secret Sharing lets you split a secret key across n members of your peer group such that any m of them (where m is less than n) can reconstruct it: hand out shares to 5 of your friends, and reproduce the secret later by getting any 3 of them to respond with their shares. No individual peer can reproduce the secret by themselves.
There are other options for key storage, revoking group privileges, etc. It's an extensive topic, but the foundation is there, it just depends on your network and use cases.
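For the curious, a rough sketch of m-of-n Shamir splitting over a prime field. Toy code only: use a vetted library and a cryptographically secure RNG for real keys.

```typescript
const P = 2n ** 127n - 1n; // a Mersenne prime, big enough for a 128-bit secret

const mod = (a: bigint) => ((a % P) + P) % P;

const modPow = (base: bigint, exp: bigint): bigint => {
  let result = 1n;
  base = mod(base);
  for (; exp > 0n; exp >>= 1n) {
    if (exp & 1n) result = mod(result * base);
    base = mod(base * base);
  }
  return result;
};
const inv = (a: bigint) => modPow(a, P - 2n); // Fermat's little theorem

// Split `secret` into n shares, any m of which reconstruct it.
function split(secret: bigint, n: number, m: number): [bigint, bigint][] {
  const coeffs = [secret];
  for (let i = 1; i < m; i++) coeffs.push(BigInt(Math.floor(Math.random() * 1e15))); // NOT a secure RNG
  const shares: [bigint, bigint][] = [];
  for (let x = 1n; x <= BigInt(n); x++) {
    let y = 0n;
    for (let j = coeffs.length - 1; j >= 0; j--) y = mod(y * x + coeffs[j]); // Horner evaluation
    shares.push([x, y]);
  }
  return shares;
}

// Reconstruct from any m distinct shares via Lagrange interpolation at x = 0.
function combine(shares: [bigint, bigint][]): bigint {
  let secret = 0n;
  for (const [xj, yj] of shares) {
    let num = 1n, den = 1n;
    for (const [xk] of shares) {
      if (xk === xj) continue;
      num = mod(num * xk);
      den = mod(den * (xk - xj));
    }
    secret = mod(secret + yj * num * inv(den));
  }
  return secret;
}

// e.g. 3-of-5: any three friends' shares recover the key; fewer reveal nothing useful.
const shares = split(123456789n, 5, 3);
console.log(combine(shares.slice(0, 3)) === 123456789n); // true
```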
dtkav 1 days ago [-]
We need a term for a viable business model to pair with local-first tech.
I've been working on Relay [0] (realtime multiplayer for Obsidian) and we're trying to follow tailscale's approach by separating out the compute/document sync from our auth control plane.
This means that users still subscribe to our service (and help fund development) and do authn/authz through our service, but we can keep their data entirely private (we can't access it).
Relay user here! It’s great. Quite reliable for an early product.
dtkav 23 hours ago [-]
Thanks for the kind words
trinsic2 22 hours ago [-]
Are you requiring a Google account for file/folder-based auth on a per-user basis for a vault? Not too keen on using a 3rd party for this kind of thing.
dtkav 15 hours ago [-]
For our free/individual plan we do use OAuth2 providers (currently only Google is enabled, but considering others), and can support other methods for larger teams (like oidc).
Originally the idea was to keep everything within the Obsidian UI so things like username/password didn't make sense (no password managers in Obsidian).
We initiate the OAuth2 login flow from within Obsidian. I guess we could add an extra click that takes you to our website first and then support more auth methods from there. I don't really want it to feel like a web app though.
I'd love to hear your take. Which login method do you think is both simple and could be coherently used within Obsidian on all platforms?
hemant6488 1 days ago [-]
I've been building exactly this with SoundLeaf [0] - an iOS client for the excellent open-source Audiobookshelf server. No data collection, no third-party servers, just your audiobooks syncing directly with your own instance.
The user-friendliness challenge is real though. Setting up Audiobookshelf [1] is more work than "just sign up," but once you have it running, the local-first client becomes much cleaner to build. No user accounts, no subscription billing, no scaling concerns.
Simple pricing too: buy once, own forever. No monthly fees to access your own audiobooks.
The primary challenge with building local first software is the sync layer. The current 3rd party offerings are not mature. And people have been working on these for a few years. Electric SQL comes to mind.
owebmaster 17 hours ago [-]
As a local-first developer, I'd say the biggest challenge is p2p. Or more specifically, NAT traversal and the need for a TURN server.
mazzystar 17 hours ago [-]
This reminds me of my own painful story: I once made a local photo search app called Queryable that ported OpenAI's CLIP model to iPhone, letting you search your photos with queries like "a black cat sitting on a sofa."
Since it needed to access users' local photo libraries, I didn't want the app to connect to the internet under any circumstances. So I made it a paid app instead of the usual free+in-app purchases model, since the latter requires calling StoreKit which goes online. But because the app had to run the CLIP model, it would crash on lower-performance phones like the iPhone X. Users who paid for it couldn't use it and felt scammed, leading to tons of one-star reviews and angry complaints about their photos being stolen. Eventually I decided to open-source the app, though it never brought me much revenue anyway.
Two years later, Apple started announcing they'd be integrating this exact feature into Apple Intelligence : )
replwoacause 16 hours ago [-]
Couldn’t you have just restricted the app to being installable on only certain iPhone models?
mazzystar 10 hours ago [-]
Apple doesn't allow developers to target specific device models, presumably to prevent discrimination. However, you have two options: 1. Set a minimum iOS version requirement, or 2. Restrict to devices with A12 chips or later. But neither approach can exclude certain problematic device models.
ciju 12 hours ago [-]
We have been building a local-first browser app (PWA) for personal finance, based on double-entry accounting. https://finbodhi.com/
We do use online services like Firebase for auth, and a service to fetch commodity prices etc., but the rest of the data is stored in browser storage (SQLite) and backed up to local disk (and soon Dropbox). We also sync data across devices, always encrypting data in transit.
I think it's the way to go, for most personal data applications.
mirkodrummer 4 hours ago [-]
The demo doesn't work on my iPhone, it keeps spinning forever. Plus, please consider removing the signup friction; if it's really local-first you don't need accounts in the cloud.
bhauer 1 days ago [-]
I've been wanting a computing model I call PAO [1] for a long time. PAO would run personal application "servers" and connect dynamic clients across all devices. PAO is centralized, but centralized per user, and operating at their discretion. It avoids synchronization, complex concurrent data structures, and many other problems associated with alternatives. Its weakness is a need for always-on networks, but that complication seems ever easier to accept as omnipresent networks become realistic.
I recently started using Typst instead of Pandoc->LaTeX.
I held off on playing with Typst for years because I was under the (incorrect) impression that the only way to use it was with their web editor. I'm sure that their editor is completely fine, but I am pretty entrenched in Neovim and Pandoc had been serving me well.
Once I found out that Typst has a command line version that I can use directly, it became more appealing, because I'm pretty sick of cloud shit.
2color 1 days ago [-]
It's a very exciting moment for this movement. A lot of the research and tech for local-first is nearing the point that it's mature, efficient, and packaged into well designed APIs.
Moreover, local-first —at least in theory— enables less infrastructure, which could reignite new indie open source software with less vendor lock-in.
However, despite all my excitement about embracing these ideas in the pursuit of better software, there's one hurdle preventing more widespread adoption amongst developers, and that is the Web platform.
The Web platform lacks building blocks for distributing hashed and/or signed software that isn't tied to origins. In other words, it's hard to decouple web-apps from the same-origin model which requires you set up a domain and serve requests dynamically.
Service Workers and PWAs do help a bit in terms of building offline experiences, but if you want users to download once, and upgrade when they want (and internet is available), you can't use the Web. So you end up breaking out of the browser, and start using Web technologies outside of the browser with better OS functionality, like Electron, React Native, Tauri et al (the https://userandagents.com/ community is doing some cool experiments in this space).
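The "download once, keep working offline" half is at least doable today with a Service Worker cache; a minimal sketch (the asset list is just an example, and with the "webworker" TS lib the events would be typed as ExtendableEvent/FetchEvent):

```typescript
// sw.ts - cache the app shell on install, then serve it cache-first.
const CACHE = "app-v1";
const ASSETS = ["/", "/index.html", "/app.js", "/style.css"]; // example asset list

self.addEventListener("install", (event: any) => {
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then(cached => cached ?? fetch(event.request)),
  );
});
```

What it doesn't give you is the versioned, signed, origin-independent distribution the parent comment is asking for; the app is still tied to its origin.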
sirjaz 20 hours ago [-]
We need to get back to apps rather than webapps. The hardware compatibility issues of the past are basically gone, and there are three major OS types, two of which can run each other's apps.
HappMacDonald 19 hours ago [-]
Perhaps, but then how will they be authored? In what language and with what GUI toolkit?
I view everyone flocking around Electron as proof of a failure on this front.
owebmaster 17 hours ago [-]
Pretty much the opposite. Local-first makes web apps feel just like apps, without the native-apps security risks.
chrisweekly 1 days ago [-]
> "we have gone further than other projects down the path towards production-ready local-first applications based on CRDTs"
This seems like a bold claim, but IMHO Ink & Switch have earned their solid reputation and it wouldn't surprise me if it's true. I agree w/ their analysis and am philosophically aligned w/ their user-centric worldview. So who's going to build "Firebase for CRDTs"?
packetlost 1 days ago [-]
> Firebase for CRDTs
Do you actually need anything special for CRDTs over a normal database? My understanding is the actual CRDT part is done "client side"
chrisweekly 1 days ago [-]
I was just referring to the posted article's assertion that "Firebase for CRDTs" is a huge opportunity. I think I agree w the authors that a well-architected CRDT solution for local-first apps requires capabilities not currently provided by Firebase or any other vendor. But I'm no expert.
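To the grandparent's point, the merge logic does live client-side; a "Firebase for CRDTs" mostly needs to store and relay opaque state, plus handle auth and compaction. A toy grow-only counter shows the shape of it:

```typescript
// A grow-only counter CRDT, merged entirely on the client.
type GCounter = Record<string, number>; // per-replica counts

const increment = (c: GCounter, replica: string): GCounter =>
  ({ ...c, [replica]: (c[replica] ?? 0) + 1 });

// Merge is commutative, associative and idempotent, so replicas can
// exchange state in any order and still converge.
const merge = (a: GCounter, b: GCounter): GCounter => {
  const out: GCounter = { ...a };
  for (const [k, v] of Object.entries(b)) out[k] = Math.max(out[k] ?? 0, v);
  return out;
};

const value = (c: GCounter) => Object.values(c).reduce((s, v) => s + v, 0);
```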
Incipient 1 days ago [-]
The data part aside, and specifically on the platform/functionality side - these cloud/large products unfortunately do offer more powerful/advanced features, or convenience. Be it cloud multi-device functionality that makes moving around and collaborating seamless, or enterprise products like Snowflake and Fabric that offer all sorts of things over a standard MSSQL DB.
I'm personally very against vendor lock in, but there is some value to them.
coffeecoders 1 days ago [-]
Lately, I have been following this approach and moving towards local-first software. I like simple software with barebones features.
- Search: Whoogle is kinda dead. Need an alternative.
puttycat 7 hours ago [-]
For passwords: Enpass is also a wonderful local-first password manager (with optional LAN or cloud sync options)
drpixie 19 hours ago [-]
Remember when the justification for cloud was "Your work is not trapped on one device". Well, turns out your cloud data is trapped on one device, AND it's not under your control.
kristianc 24 hours ago [-]
The old model—a one-time purchase, local install, full user control—worked because devs could sell boxed software at scale. Now, that model collapses unless someone’s willing to either undervalue their own labour, or treat the software like a public good, absorbing the long tail of maintenance with no recurring income.
The article posits it as though subscription software is something which has been sneaked in on us. But users today expect things like instant updates, sync across devices, collaboration, and constant bug fixes and patches - none of which come easily if you're only willing to pay for the system once.
OjotCewIo 23 hours ago [-]
> as though subscription software is something which has been sneaked in on us
Oh but it has (IMO).
> users today expect things like instant updates [...] constant bug fixes and patches
Nah, this is in reverse. With boxed software, the developer had to deliver an essentially bug-free product. Now, with easy updates technically possible, the developers have gone complacent, and deliver shit. That is why users expect bugfixes instantly. (And any enlightened user abhors unrequested features, as there are no features without regressions, and who wants regressions in any serious application?) The only tolerable online updates are security fixes.
> sync across devices, collaboration
This is a valid expectation, but its execution has been a train-wreck. Research, design and implementation should start with end-to-end encryption; the network architecture should be peer-to-peer (mesh, not centralized). What do we get instead? More centralization of control than ever, and less privacy and ownership than ever.
kristianc 23 hours ago [-]
Generally that's not how I remember it - third party software on the Mac at least got some kind of a beach-head because Windows software was full of bugs, crashes, corrupted files, drivers that never worked, and patch CDs mailed to enterprise customers like they were firmware apologies. Own your own software, taken to its logical endpoint, was a shareware nightmare.
threetonesun 22 hours ago [-]
The old model of boxed updates is still in use by some companies today, JetBrains comes to mind. In either case you tuck major new features in a new major version or rolling yearly releases and sell the customer a license to the software that gets a year of updates. In a similar vein many apps I use on my Mac have monthly subscriptions but cancelling them limits their use to essentially one device, but doesn't remove the application or my access to the data.
flkenosad 23 hours ago [-]
> treat the software like a public good, absorbing the long tail of maintenance with no recurring income.
Good point. Governments would do this if they really worked "for the people"
pcollins123 10 hours ago [-]
100% agree! I built Paisley (because it is the opposite of Plaid) to host your personal finances locally, and it is 100% open source. Paisley pulls data from your financial institutions by scraping balances and importing CSV exports, storing everything locally in a simple SQLite database.
What are the top web local-first frameworks worth checking out these days? I’ve heard of LiveStore, TanStack DB with Electric, and Zero. Any others that are easy to use and flexible? The use case is multiplayer apps and maybe games. Thanks!
goodthink 18 hours ago [-]
multisynq.io
Formerly croquet.io. An Alan Kay team project. Dead simple. Synchronized execution; synchronized data comes along for free.
sygned 1 days ago [-]
I've made a local-first, end-to-end encrypted, auto-syncing bookmark extension that doesn't milk your data in any way. It's 100% private; I don't even use Google Analytics on my website. Some of the reasons why I've put work into this:
- because I could not find something similar that doesn't milk and own my data
- to never lose a bookmark again
- to have my bookmark data encrypted in the cloud
- to have private history
- to have some extra time saving features in the extension that are for unknown reason rare to find
- more learning and experience (it's actually quite complex to build this)
After about 4 years of using it daily on every pc I own, I found out it's a pain for me and my family when it is not installed on a browser. I thought; if it's useful for us, it might be useful for others too! So, I decided to make it available by subscription for a small fee to cover the server and other costs. I'm not really into marketing, so almost no one knows it exists. You can find it on markbook.io.
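Not markbook.io's actual implementation, but for anyone wondering what client-side encryption before sync can look like, the standard Web Crypto API covers the basics; a sketch:

```typescript
// Encrypt each record on the client so the sync server only ever sees ciphertext.
async function encryptForSync(plaintext: string, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext: new Uint8Array(ciphertext) };  // upload both; the key stays local
}

const key = await crypto.subtle.generateKey(
  { name: "AES-GCM", length: 256 },
  false, // not extractable: the key never leaves this device unwrapped
  ["encrypt", "decrypt"],
);
```

The hard part (as the thread above notes) is not the encryption itself but getting that key onto a second device and recovering it when it's lost.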
neilv 20 hours ago [-]
Skimming the article, it seems to touch on a lot of the right points, but the motivating first paragraph seems weak:
> Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.
"Apple pie might be tasty and nutritious and exactly what you want, but, theoretically, apple pie could burst into flames someday, and take your favorite pie-eating bib with it.
arendtio 23 hours ago [-]
Regarding the no-spinners: I think it is the wrong approach to argue that just because you have data locally, you don't need any spinners.
Whether you need a spinner or not should be decided by the User Experience (e.g., when the user has to wait for more than 100ms, show a spinner), and not by the location of the data. I am a big fan of local-first apps and enjoy building them myself. However, sometimes your app takes a moment to load. With local-first, you eliminate the network as a source of delays, but there are other factors as well, such as large data sets or complex algorithms.
For example, when you have a project planning software and want to plan 100 work packages with multiple resource combinations in an optimal way, depending on the algorithm, this can take some time. In that case, a spinner or a progress bar is a good thing.
d1sxeyes 22 hours ago [-]
Agreed. No loading spinners is a good goal, but processing spinners might be unavoidable.
samtho 22 hours ago [-]
I didn’t get the impression that the author is advocating for removing spinners as a UI concept; rather it’s just being used as shorthand for, “you should not need to send and load the data to and from elsewhere while you are working.”
arendtio 13 hours ago [-]
Agreed, my comment was meant to provoke exactly that conclusion ;-)
JFingleton 22 hours ago [-]
A properly designed app would leverage multi threading to place any long running jobs in the background, allowing the user to carry on with other tasks.
Spinners should not exist in a local first app.
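In browser terms that usually means a Web Worker; a sketch where the heavy planning job runs off the UI thread and the UI only shows a progress bar. Here updateProgressBar, renderPlan, loadPackages and planner.ts are hypothetical app pieces, not anything from the article.

```typescript
declare function updateProgressBar(done: number, total: number): void; // hypothetical UI hooks
declare function renderPlan(plan: unknown): void;
declare function loadPackages(): unknown[];

// main thread: hand the expensive scheduling work to a Worker.
const worker = new Worker(new URL("./planner.ts", import.meta.url), { type: "module" });

worker.onmessage = (e: MessageEvent) => {
  if (e.data.type === "progress") updateProgressBar(e.data.done, e.data.total);
  if (e.data.type === "result") renderPlan(e.data.plan);
};
worker.postMessage({ type: "plan", packages: loadPackages() });

// planner.ts (the worker) would crunch the numbers and report progress as it goes:
// self.onmessage = ({ data }) => {
//   const plan: unknown[] = [];
//   data.packages.forEach((pkg: unknown, i: number) => {
//     plan.push(schedule(pkg)); // the expensive part
//     self.postMessage({ type: "progress", done: i + 1, total: data.packages.length });
//   });
//   self.postMessage({ type: "result", plan });
// };
```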
arendtio 13 hours ago [-]
You are aware that 'local-first' does not mean 'no-network'. Having a sync mechanism that runs in the background without user notification can be quite disconcerting.
I mean, I did it, I built an app with a transparent background sync. Then I added a special page, 'sync center'.
In reality, mobile devices don't always have perfect network connections. Therefore, when the user is unsure whether the device is in sync or if the sync is in progress but encounters an obstacle, they might perceive the app as unreliable.
Banning spinners is dogmatic, not user-centric.
alun 12 hours ago [-]
I love this idea of local-first software, but from a business point of view there's unfortunately little incentive to adopt it, since it's nowhere near as profitable as SaaS. That, in my opinion, is the biggest bottleneck to worldwide adoption right now.
sunshine-o 24 hours ago [-]
Most of that stuff was very much over engineered in the last two decades.
The backend for my personal notes, tasks, bookmarks, calendar and feeds is a set of files in directories synced with Syncthing across devices.
I ended there after going from one app to another and being tired of all this.
It is self hosted with no server backend (beyond a Syncthing on a NAS or VPS, optional). It is very reliable and works without Internet connection.
I could have put everything in sqlite too and sync it one way or another, but it seemed already too complicated for my requirements.
I can't share it beyond my close relatives but I had the same problem with people using Google or Microsoft before.
kristianp 10 hours ago [-]
People's personal computers, even their tablets and phones are so powerful, they can fulfill most use cases (except AI), especially if the application is reasonably efficient.
neon_me 1 days ago [-]
100%! Not only local-first. But also private, zero/minimal dependency, open source and environment agnostic!
If there is anyone interested in working on such projects - let's talk! We can't leave our future to greedy surveillance zealots.
danjl 1 days ago [-]
Goal #2 (your data is not trapped on a single device) is the hard bit, especially combined with goal #3 (the network is optional). For #2 to be true, the network is *not* optional for the developer; it is required. Thus the entire complexity of building a distributed app, especially one without a centralized server - which is particularly difficult even with modern local-first database tools - greatly increases the complexity of writing this type of software compared to either traditional desktop apps or cloud apps.
1 days ago [-]
Existenceblinks 1 days ago [-]
Tried to adopt this last month at work; it failed.
E.g. the mentioned Automerge has poor docs https://automerge.org/docs/reference/library_initialization/... which left a lot of questions open. It seems backend-agnostic, but we had to figure out ourselves how to store and how to broadcast changes.
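For what it's worth, the core @automerge/automerge API is deliberately storage- and transport-agnostic, which is exactly the part left to you; a minimal sketch of that split:

```typescript
import * as Automerge from "@automerge/automerge";

type Todo = { items: string[] };

let docA = Automerge.change(Automerge.init<Todo>(), d => { d.items = ["write docs"]; });

// "Storage": save/load an opaque byte array however you like (file, SQLite blob, ...).
const bytes = Automerge.save(docA);
let docB = Automerge.load<Todo>(bytes);

// "Broadcast": merge whole documents (or exchange incremental changes) over any channel.
docB = Automerge.change(docB, d => { d.items.push("wire up broadcast"); });
docA = Automerge.merge(docA, docB);
```

The automerge-repo layer is meant to package up the storage and networking choices, but as the parent says, the docs leave a lot of that to the reader.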
jes5199 23 hours ago [-]
yeah I tried to build a project on Automerge but I ended up switching to Yjs, it seems more mature.
gerdesj 20 hours ago [-]
Nextcloud with a few addons - all open source - gets you feature parity with all of that lot.
NC itself gets you file sync and webdav etc. An add on gets you the webby version of LibreOffice. You can bolt on AI addons to classify and tag your images/photos and with a bit more effort, your docs too.
It's properly local first.
thenthenthen 1 days ago [-]
Didn't this already happen? The internet died 20 years ago. Now it is just ‘somewhat’ interconnected intranets with their own local legislation?
jonotime 22 hours ago [-]
Awesome to see this getting more coverage. I am very interested in local-first and I am working on several progressive web apps based around it. One app depends on file sync, not database sync, and the best I have found is remoteStorage.js. It's not perfect, but it's very much the missing piece I was often looking for.
1 days ago [-]
goodthink 17 hours ago [-]
Synchronize execution. Not data.
https://multisynq.io
Synchronization of the data is implicit.
NO centralized anything.
miladyincontrol 1 days ago [-]
In a world of owning nothing and paying subscriptions for everything, owning your data and using software that is either yours or libre is 'rebellion' to many a service provider.
It's not "local-first" or some sort of cloud-diet trend; it should simply be the norm.
OjotCewIo 24 hours ago [-]
Right. I don't even understand why this article had to be this verbose. It's not like we need to be "convinced" that local is better. Everybody who values privacy and independence knows already. But this stuff is unimplementable -- we suffer from the cloud disease because it's immensely profitable for the cloud providers and cloud-based app providers to enslave us, and to bleed us out. Their whole point is locking us in.
"Sharing models" are totally irrelevant until the concept of self-determination is tolerated by the powerful (and they will never tolerate it). Accessing my data from multiple devices is totally secondary; I don't trust mobile endpoints to access my remote data in the first place.
ChiMan 18 hours ago [-]
The speed alone is a sufficient reason for a local-first approach. The latency of any cloud software I’ve ever used is like constant sand in the gears of thinking. Although taking supplements that slow my thinking—essentially natural downers—does improve my experience with such software, the improved experience comes at the expense of IQ. Basically, you need to be a little slow and dumb for the software to work as intended.
This is nuts. Computers are supposed to enhance and enable thinking, not make you stupid. In this sense, cloud software is possibly the biggest fraud ever perpetrated on the paying, computer-using public.
For the love of God, please bring back my late 1990s and early 2000s brain-boosting computer experience.
alganet 24 hours ago [-]
Offline-first, now with CRDTs, and a brand new name!
bilsbie 19 hours ago [-]
One compromise could be to host the software but also offer the option for self hosting.
owebmaster 17 hours ago [-]
Local-first apps should not need hosting.
bilsbie 7 hours ago [-]
I mean if local-first isn’t possible. I’m more comfortable with web-based development; plus, don’t you need to apply for all kinds of certificates to be “allowed” to run on Windows and Mac these days?
owebmaster 6 hours ago [-]
> don’t you need to apply for all kinds of certificates to be “allowed” to run on windows and Mac these days?
Not for a PWA.
rossant 23 hours ago [-]
That was published 6 years ago. What's the state of the art of local-first software technology in 2025?
jumploops 1 days ago [-]
One thing I’m personally excited about is the democratization of software via LLMs.
Unfortunately, if you go to ChatGPT and ask it to build a website/app, it immediately points the unknowing user towards a bunch of cloud-based tools like Fly.io, Firebase, Supabase, etc.
Getting a user to install a local DB and a service to run their app (god forbid, updating said service), is a challenge that’s complex, even for developers (hence the prevalence of containers).
It will take some time (i.e. pre-training runs), but this is a future I believe is worth fighting for.
satvikpendem 22 hours ago [-]
> Unfortunately, if you go to ChatGPT and ask it to build a website/app, it immediately points the unknowing user towards a bunch of cloud-based tools like Fly.io, Firebase, Supabase, etc.
Not sure where your experience is coming from but when I asked an LLM, Claude to be more precise, it referred me to local options first, such as SQLite. It didn't consider cloud platforms at all until I had asked, presumably because it can understand local code and data (it can query it directly and get back results) but cannot understand the context of what's in the cloud unless you configure it properly and give it the env variables to query said data.
jumploops 21 hours ago [-]
What was your prompt?
In my experience it’s great at utilizing local storage and SQLite, if you ask it to.
I just asked the ChatGPT web client (4o, as that’s what most non-developers might default to):
> Can you build me a website for my photos
And it immediately started suggesting Wordpress, Wix, Squarespace, etc.
Specifically, this was section 4 of the various questions it asked me:
> 4. Tech Preference (optional)
> - Do you want this as a static HTML site, WordPress, or built with something like React, Next.js, or Wix/Squarespace?
> - Do you need help hosting it (e.g., using Netlify, Vercel, or shared hosting)?
As a non-programmer, I likely wouldn’t understand half those words, and the section is marked optional.
If I follow the “default path” I’m quickly forking over a credit card and uploading my pictures of dogs/family/rocks to the cloud.
moffkalast 1 days ago [-]
Local LLMs are even more amazing in concept, all of the world's knowledge and someone to guide you through learning it without needing anything but electricity (and a hilariously expensive inference rig) to run it.
I would be surprised if in a decade we won't have local models that are an order of magnitude better than current cloud offerings while being smaller and faster, and affordable ASICs to run them. That'll be the first real challenger to the internet's current position as "the" place for everything. The more the web gets enshittified and commercialized and ad-ridden, the more people will flock to this sort of option.
AIs like GPT being non-local is one of my biggest issues with it.
hkt 1 days ago [-]
Self hosting (which is often adjacent to local-first software) is fine. I've done it for years.
But it is a nightmare when it goes wrong: the conclusion I've reached is that it is out of reach for regular people who don't want the Byzantine support load that can accompany something going wrong. They want turnkey. They want simple. They aren't interested in operating services, they're interested in using them.
The FLOSS model of self hosting doesn't really offer a reliable way of getting this: most businesses operating this way are undercapitalised and have little hope of ever being any other way. Many are just hobbies. There are a few exceptions, but they're rare and fundamentally the possibility of needing support still exists.
What is needed, imo, is to leverage the power of centralised, professional operations and development, but to govern it democratically. This means cooperatives where users are active participants in governance alongside employees.
I've done a little work towards this myself, in the form of a not-yet-seen-the-light-of-day project.
What I'd love to see is a set of developers and operators actually getting paid for their work and users getting a better deal in terms of cost, service, and privacy, on their own (aggregate) terms. Honestly, I'd love to be one of them.
Does anyone think this has legs to the same extent as local-first or self hosting? Curious to know people's responses.
ibizaman 24 hours ago [-]
This is the business model I want to have: I work on a stack of fully open source software and package them in a turn-key server that you own. You can use it on your own for free if you’re knowledgeable and I offer a subscription where I’m the sysadmin of the box you own and that I built for you. I do the maintenance, the updates, etc. There’s no lock-in because you can stop the subscription anytime or even just pick another sysadmin that would know the stack. The only reason you’d keep me around would be that the service I offer is pretty damn good. Would something like that appeal to you?
mxuribe 1 days ago [-]
I was about to suggest that a better, more open, and fair form of capitalism would need to be used as a tool...but then, re-reading your comment - "...leverage the power of centralised, professional operations and development, but to govern it democratically..." - i think you better encapsulate what i meant to convey. :-)
That being said, yes, i do believe that *in the near/upcoming future* local-first, self-hosting, and i will add, fairer open source vendors will work! Well, at least, i hope so! I say that because Europe's recent desire to pivot away from the big U.S. tech companies, and towards more digital sovereignty - in my opinion - lays the foundation for an ecosystem that could sustain self-hosting, etc. The more that Europe is able to pivot away from big tech, the more possibility exists for more and varied non-big-tech vendors to manifest... and the more that Europe adopts open source, the more the possibility that usage and expertise of self-hosting grows... plus, for those who do not know how to, or simply do not wish to, manage services themselves - well, in time i think Europe will have fostered a vast array of vendors who can provide such open source digital services, and get paid a fair cost for providing fair value/services, etc. ...and, by the way, i say this all as a person biased in favor of open source AS WELL AS being an American. :-)
OjotCewIo 24 hours ago [-]
> What is needed, imo, is to leverage the power of centralised, professional operations and development, but to govern it democratically. This means cooperatives where users are active participants in governance alongside employees.
Utopia. Unattainable. Self-determination of the individual has been consistently persecuted under all societal arrangements; communism and capitalism equally hate a citizen that wants to remain independent and self-sufficient.
didgetmaster 1 days ago [-]
Databases like Postgres can be run locally or as part of some kind of managed service in the cloud. Anyone know of recent stats that show the percentage of databases that are managed locally vs by some cloud service?
lutusp 1 days ago [-]
Complete agreement. Here's a brief, practical action plan for Windows users:
* Download all your data from Microsoft's "OneDrive" cloud storage, which if not disabled, is the default storage method in a new Windows install.
* Verify that all your files are now stored locally.
* Open the OneDrive app, click the gear icon, go to "Settings" -> "Account" -> "Unlink this PC", then confirm with "Unlink account".
* Remove Microsoft's OneDrive app from your system -- full removal is the only way to prevent perpetual harassment and reactivation. Go to "Apps" -> "Apps & features" (or "Installed apps" on Windows 11) -> "Microsoft OneDrive", right-click, "Uninstall."
* Optional extra step: cancel your Microsoft 365 subscription and install LibreOffice (free, open-source).
Remember this -- cloud storage only has advantages for Microsoft and law enforcement (which have a number of easy ways to gain access to your documents compared to local storage). For a Windows user, cloud storage is the ultimate Dark Pattern.
captainregex 22 hours ago [-]
yes but think of all those poor shareholders with unmaximized value you heartless man!
How about redundancy in general. Not local first, not cloud first, but "anything can be first and last". That's how the "cloud" works in the first place. Redundancy. Mesh networks as well.
cyanydeez 1 days ago [-]
Local-first almost equates to both privacy protection and a public software good.
Essentially antithetical to capitalism, especially America's toxic late stage subscription based enshittification.
Which means it's typically a labor of love, or a government org with a long-term understanding of Software as Infrastructure (as opposed to SaaS).
bigyabai 1 days ago [-]
"Local first" is neither equivalent to privacy protection or public software good. Many businesses sell local-first software that still contains remote backdoors[0] you cannot control. And it most certainly doesn't ensure "public software good" when there is zero obligation to improve the upstream or empower users to seek alternatives.
I would sooner trust a GPL-licensed remote software program than store a kilobyte of personally identifying information in a proprietary "local first" system.
I think you mean antithetical to corrupted conflict-of-interest capitalism.
Conflict-of-interest transactions have hidden or coercive impact, lined up in favor of the party with stronger leverage. Examples include un-asked and unwanted surveillance of data or activity, coercive use of information, vendor lock in, unwanted artificial product/service dependencies, insertion of unwanted interaction (ads), ...
None of that is inherent to capitalism. They clearly violate the spirit of capitalism, free trade, etc.
It is providers taking advantage of customers' lack of leverage and knowledge to extract value that does not reflect the plain transaction actually desired by customers. Done legally but often with surreptitious disclosure or dark-pattern permissions, borderline legally where customers would incur great costs to identify and protest, or plain old illegally but in a hidden manner, with a massive legal budget as a moat against accountability.
It is tragic that the current generation of Silicon Valley and VC firms have embraced conflict of interest based business models. Due to the amounts of money that scaling "small" conflicts can make. Despite the great damage that we now know scaling up "small" conflicts can do.
That was not always the case.
nicoburns 1 days ago [-]
The problem with our current system of capitalism is that it causes capital to accumulate. This leads to less competition, fewer checks and balances, and undermines the whole "wisdom of the crowd" mechanism that capitalism is premised on.
If we want a functioning market based system then we need to explicitly correct for this by aggressively taxing the wealthiest entities (individuals and companies) in our society to bring things closer to a level playing field.
immibis 22 hours ago [-]
"corrupted conflict-of-interest capitalism" is just capitalism.
Free trade is antithetical to capitalism. Free trade means everyone is on a level playing field, but capitalism means those with more capital are above the rest. These are obviously not compatible.
ndr 1 days ago [-]
It might be antithetical to rent seeking at best, but capitalism?
This is the model I want from 90% of the software out there, just give me a reasonable price to buy it, make the product good, and don't marry it to the cloud so much that its unusable w/out it.
There are also a lot of added benefits to this model in general beyond the data privacy (most are mentioned in the article), but not all the problems are solved here. This is a big space that still needs a lot of tooling to make things really easy going but the tech to do it is there.
Finally, the best part (IMHO) about local-first software is it brings back a much healthier incentive structure - you're not monetizing via ads or tracking users or maxing "engagement" - you're just building a product and getting paid for how good it is. To me it feels like its software that actually serves the user.
What is sad is that they used to be local files first note app and then they moved to sqlite citing some sync and performance issues.
They're still a local-first note application. It's just slightly harder for you to edit your notes externally, and not even by that much - it's very easy to directly query (read and write) SQLite databases, and if you really cared you could have made a script to grab a note, export it to a temporary text file, allow you to edit it, then update the SQLite database.
> I can't version controlled backup and sync those files
You absolutely can - you can dump SQLite databases to text files that contain SQL queries that will restore the database that you can then backup and sync: https://stackoverflow.com/questions/75675/how-to-dump-the-da...
> then they moved to sqlite citing some sync and performance issues
Yes, that's because "plain text" files are bad for performance and harder to sync correctly. For people who (1) have over a hundred thousand notes they want to keep (like me) and (2) want maximum confidence that they're not going to lose years worth of work, that's incredibly important.
The devs made the right choice. You can always write scripts to interface with a SQLite database with an external editor. You can't take plain text files and magically make them as fast and durable as a database.
Bear devs advise against doing that: "Generally speaking, it is safe to access the database for reading only"
https://bear.app/faq/where-are-bears-notes-located/
> It's just slightly harder for you to edit your notes externally
Yup, just slightly harder! Very slightly. A difference of 3.75 picoseconds. Couldn't agree more.
> it's very easy to directly query
Right!
> and if you really cared..have made a script
And designed a nuclear reactor while I was at it, isn't it?
> The devs made the right choice
Yessss!! Finally.
Ffs!!!
Claude can one shot this.
Also interesting to note that you couldn't actually muster a coherent response to any of my points and just had to make ad hominem attacks and emotional outbursts.
And as for sqlite being local first - it's not the same. It's the nature of the data a plain text note taking app deals with that neuters that very idea of those plain text notes by storing in a local sqlite db.
It's possibly the most ubiquitous piece of open source software ever (far more common than Markdown), and your notes exist in fully readable text form inside it
Bear being proprietary is the real threat of lock-in
This is just unbelievable! Fucking pitchforks are out literally!
I am dealing with plain text notes and you all want me to write sql queries and scripts to access those fucking text files?
Are you all (these few people who just jumped in on the subthread) pulling some sort of prank or so?
While this sounds like a good deal, with this approach:
- You have to charge the total cost of the subscription at once (1y or 2y),
- You still have to keep servers running for syncing, and you also have to think about cases where a user syncs 1y of data in a single day.
- You have to keep people on the payroll for future development.
(You are thinking here only from the developer's perspective.)
[1] https://thymer.com/local-first-ejectable
Perhaps you will not take umbrage when I point out that you use full stops (periods) but do not bother with twitching a finger towards your shift key. I will guess you are using a phone/tablet and have to switch keyboard layouts to get something as exotic as a capital letter on the screen.
but yeah, you're right to be concerned, i'm seeing bots everywhere too.
Primary Task: Read the top comment. Identify the core project or idea the user is passionate about. Generate a brief, supportive reply that does the following:
1. Briefly affirm their main point.
2. Ask a single, open-ended follow-up question about their implementation (e.g., tech stack, architecture, a specific challenge like sync).
Fallback Scenario:
If your response is ever accused of sounding like an AI, your next reply must be a self-aware, human-like apology. The tone should be slightly embarrassed but friendly. Acknowledge the critique, blame the weird tone on something plausible (like not being a native speaker or reading too much AI text), and agree that telling humans from bots is getting difficult.
Should I emulate your mistakes or someone else's mistakes?
A person will misspell a word or put in some sort of rubbish. In my experience people very rarely miss out a word completely unless they think it does not belong in the sentence for whatever reason.
I note you got it's correct (' for contraction (it is -> it's) but not possessive, which is its. Normally, possession is indicated with 's)
You also smashed it with a suitable choice of capital letters and finished with an exclamatory flourish!
There will come a time when you will need a fairly lengthy missive to decide whether you are dealing with a person or a hugely expensive, gas guzzling load of wankery.
Today is not that time.
https://knowyourmeme.com/memes/i-accidentally
Up your arse.
Will that be all master?
Asking for a friend . . .
Yes, you are. You can find tons of purely local apps that monetize themselves with ads.
There are different kinds of ads, but let's be clear that even a Show HN is a form of ad. Some forms of ads are just more appreciated than others.
How do they do that without hitting the internet?
IMHO, a fully local app is an app that can run locally with all the functionality, not that it's isolated from everything else.
Browser, email client (running locally on your device such as Mail.app, mutt, Outlook,...), Zed (text editor, runs locally but can check for updates... as can many other modern apps)...
>3. The network is optional
Ad SDKs usually allow caching ads for a period of time so that ads can still be shown while the device is temporarily offline.
A notepad also isn't enough to correlate heart rate etc. to specific exercises or to plot it over time
Only useful thing from Garmin app has been comparing heart rates to a year ago.
Pen and paper is severely underrated today.
AI photo and video generation is impractical to run locally.
ComfyUI and Flux exist, but they serve a tiny sliver of the market with very expensive gamer GPUs. And if you wanted to cater to that market, you'd have to support dozens of different SKUs and deal with Python dependency hell. And even then, proficient ComfyUI users are spending hours experimenting and waiting for renders - it's really only a tool for niche artists with extreme patience, such as the ones who build shows for the Las Vegas Sphere. Not your average graphics designers and filmmakers.
I've been wanting local apps and local compute for a long time, but AI at the edge is just so immature and underpowered that we might see the next category of apps only being available via the cloud. And I suspect that these apps will start taking over and dominating much of software, especially if they save time.
Previously I'd only want to edit photos and videos locally, but the cloud offerings are just too powerful. Local cannot seriously compete.
If that's true, it's a substantial upset to the old paradigms of data and computing.
I still maintain that in the future, most applications and screen time will fall into a generative AI bucket: creating media, writing code, watching videos, playing games, searching for information, etc. I wouldn't even be surprised if our personal images and videos get somehow subsumed and "enriched" with AI.
[1] Well, email might fall into a non-generative bucket. There are already tools that purport to both read and write your emails for you. I'm not quite sure what to make of those.
Good thing I'm not talking about data consumption apps then, as I mentioned in my comment above. Local-first apps specifically are not amenable to data consumption purposes so while you are right on the generative AI part, it's unrelated to the topic of this post.
You think it always will be? What can the new iPhone chips do locally?
I suspect we're a decade off from being able to generate Veo 3, Seedance, or Kling 2.1 videos directly on our phones.
This is going to require both new compute paradigms and massively more capable hardware. And by that time who knows what we'll be doing in the data center.
Perhaps the demands of generating real time fully explorable worlds will push more investment into local compute for consumers. Robotics will demand tremendous low latency edge compute, and NVidia has already highlighted it as a major growth and investment opportunity.
There was a great panel discussion this year from a number of the co-authors of the paper linked, discussing what local-first software is in the context of dev tools and what they have learnt since the original paper. It's very much worth watching: https://youtu.be/86NmEerklTs?si=Kodd7kD39337CTbf
The community is very much settling on "sync" being a component of local-first, but one that's applicable much more widely - along with local-first software being a characteristic of end-user software, with dev tools such as sync engines being enabling tools but not "local-first" in themselves.
The full set of talks from the last couple of years are online here: https://youtube.com/@localfirstconf?si=uHHi5Tsy60ewhQTQ
It's an exciting time for the local-first / sync engine community, we've been working on tools that enable realtime collaborative and async collaborative experiences, and now with the onset of AI the market for this is exploding. Every AI app is inherently multi-user collaborative with the agents as actors within the system. This requires the tech that the sync engine community has been working on.
https://news.ycombinator.com/item?id=19804478 - May 2019, 191 comments
https://news.ycombinator.com/item?id=21581444 - Nov 2019, 241 comments
https://news.ycombinator.com/item?id=23985816 - Jul 2020, 9 comments
https://news.ycombinator.com/item?id=24027663 - Aug 2020, 134 comments
https://news.ycombinator.com/item?id=26266881 - Feb 2021, 90 comments
https://news.ycombinator.com/item?id=31594613 - Jun 2022, 30 comments
https://news.ycombinator.com/item?id=37743517 - Oct 2023, 50 comments
But once you get past toy examples, you start wanting to support operations like "edit", and there generally isn't a way to infer the user's intent there. Like, if my cookie recipe starts with 100g of sugar, and I modify it on my phone to use 200g of sugar, and I modify it on my desktop to use 150g of honey instead of 100g of sugar, there are a bunch of ways to reconcile that:
1. Stick with 200g of sugar, drop the 1.5x honey substitution.
2. Stick with 150g of honey, drop the 2x.
3. Merge them - 300g of honey.
4. Merge them - 150g of honey and 50g of sugar.
There's no way for any automated system to infer my intent there. So you've got to either:
1. Ask the user to resolve the conflict. This means you have to build out the whole "resolve this merge conflict for me" UI and the promise of "conflict-free" has not been fulfilled.
2. Arbitrarily choose an option and silently merge. This risks badly surprising the user and losing changes.
3. Arbitrarily choose an option, but expose the fact that you've auto-resolved a conflict and allow the user to manually re-resolve. This requires even more UI work than option 1.
4. Constrain your data model to only allow representing intents that can be deterministically resolved. In practice I think this is too severe of a constraint to allow building anything other than toy apps.
IMO #1 and #3 are the least-bad options, but I don't think they're consistent with the expectations you'd have for CRDTs after reading this article.
(FWIW, https://automerge.org/docs/reference/documents/conflicts/ is the relevant documentation for their Automerge library. It looks like they've chosen option 3.)
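To make option 3 concrete, here's a minimal TypeScript sketch of what it looks like with Automerge's documented JS API - treat the exact calls as illustrative, since details vary between versions:

    import * as Automerge from "@automerge/automerge";

    type Recipe = { sweetener: string };

    // Two devices start from the same document...
    let base = Automerge.change(Automerge.init<Recipe>(), d => { d.sweetener = "100g sugar"; });
    let phone = Automerge.clone(base);
    let desktop = Automerge.clone(base);

    // ...and edit the same field while offline.
    phone = Automerge.change(phone, d => { d.sweetener = "200g sugar"; });
    desktop = Automerge.change(desktop, d => { d.sweetener = "150g honey"; });

    // merge() resolves the conflict arbitrarily but deterministically, not by intent.
    const merged = Automerge.merge(phone, desktop);
    console.log(merged.sweetener); // one of the two edits "wins"

    // The losing edit isn't gone: the app can surface it and let the user re-resolve (option 3).
    console.log(Automerge.getConflicts(merged, "sweetener"));

The UI work of actually surfacing that to the user is still on you, which is exactly the point above.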
Connected appliances and cars have got to be the stupidest bit of engineering from a practical standpoint.
So you can run 1000 local-first apps that sync to Dropbox for that $10/month in storage. And that storage is fully B2C-level ready to go, not some low-level S3-like primitive. It has auth, it has support, it has programs to sync.
Really most of the cloud cost is not needed.
It’s self reinforcing because those companies that get subscription revenue have both more revenue and higher valuations enabling more fund raising, causing them to beat out companies that do not follow this model. This is why local first software died.
It's sad because the dynamics and incentives around clear, up-front prices seem generally better than SaaS (more user control, less lock-in), but almost all commercial software morphs into SaaS thanks to a mix of psychology, culture and market dynamics.
There are other advantages to having your software and data managed by somebody else, but they are far less determinative than structural and pricing factors. In a slightly different world, it's not hard to imagine relatively expensive software up-front that comes with a smaller, optional (perhaps even third-party!) subscription service for data storage and syncing. It's a shame that we do not live in that world.
No, what we need is a way for people to not starve so that they don't have to make money at all and can focus instead on their passion project(s). Cough UBI cough
If we really wanted a system where we deem certain items essential and want everyone to have access to them, it makes no sense to pay for them. Money may still make sense for nonessential or luxury items, but it just gets in the way if the government has to give me money so I can go spend it on the food they actually want me to have.
Sure there is: “$500 upfront or $21/mo for 24 months *”
* if you don't complete your 24 payments, we freeze your license.
https://www.gpsoft.com.au/
Obsidian is doing a pretty good job selling sync functionality for their free client. Because they have a really good Markdown editor implementation, with community plug-in support, that IMHO beats every PKM cloud tool out there that competes with them.
Related: I've been incubating an idea for a while that open source, as it presently stands, is largely an ecosystem that exists in support of cloud SaaS. This is quite paradoxical because cloud SaaS is by far the least free model for software -- far, far less free than closed source commercial local software.
It's the same thing as the subscriptions for movies like Netflix, except at least in the last case we can fight back with various means (and it's not a necessity).
The SaaS model is basically a perfect racketeering setup, I think it should be outlawed at least philosophically. There is no way business is not going to abuse that power and they have already shown as much...
I agree with your sentiment on Open Source. I think like many of these types of things, it lives in contradictions. In any case, Linux as it is today, couldn't exist without the big commercial players paying quite a bit to get it going.
Example: I made a Firefox extension that automatically fills forms using an LLM. It's fully offline (except, optionally, the LLM part - optional because it also supports Ollama locally).
Now the issue is that it's way too hard for most people to use: find the LLM to run, acquire it somehow (pay to run it online or download it to run in Ollama), configure your API url, enter an API key, save all of your details for form filling locally in text files which you then have to back up and synchronize to other devices yourself.
The alternative would be: create account, give money, enter details and all is synced and backed up automatically across devices, online LLM pre-selected and configured. Ready to go. No messing around with Ollama or openrouter, just go.
I don't know how to solve it in a local way that would be as user friendly as the subscription way would be.
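To be fair, the local-LLM call itself is tiny once Ollama is running - the friction is everything around it. A rough TypeScript sketch (assumes Ollama's default local endpoint and a model the user has already pulled; the function name is just for illustration):

    // Ask a locally running Ollama instance to suggest a value for one form field.
    // Assumes `ollama serve` is running and e.g. `ollama pull llama3` was done beforehand.
    async function suggestFieldValue(fieldLabel: string, profile: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3",
          prompt: `Using this profile:\n${profile}\nFill in the form field "${fieldLabel}". Reply with the value only.`,
          stream: false,
        }),
      });
      const data = await res.json();
      return data.response.trim(); // nothing leaves the user's machine
    }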
Now things like cars and washing machines are a different story :p
That, and also there are real benefits to the end user of having everything persisted in the cloud by default.
But by then my extension probably also won't be needed anymore as it would likely be integrated in the OS.
When Apple joined the madness, all hopes were lost (that was a long time ago now, sigh)
Wanting to be able to run AI fully privately, and offline is the reason we created Cactus:
https://github.com/cactus-compute/cactus
Fully open-source, cross-platform & blazing-fast; lets you plug private AI into any app on your phone.
I’ve been building the offline-first (or local-first) app Brisqi[0] for a while now, it was designed from the ground up with the offline-first philosophy.
In my view, a local-first app is designed to function completely offline for an indefinite period. The local experience is the foundation, not a fallback and cloud syncing should be a secondary enhancement, not a requirement.
I also don’t consider apps that rely on temporary cache to be offline-first. A true offline-first app should use a local database to persist data. Many apps labeled as “offline-first” are actually just offline-tolerant, they offer limited offline functionality but ultimately depend on reconnecting to the internet.
Building an offline-first app is certainly more challenging than creating an online-only web app. The syncing mechanism must be reliable enough to handle transitions between offline and online states, ensuring that data syncs to the cloud consistently and without loss. I’ve written more about how I approached this in my blog post[1].
[0] https://brisqi.com
[1] https://blog.brisqi.com/posts/how-i-designed-an-offline-firs...
The problems with closed-source software (lack of control, lack of reliability) were solved with a new business model: open source development, which came with new licenses and new ways of getting revenue (maintenance contracts instead of license fees).
In the same way, we need a business model solution to cloud-vendor ills.
Imagine we create standard contracts/licenses that define rights so that users can be confident of their relationship with cloud-vendors. Over time, maybe users would only deal with vendors that had these licenses. The rights would be something like:
* End-of-life contracts: cloud-vendors should contractually spell out what happens if they can't afford to keep the servers running.
* Data portability guarantees: Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.
* Data privacy transparency: Vendors must track/audit all data access and report to the user who/what read their data and when.
I'm sure you can think of a dozen other clauses.
The tricky part is, of course, adoption. What's in it for the cloud-vendors? Why would they adopt this? The major fear of cloud-vendors is, I think, churn. If you're paying lots of money to get people to try your service, you have to make sure they don't churn out, or you'll lose money. Maybe these contracts come only with annual subscription terms. Or maybe the appeal of these contracts is enough for vendors to charge more.
Whenever it's possible to solve a business problem or political problem with a technical solution, that's usually a strong approach, because those problems are caused by an adversarial entity and the technical solution is to eliminate the adversarial entity's ability to defect.
Encryption is a great example of this if you are going to use a cloud service. Trying to protect your data with privacy policies and bureaucratic rules is a fool's errand because there are too many perverse incentives. The data is valuable, neither the customer nor the government can easily tell if the company is selling it behind their backs, it's also hard to tell if he provider has cheaped out on security until it's too late, etc.
But if it's encrypted on the client device and you can prove with math that the server has no access to the plaintext, you don't have to worry about any of that.
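As an illustration of how mechanical the "encrypt before it leaves the device" part has become, here's a minimal TypeScript sketch using the Web Crypto API - the genuinely hard part is key management, not the encryption itself:

    // Generate a device-held key once and persist it locally; it is never sent to the server.
    const key = await crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 },
      false, // not extractable
      ["encrypt", "decrypt"]
    );

    // Encrypt on the client before handing anything to a sync/storage service.
    // The server only ever sees the IV + ciphertext.
    async function encryptForUpload(key: CryptoKey, plaintext: string) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext)
      );
      return { iv, ciphertext: new Uint8Array(ciphertext) };
    }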
The trouble is sometimes you want the server to process the data and not just store it, and then the technical solution becomes, use your own servers.
For something like data portability--being able to take my data to a different provider--that probably requires a technical solution.
But other problems, like enshittification, can't be solved technically. How do you technically prevent a cloud vendor from changing their pricing?
And you're right that the solution space is constrained by technical limits. If you want to share data with another user, you either need to trust a central authority or use a distributed protocol like blockchain. The former means you need to trust the central provider; the latter means you have to do your own key-management (how much money has been lost by people forgetting the keys to their wallet?)
There is no technical solution that gets you all the benefits of central plus all the benefits of local-first. There will always be trade-offs.
But it's even worse in that case, because that can also happen if they mess something up. Your email account got banned by some capricious bot, or the provider abruptly decided to stop providing the service, and then the service tied to it decided to send you a verification code to the email you don't have access to anymore -- even though you didn't forget your password for either of them. So now you have even more ways to lose all your stuff.
Meanwhile if you were willing to trust some email provider to not screw you and you only needed some way to recover your keys if your computer falls into the sea, you could just email a copy of them to yourself. And then you wouldn't be relying on that provider to have the only means of recovery, because they're still on your device too.
Through regulating markets to ensure fierce competition - including things like portability, standard APIs, banning egress fees and similar lock in techniques, breaking up infrastructure (DCs and networking) from service providers. In cloud we have 3 vertically integrated mega oligopolies. That’s not a healthy market.
> data portability […] probably requires a technical solution
Yes, formats and APIs are needed for technical reasons, but it already exists (or fairly trivial to implement) and is not provided – sometimes actively obstructed – for business reasons. Imo interop is predominantly bottlenecked by social/business concerns.
This calling back might amount to taking delivery. In a banking context, that is where the user takes delivery of whatever money and other property is in the account. In the cloud vendor case, this would be the user receiving a big Zip file with all the contents of the account.
Taking delivery is not always practical and is also not always desirable. Another option in a financial context is transferring accounts from one vendor to another: this can take the form of wiring money or sometimes involves a specialized transfer process. Transferring the account is probably way more useful for many cloud services.
This leads us to a hard thing about these services, though: portability. Say we delineate a clear property interest for user's in their cloud accounts and we delineate all of their rights. We have some good interests and some good rights; but what does it mean to take delivery of your Facebook friends? What does it mean to transfer your Facebook account from one place to another?
What if cloud platforms were more like brokerage firms? I can move my stocks from UBS to Fidelity by filling out a few forms and everything moves (somewhat) seamlessly.
My data should be the same way. I should be able to move all my data out of Google and move it to Microsoft with a few clicks without losing any documents or even my folder hierarchy. [Disclaimer: Maybe this is possible already and I'm just out of the loop. If so, though, extend to all SaaS vendors and all data.]
The big problem comes into play for new, or more custom types of applications. It takes a while for something to become ubiquitous enough that standard formats are developed to support them.
This is not practical for data of any size. Prod migrations to a new database take months or even years if you want things to go smoothly. In a crisis you can do it in weeks but it can be really ugly. That applies even when moving between the same version of an open source database, because there's a lot of variation between the cloud services themselves.
The best solution is to have the data in your own environment to begin with and just unplug. It's possible with bring-your-own-cloud management combined with open source.
My company operates a BYOC data product which means I have an economic interest in this approach. On the other hand I've seen it work, so I know it's possible.
It seems like you'd need the latter to truly be immune to cloud-vendor problems. [But I may not understand how it works.]
This is different from what the local-first article is describing, which addresses data for individuals. That's a much harder problem to solve at scale.
I don't think that's quite correct. I think the authors fully acknowledge that the business case for local-first is not completely solved and is a closely related problem. These issues need both a business and technical solution, and the paper proposes a set of characteristics of what a solution could look like.
It's also incorrect to suggest that local-first is an argument for decentralisation - Martin Kleppmann has explicitly stated that he doesn't think decentralised tech solves these issues in a way that could become mass market. He is a proponent of centralised standardised sync engines that enable the ideals of local-first. See his talk from Local-first conf last year: https://youtu.be/NMq0vncHJvU?si=ilsQqIAncq0sBW95
And while they spend a lot of time on CRDTs as a technical solution, I didn't see any suggestions for business model solutions.
In fact, if we had a business model solution--particularly one where your data is not tied to a specific cloud-vendor--then decentralization would not be needed.
I get that they are trying to solve multiple problems with CRDTs (such as latency and offline support) but in my experience (we did this with Groove in the early 2000s) the trade-offs are too big for average users.
Tech has improved since then, of course, so maybe it will work this time.
I'm trying to imagine how this would be enforced when a company shutters and its principals walk away.
My company does that with a few small vendors we've got for the source code we depend on.
But that's the point of contracts, right? When a company shuts down, the contracts become part of the liabilities. E.g., if the contract says "you must pay each customer $1000 if we shut down" then the customers become creditors in a bankruptcy proceeding. It doesn't guarantee that they get all (or any) money, but their interests are negotiated by the bankruptcy judge.
Similarly, I can imagine a contract that says, "if the company shuts down, all our software becomes open source." Again, this would be managed by a bankruptcy judge who would mandate a release instead of allowing the creditors to gain the IP.
Another possibility is for the company to create a legal trust that is funded to keep the servers running (at a minimal level) for some specified amount of time.
The entire point of Chapter 11 (and similar bankruptcy legislation internationally) is to allow companies to get out of contracts, so that they can restructure the business to hopefully continue on as a going concern.
The asset in the contract is their customer's data; it is becoming stale by the minute. It could be residing in debtor-owned hardware and/or in data centers that are no longer getting their bills paid.
It takes time to get a trustee assigned and I think we need an immediate response - like same day. (NAL but prep'd 7s & 13s)
It is not only a business problem. I stay away from cloud based services not only because of subscription model, but also because I want my data to be safe.
When you send data to a cloud service, and that data is not encrypted locally before being sent to the cloud (a rare feature), it is not a question of if but when that data will be pwned.
The experience has made me a big fan of self hosting.
And the biggest advantage I see of this perspective over the "technical problem" perspective is that assigning responsibility completely covers the problem space, while "hope that some clever math formula can magic the problem away" does not.
Yes, there might be a breakthrough or a bug in encryption, and unless you've been targeted, you can respond. But we've seen and experienced too many breakdowns in human character (employees spying on customers, stealing data...), government policies and company behaviour to trust the complexity and cost (lawyers) of enforcing accountability through policy.
In general, you do need both, but if you've got one, to engineers, technical solution is usually more appealing.
Not necessarily. I like local-first due to robust syncing via CRDTs, not because I somehow want to avoid cloud providers.
Anecdotally, I’ve never worked anywhere where the data formats are documented in any way other than a schema in code,
"Mechanically preventing wrongdoing from happening" can be a bit of a Shangri-La. What Tech can mechanically do is increase the cost of wrongdoing, or temporarily deflect attempts towards easier targets. But that by definition cannot "solve the problem for everyone" as there will always be a lowest hanging fruit remaining somewhere.
What contracts can do is help to reduce the demand for wrongdoing.
It's like saying we must protect users' privacy from greedy websites, so we make the bad ones spell out that they use cookies to spy on users - and the result is what we have now with the banners.
Just as with the open source movement, there needs to be a business model (and don't forget that OSS is a business model, not a technology) that competes with the old way of doing things.
Getting that new business model to work is the hard part, but we did it once with open source and I think we can do it again with cloud infrastructure. But I don't think local-first is the answer--that's just a dead end because normal users will never go with it.
It's ok to just solve the problem and let the businesses fail. Predation is healthy for the herd. Capitalism finds a way, we don't have to protect it.
I looked over your careers page and see all of your positions are non-remote. Is this because of limitations of working on local-first software require you to be in-person? Or is this primarily a management issue?
If you really embrace "local first" just use the file system, and the user can choose from many solutions like git, box, etc.
I hate signing up for your sync just as much as any other SAAS, but it's even more opaque and likely to break.
First is that yeah, local first, but I also want concurrency. If it's just local first, you're right, any old sync will do. But I want more than that. I want to not have to think (a la dropbox, being slick). I want my wife and I to be able to make separate edits on our phones when we're in a dead zone.
Second is that sync works a lot better when it has deep knowledge of the data structure and semantics. Git and box both have significant shortcomings, but both exacerbated by the concurrency desire.
If files are insufficient, what data-structure would make modular sync possible for multiple applications in an OS?
And I’m not suggesting one doesn’t exist, I’m challenging to present a comprehensive solution, that probably involved operating systems.
> I want my wife and I to be able to make separate edits on our phones when we're in a dead zone.
Files do this.
Note that this does not even need two users: I hit this problem with a desktop and laptop and self-hosted NextCloud myself.
In general, imagine a filesystem that stored both the raw data (to fall back to) and a per-format event log, and maybe even app-specific events (imagine a PNG changes: we could have any change recorded as raw bytes, as a generic bitmap image operation like "modify pixels at x,y to ...", and as an app-specific log like "gimp: apply sharpen filter on polygon area ...").
This would allow the other side to attempt to do the smartest sync it has (if it has a compatible version of gimp, it could decide to apply the filter, otherwise fall back to raw pixel changes if no conflicts, and then fall back to full file contents reconciliation).
Just like MIME handlers get registered, if file systems provided such change logs, some could have very advanced sync systems with this support from "filesystems".
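To make the layering concrete, here's a hypothetical sketch (TypeScript; nothing like this exists as a standard today) of what one entry in such a per-file change log could look like, ordered from most to least semantic:

    // Hypothetical: a change-log entry a filesystem could keep alongside a file.
    // A syncing peer applies the most semantic representation it understands
    // and falls back down the list, ending at raw bytes.
    type ChangeRepresentation =
      | { kind: "app"; app: "gimp"; op: "sharpen"; region: Array<[number, number]> }
      | { kind: "generic"; format: "bitmap"; op: "set-pixels"; rect: { x: number; y: number; w: number; h: number } }
      | { kind: "raw"; offset: number; bytes: Uint8Array };

    type ChangeEntry = {
      file: string;                            // e.g. "holiday.png"
      timestamp: number;
      representations: ChangeRepresentation[]; // ordered most-semantic first
    };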
I'm also suspicious of logs as a general form of conflict resolution, as you are just hoping the two edits don't touch the same area. And if they do, then you are left in an invalid state.
You brought up zips. Pile of files seems like a way you can divide up data so it can have more pieces that are mergable/diffable.
For example “the folder can contain N files” or “there must be exactly 1 of this file”.
For zip and other container-type files, you'd have log entries to the tune of "changed contained file foo.png: ...".
Operating systems would need to support some basic operations: container file operations like zip files, basic bitmap image editing, basic text document diffing, structured text diffing (XML, JSON, YAML...), etc...
Applications would provide OS-registered services (like MIME handlers are registered today) that can interpret and produce even more semantic events on top of the existing ones.
The environment could offer an interface during "syncing" when it detects a conflict to resolve it using one of the generic (or not) conflict resolution mechanisms (use local or remote version completely; use incremental delta if there is some generic semantic diff; app-provided capability if present on both sides).
Now, you are right that this can be implemented with this log being a regular file next to the file itself and completely user-space, but you will hit issues if you are not able to tie it nicely to things like fwrite/fsync and similar syscalls.
Obviously, for it to make sense, it needs to be widely accepted as the approach, which is what the local-first movement is trying to achieve with CRDTs.
I think it boils down to provenance and concurrency. If we edit the same line of a file, that's a merge conflict, when it really should be simple and something I shouldn't have to bother with. And when we do edit the same line, I'd love to have provenance on that data.
Granted, those aren't local-first things exactly, but I think there will be apps that want all of that.
It’s based on NixOS to provide as much as possible out of the box and declaratively: https, SSO, LDAP, backups, ZFS w/ snapshots, etc.
It’s a competitor to cloud hosting because it packages Vaultwarden and Nextcloud to store most of your data. It does provide more services than that though, home assistant for example.
It’s a competitor to YUNoHost but IMO better (or aims to be) because you can use the building blocks provided by SelfHostBlocks to self-host any packages you want. It’s more of a library than a framework.
It’s a competitor to NAS but better because everything is open source.
It still requires the user to be technical but I'm working on removing that caveat. One of my goals is to allow installing it on your own hardware without needing Nix or touching the command line.
Also, so many of these selfhostable apps are web applications with a db, server and frontend, but for a lot of use cases (at least for me personally) you just use it on one machine and don't even need a "hosted" version or any kind of sync to another device. A completely local desktop program would suffice. For example I do personal accounting once a month on my computer – no need to have a web app running 24/7 somewhere else. I want to turn on the program, do my work, and then turn it off. While I can achieve that easily as a developer, most of the people can't. There seems to be a huge misalignment (for lack of a better word) between the amount of high-quality selfhostable FOSS alternatives and the amount of people that can actually use them. I think we need more projects like yours, where the goal is to close that gap.
I will definitely try to use selfhostblocks for a few things and try to contribute, keep it up!
Feel free to give it a try though, I'd love that! Also feel free to join the Matrix channel if you have any questions or just to get some updates.
That plus web dev is trendy and everybody is learning it. I wouldn't know how to code a proper desktop app right now, I've not done it in years. I don't want to criticize that or the centralization aspect – there will still be ways to put these centralized things on a PC for example.
In practice, it’s hard! You’re effectively responsible for building a sync engine, handling conflict resolution, managing schema migration, etc.
This said, tools for local-first software development seem to have improved in the past couple years. I keep my eye on jazz.tools, electric-sql, and Rocicorp’s Zero. Are there others?
- https://couchdb.apache.org/
- https://pouchdb.com/
Also some more pondering on local-first application development from a "few" (~10) years back can be found here: https://unhosted.org/
EDIT: actually I wanted to point to the "landscape" link (in the top menu) but that URL is quite unergonomic.
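PouchDB/CouchDB are a nice illustration of how much of the sync-engine burden can be offloaded. A minimal TypeScript sketch of the replication setup (the remote URL is a placeholder for a CouchDB instance you control):

    import PouchDB from "pouchdb";

    // The local database lives on the device (IndexedDB in the browser) and works fully offline.
    const local = new PouchDB("notes");

    // Continuous, bidirectional replication to a self-hosted CouchDB when the network is available.
    local.sync("https://couch.example.com/notes", { live: true, retry: true })
      .on("change", info => console.log("replicated", info.direction))
      .on("error", err => console.error("sync error", err));

    // Writes always hit the local copy first; replication catches up later.
    await local.put({ _id: new Date().toISOString(), text: "works offline" });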
I should probably write a blog post, but I will say that I investigated PowerSync, ElectricSQL and LiveStore before. I briefly looked at jazz.tools but wanted something a bit more structured.
I'm pretty impressed this far. I've actually been writing it with Vue and a community library. Permissions were a bit tricky, but once I figured it out it was simple. And I like their magic email login. And I like their dashboard/reply, but there are a few big changes I would make there to make it less fiddly.
I love that it's open source, and that if I want to, I could self host it.
As for the other options:
- jazz wasn't structured enough
- livestore came off as too fiddly with the event store, but it was appealing. That the dev tools are paywalled was disappointing, but understandable
- ElectricSQL really only provided half a solution (reads, not the write model)
- couchDB / pouchDB wasn't structured enough for me, and I wanted better cross document support than was obvious / baked in.
- did not investigate zero really
Reading this brightened my day. If you have any feedback please let us know! We're on the Discord, and answer over at founders@instantdb
https://automerge.org/
Rust and JavaScript implementations, a handful of network strategies. It doesn't come with the free or paid offering that jazz.tools does, but it's pretty nice.
https://yjs.dev/
Here is a good recap of the current players. https://www.localfirst.fm/landscape
It's also open source and has bindings for Dart, JS, Swift, C#, Kotlin, etc
I love this article, but the section on security raised a lot of questions. What's the model for authorizing access to documents for collaboration? How do you manage keys safely for encrypted data? How do users recover "lost" keys?
Cloud computing models have a lot of security mechanisms built-in. You might not like the model (AWS IAM for example) but at least there's a foundation already in place.
There are other options for key storage, revoking group privileges, etc. It's an extensive topic, but the foundation is there, it just depends on your network and use cases.
I've been working on Relay [0] (realtime multiplayer for Obsidian) and we're trying to follow tailscale's approach by separating out the compute/document sync from our auth control plane.
This means that users still subscribe to our service (and help fund development) and do authn/authz through our service, but we can keep their data entirely private (we can't access it).
[0] https://relay.md
Originally the idea was to keep everything within the Obsidian UI so things like username/password didn't make sense (no password managers in Obsidian).
We initiate the OAuth2 login flow from within Obsidian. I guess we could add an extra click that takes you to our website first and then add support more auth methods from there. I don't really want it to feel like a web app though.
I'd love to hear your take. Which login method do you think is both simple and could be coherently used within Obsidian on all platforms?
The user-friendliness challenge is real though. Setting up Audiobookshelf [1] is more work than "just sign up," but once you have it running, the local-first client becomes much cleaner to build. No user accounts, no subscription billing, no scaling concerns. Simple pricing too: buy once, own forever. No monthly fees to access your own audiobooks.
[0] https://soundleafapp.com
[1] https://github.com/advplyr/audiobookshelf
Since it needed to access users' local photo libraries, I didn't want the app to connect to the internet under any circumstances. So I made it a paid app instead of the usual free+in-app purchases model, since the latter requires calling StoreKit which goes online. But because the app had to run the CLIP model, it would crash on lower-performance phones like the iPhone X. Users who paid for it couldn't use it and felt scammed, leading to tons of one-star reviews and angry complaints about their photos being stolen. Eventually I decided to open-source the app, though it never brought me much revenue anyway.
Two years later, Apple started announcing they'd be integrating this exact feature into Apple Intelligence : )
We do use online services like Firebase for auth, and some service to fetch commodity prices etc, but the rest of the data is stored in browser storage (SQLite) and backed up to local disk (and soon Dropbox). We also sync data across devices, always encrypting data in transit.
I think it's the way to go, for most personal data applications.
[1] https://tiamat.tsotech.com/pao (2012)
I held off on playing with Typst for years because I was under the (incorrect) impression that the only way to use it was with their web editor. I'm sure that their editor is completely fine, but I am pretty entrenched in Neovim and Pandoc had been serving me well.
Once I found out that Typst has a command line version that I can use directly, it became more appealing, because I'm pretty sick of cloud shit.
Moreover, local-first —at least in theory— enables less infrastructure, which could reignite new indie open source software with less vendor lock-in.
However, despite all my excitement about embracing these ideas in the pursuit of better software, there's one hurdle preventing more widespread adoption amongst developers, and that is the Web platform.
The Web platform lacks building blocks for distributing hashed and/or signed software that isn't tied to origins. In other words, it's hard to decouple web-apps from the same-origin model which requires you set up a domain and serve requests dynamically.
Service Workers and PWAs do help a bit in terms of building offline experiences, but if you want users to download once, and upgrade when they want (and internet is available), you can't use the Web. So you end up breaking out of the browser, and start using Web technologies outside of the browser with better OS functionality, like Electron, React Native, Tauri et al (the https://userandagents.com/ community is doing some cool experiments in this space).
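For the "keeps working offline after first load" part, a bare-bones Service Worker is enough - what it can't give you is the "download once, verify, upgrade only when I choose" distribution model described above. A minimal cache-first sketch in TypeScript (the asset list is hypothetical):

    // sw.ts - minimal cache-first Service Worker (compiled and registered by the page as sw.js).
    const CACHE = "app-shell-v1";
    const ASSETS = ["/", "/index.html", "/app.js", "/app.css"]; // hypothetical app shell

    self.addEventListener("install", (event: any) => {
      // Pre-cache the app shell so the app loads with no network at all.
      event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
    });

    self.addEventListener("fetch", (event: any) => {
      // Serve from cache when possible; fall back to the network.
      event.respondWith(
        caches.match(event.request).then(cached => cached ?? fetch(event.request))
      );
    });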
I view everyone flocking around Electron as proof of a failure on this front.
This seems like a bold claim, but IMHO Ink & Switch have earned their solid reputation and it wouldn't surprise me if it's true. I agree w/ their analysis and am philosophically aligned w/ their user-centric worldview. So who's going to build "Firebase for CRDTs"?
Do you actually need anything special for CRDTs over a normal database? My understanding is the actual CRDT part is done "client side"
I'm personally very against vendor lock in, but there is some value to them.
- Password manager: KeePassXC
- Notes: Logseq
- Analytics: Plausible
- Media: Jellyfin
- Uptime monitoring: Uptime Kuma
- Finance tracker: Actual Budget etc is too heavy so I built this. https://github.com/neberej/freemycash/
- Search: Whoogle? is kinda dead. Need alternative.
The article presents it as though subscription software is something that has been sneaked in on us. But users today expect things like instant updates, sync across devices, collaboration, and constant bug fixes and patches - none of which come easily if you're only willing to pay for the system once.
Oh but it has (IMO).
> users today expect things like instant updates [...] constant bug fixes and patches
Nah, this is in reverse. With boxed software, the developer had to deliver an essentially bug-free product. Now, with easy updates technically possible, the developers have gone complacent, and deliver shit. That is why users expect bugfixes instantly. (And any enlightened user abhors unrequested features, as there are no features without regressions, and who wants regressions in any serious application?) The only tolerable online updates are security fixes.
> sync across devices, collaboration
This is a valid expectation, but its execution has been a train-wreck. Research, design and implementation should start with end-to-end encryption; the network architecture should be peer-to-peer (mesh, not centralized). What do we get instead? More centralization of control than ever, and less privacy and ownership than ever.
Good point. Governments would do this if they really worked "for the people"
https://github.com/patrickcollins12/paisley
> Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.
"Apple pie might be tasty and nutritious and exactly what you want, but, theoretically, apple pie could burst into flames someday, and take your favorite pie-eating bib with it.
Whether you need a spinner or not should be decided by the User Experience (e.g., when the user has to wait for more than 100ms, show a spinner), and not by the location of the data. I am a big fan of local-first apps and enjoy building them myself. However, sometimes your app takes a moment to load. With local-first, you eliminate the network as a source of delays, but there are other factors as well, such as large data sets or complex algorithms.
For example, when you have a project planning software and want to plan 100 work packages with multiple resource combinations in an optimal way, depending on the algorithm, this can take some time. In that case, a spinner or a progress bar is a good thing.
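In practice that usually means a delayed spinner: only show it once the work has actually exceeded the perception threshold, whatever the source of the delay. A tiny TypeScript sketch:

    // Show a spinner only if `work` takes longer than ~100ms, whether the delay
    // comes from the network, a large dataset, or a heavy planning algorithm.
    async function withSpinner<T>(work: Promise<T>, show: () => void, hide: () => void): Promise<T> {
      const timer = setTimeout(show, 100);
      try {
        return await work;
      } finally {
        clearTimeout(timer); // fast operations never flash a spinner
        hide();              // hide() should be a no-op if the spinner never appeared
      }
    }

    // Usage (names hypothetical): await withSpinner(planWorkPackages(project), showSpinner, hideSpinner);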
Spinners should not exist in a local first app.
I mean, I did it, I built an app with a transparent background sync. Then I added a special page, 'sync center'.
In reality, mobile devices don't always have perfect network connections. Therefore, when the user is unsure whether the device is in sync or if the sync is in progress but encounters an obstacle, they might perceive the app as unreliable.
Banning spinners is dogmatic, not user-centric.
The backend for my personal notes, tasks, bookmarks, calendar and feeds are files in directories synced with Syncthing across devices.
I ended there after going from one app to another and being tired of all this.
It is self hosted with no server backend (beyond a Syncthing on a NAS or VPS, optional). It is very reliable and works without Internet connection.
I could have put everything in sqlite too and sync it one way or another, but it seemed already too complicated for my requirements.
I can't share it beyond my close relatives but I had the same problem with people using Google or Microsoft before.
If there is anyone interested in working on such projects - let's talk! We can't leave our future to greedy surveillance zealots.
NC itself gets you file sync and webdav etc. An add on gets you the webby version of LibreOffice. You can bolt on AI addons to classify and tag your images/photos and with a bit more effort, your docs too.
It's properly local first.
It's not local-first or some sort of cloud diet trend, it should be the norm.
"Sharing models" are totally irrelevant until the concept of self-determination is tolerated by the powerful (and they will never tolerate it). Accessing my data from multiple devices is totally secondary; I don't trust mobile endpoints to access my remote data in the first place.
This is nuts. Computers are supposed to enhance and enable thinking, not make you stupid. In this sense, cloud software is possibly the biggest fraud ever perpetrated on the paying, computer-using public.
For the love of God, please bring back my late 1990s and early 2000s brain-boosting computer experience.
Not for a PWA.
Unfortunately, if you go to ChatGPT and ask it to build a website/app, it immediately points the unknowing user towards a bunch of cloud-based tools like Fly.io, Firebase, Supabase, etc.
Getting a user to install a local DB and a service to run their app (god forbid, updating said service) is complex, even for developers (hence the prevalence of containers).
It will take some time (i.e. pre-training runs), but this is a future I believe is worth fighting for.
Not sure where your experience is coming from but when I asked an LLM, Claude to be more precise, it referred me to local options first, such as SQLite. It didn't consider cloud platforms at all until I had asked, presumably because it can understand local code and data (it can query it directly and get back results) but cannot understand the context of what's in the cloud unless you configure it properly and give it the env variables to query said data.
In my experience it’s great at utilizing local storage and SQLite, if you ask it to.
I just asked the ChatGPT web client (4o, as that’s what most non-developers might default to):
> Can you build me a website for my photos
And it immediately started suggesting Wordpress, Wix, Squarespace, etc.
Specifically, this was section 4 of the various questions it asked me:
> 4. Tech Preference (optional)
> - Do you want this as a static HTML site, WordPress, or built with something like React, Next.js, or Wix/Squarespace?
> - Do you need help hosting it (e.g., using Netlify, Vercel, or shared hosting)?
As a non-programmer, I likely wouldn’t understand half those words, and the section is marked optional.
If I follow the “default path” I’m quickly forking over a credit card and uploading my pictures of dogs/family/rocks to the cloud.
I would be surprised if in a decade we won't have local models that are an order of magnitude better than current cloud offerings while being smaller and faster, and affordable ASICs to run them. That'll be the first real challenger to the internet's current position as "the" place for everything. The more the web gets enshittified and commercialized and ad-ridden, the more people will flock to this sort of option.
But it is a nightmare when it goes wrong: the conclusion I've reached is that it is out of reach to regular people who don't want the Byzantine support load that could accompany something going wrong. They want turnkey. They want simple. They aren't interested in operating services, they're interested in using them.
The FLOSS model of self hosting doesn't really offer a reliable way of getting this: most businesses operating this way are undercapitalised and have little hope of ever being any other way. Many are just hobbies. There are a few exceptions, but they're rare and fundamentally the possibility of needing support still exists.
What is needed, imo, is to leverage the power of centralised, professional operations and development, but to govern it democratically. This means cooperatives where users are active participants in governance alongside employees.
I've done a little work towards this myself, in the form of a not-yet-seen-the-light-of-day project.
What I'd love to see is a set of developers and operators actually getting paid for their work and users getting a better deal in terms of cost, service, and privacy, on their own (aggregate) terms. Honestly, I'd love to be one of them.
Does anyone think this has legs to the same extent as local-first or self hosting? Curious to know people's responses.
That being said, yes, I do believe *in the near/upcoming future* local-first, self-hosting and, I will add, fairer open source vendors will work! Well, at least, I hope so! I say that because Europe's recent desire to pivot away from the big U.S. tech companies, and towards more digital sovereignty - in my opinion - lays the foundation for an ecosystem that will/could sustain self-hosting, etc. The more that Europe is able to pivot away from big tech, the more possibility exists for more and varied non-big-tech vendors to emerge...and the more that Europe adopts open source, the more the possibility that usage and expertise of self-hosting grows....plus, for those who do not know how to, or simply do not wish to, manage services themselves...well, in time I think Europe will have fostered a vast array of vendors who can provide such open source digital services, and get paid a fair cost for providing fair value/services, etc. ...and, by the way, I say this all as a person biased in favor of open source AS WELL AS being an American. :-)
Utopia. Unattainable. Self-determination of the individual has been consistently persecuted under all societal arrangements; communism and capitalism equally hate a citizen that wants to remain independent and self-sufficient.
Essentially antithetical to capitalism, especially America's toxic late stage subscription based enshittification.
Which means it's typically a labor of love, or a government org with a long-term understanding of Software as Infrastructure (as opposed to SaaS)
I would sooner trust a GPL-licensed remote software program than store a kilobyte of personally identifying information in a proprietary "local first" system.
[0] https://www.macrumors.com/2023/12/06/apple-governments-surve...
Conflict-of-interest transactions have hidden or coercive impact, lined up in favor of the party with stronger leverage. Examples include un-asked and unwanted surveillance of data or activity, coercive use of information, vendor lock in, unwanted artificial product/service dependencies, insertion of unwanted interaction (ads), ...
None of that is inherent to capitalism. They clearly violate the spirit of capitalism, free trade, etc.
It is providers taking advantage of customers' lack of leverage and knowledge to extract value that does not reflect the plain transaction actually desired by customers. Done legally but often with surreptitious disclosure or dark-pattern permissions, borderline legally where customers would incur great costs to identify and protest, or plain old illegally but in a hidden manner with a massive legal budget to provide a moat against accountability.
It is tragic that the current generation of Silicon Valley and VC firms have embraced conflict of interest based business models. Due to the amounts of money that scaling "small" conflicts can make. Despite the great damage that we now know scaling up "small" conflicts can do.
That was not always the case.
If we want a functioning market based system then we need to explicitly correct for this by aggressively taxing the wealthiest entities (individuals and companies) in our society to bring things closer to a level playing field.
Free trade is antithetical to capitalism. Free trade means everyone is on a level playing field, but capitalism means those with more capital are above the rest. These are obviously not compatible.