Discussion (49 Comments)
I'm hesitant to even take a look at this project due to the whole "vibe coded in 3 weeks" thing, though. Hearing that says to me that this is not serious or battle-tested and might go unmaintained or such. Do you think these are valid concerns to have?
Planning, design, management alignment, finding customers, integrating with other products, waiting for review, etc. Basically all the human stuff that can't be automated away.
Your comment reminds me to add building a support team to the list.
Good software is expensive regardless of the involvement of LLMs because you need someone to take responsibility. Large companies will save a buck because there may be fewer people needed to take said responsibility, but it's probably a marginal saving compared to the overall scheme of things.
The last time I "vibe coded" something (internal), I liked it because I couldn't find an external solution.
I admire coders who can finish their code into a deliverable, usable piece.
The issue here is software abundance: people will start to hesitate because of the absurd pile of options they have to evaluate.
It reminds me of the statistics on global ice cream sales. People want certainty, so they choose chocolate or vanilla :)
Therefore many good software projects will have trouble finding users.
You can just vibe code it yourself. If your requirements are narrower (e.g. you only need support for 3 networks, not 12), you will end up with something that takes less time to develop (possibly less than a day), has a smaller surface for problems, and is much better tailored to your specific needs. If you pay attention to what the LLM is doing, it will also be easier to maintain or extend further.
The surface for security vulnerabilities also gets narrower, since you "only" have to trust the LLM (which is still a huge ask, but still better than LLM + 1 random person).
E.g. Buffer charges around $50 per year per social media account, which gives you an unlimited number of collaborating user accounts. And their single user plans are even cheaper.
I don't see how self-hosting would be a worthy investment of your time/effort in this case, unless you are in some grossly mismanaged organization where you have several devops engineers paid for doing literally nothing.
Before writing any code, I spent time on detailed specs, an architecture doc, and a style guide. All public: https://github.com/brightbeanxyz/brightbean-studio/tree/main...
I broke the specs into tasks that could run in parallel across multiple agents versus tasks with dependencies that had to merge first. This planning step was the whole game. Without it, the agents produce a mess.
I used Opus 4.6 (Claude Code) for planning and building the first pass of backend and UI. Opus holds large context better and makes architectural decisions across files more reliably. Then I used Codex 5.3 to challenge every implementation, surface security issues, and catch bugs. Token spend was roughly even between the two.
Where AI coding worked well: Django models, views, serializers, standard CRUD. Provider modules for well-documented APIs like Facebook and LinkedIn. Tailwind layouts and HTMX interactions. Test generation. Cross-file refactoring, where Opus was particularly good at cascading changes across models, views, and templates when I restructured the permission system.
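For context, this is the shape of boilerplate that worked well. A minimal sketch with hypothetical model and view names (ScheduledPost etc.), not code from the actual repo:

```python
# Minimal sketch of the standard Django CRUD the agents handled well.
# Model and view names are hypothetical, not taken from the repo.
from django.db import models
from django.urls import reverse_lazy
from django.views.generic import CreateView, ListView


class ScheduledPost(models.Model):
    body = models.TextField()
    publish_at = models.DateTimeField()
    created_at = models.DateTimeField(auto_now_add=True)


class ScheduledPostListView(ListView):
    model = ScheduledPost
    paginate_by = 25  # standard pagination, nothing exotic


class ScheduledPostCreateView(CreateView):
    model = ScheduledPost
    fields = ["body", "publish_at"]
    success_url = reverse_lazy("post-list")  # hypothetical URL name
```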
Where it fell apart: TikTok's Content Posting API has poor docs and an unusual two-step upload flow. Both tools generated wrong code confidently, over and over. Multi-tenant permission logic produced code that worked for a single workspace but leaked data across tenants in multi-workspace setups. These bugs passed tests, which is what made them dangerous. OAuth edge cases like token refresh, revoked permissions, and platform-specific error codes all needed manual work. Happy path was fine, defensive code was not. Background task orchestration (retry logic, rate-limit backoff, error handling) also required writing by hand.
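To make the tenant-leak failure mode concrete, here is a hedged sketch (hypothetical Workspace/Post models, not the project's actual permission code) of the pattern that passes single-workspace tests but leaks across tenants, next to the scoped query that doesn't:

```python
# Hypothetical models for illustration; not the project's actual code.
from django.db import models


class Workspace(models.Model):
    name = models.CharField(max_length=100)
    members = models.ManyToManyField("auth.User", related_name="workspaces")


class Post(models.Model):
    workspace = models.ForeignKey(Workspace, on_delete=models.CASCADE)
    body = models.TextField()


# The leaky pattern: looks up by primary key only. Passes tests when every
# fixture lives in one workspace, but lets a member of any workspace fetch
# any post just by guessing an id.
def get_post_leaky(user, post_id):
    return Post.objects.get(pk=post_id)


# The scoped pattern: every query starts from the workspaces the user
# actually belongs to, so cross-tenant ids raise DoesNotExist instead of
# resolving.
def get_post_scoped(user, post_id):
    return Post.objects.get(pk=post_id, workspace__members=user)
```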
One thing I underestimated: Without dedicated UI designs, getting a consistent UX was brutal. All the functionality was there, but screens were unintuitive and some flows weren't reachable through the UI at all. 80% of features worked in 20% of the time. The remaining 80% went to polish and making the experience actually usable.
The project is open source under AGPL-3.0. 12 platform integrations, all first-party APIs. Django 5.x + HTMX + Alpine.js + Tailwind CSS 4 + PostgreSQL. No Redis. Docker Compose deploy, 4 containers.
Ask me anything about the spec-driven approach, platform API quirks, or how I split work between the two models.
I ask, not to condemn, but to find out what your process was for developing the requirements. Clearly it was done with LLM help but what was the refinement process?
One main thing I did was to use Claude's deep research feature to get a good understanding of what other tools are offering (features, integrations, etc.).
Then each feature in the specs document got refined with manual suggestions and screenshots I took of other tools.
1. You mentioned developing tasks in parallel: how many agents were you actually running at the same time? Did you ever reach a point where, even if you increased the degree of parallelism, merging and reviews became the bottleneck, and increasing the number further didn't speed things up?
2. I really relate to the idea of "80% of features in 20% of the time, then 80% on polish." Did you use AI for this final polishing phase as well? In other words, did you show the AI screenshots of the screens and explain them? Also, looking back, do you feel that if you had written the initial specifications more carefully, you could have completed the work faster?
First I triggered all work streams per layer and brought them to a level of completion I was happy with. Then you merge them one after another (challenging the implementation on GitHub with @codex, and rebasing when you move to the next work stream).
This is roughly how it looked:
Layer 0 - Project Scaffolding
Layer 1 - Core Features:
  Stream A - Content Pipeline
  Stream B - Social Platform Providers
  Stream C - Media Library
  Stream D - Notification System
  Stream E - Settings UI
Layer 2 - Collaboration & Engagement:
  Stream F - Approval & Client Portal
  Stream G - Inbox
  Stream H - Calendar & Composer Enhancements
  Stream I - Client Onboarding
So I ran up to 4 agents in parallel, but to be honest that was the maximum level of parallelism my brain could handle; I really felt like the bottleneck here. Additionally, token usage gets very high when you have so many agents working at the same time, so I very often hit my Claude session token limits and had to wait for the next session to begin (I do have the 5x Max plan).
Also calling HTMX old makes me feel old.
It's simple, it works, it's efficient, safe, and there are tons of online resources for it. Excellent choice, even more so when using a coding agent.
And for HTMX, I simply wanted something lightweight that is not very invasive, to keep things simple and dependencies low.
In my head this was a good way to keep complexity low for my AI agents :-)
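For anyone who hasn't used it: HTMX stays on the server-rendered path, so the Django side is just a view that returns an HTML fragment when the HX-Request header is present. A hedged sketch with hypothetical model and template names:

```python
# Hedged sketch of a typical Django + HTMX endpoint; model and template
# names are hypothetical. HTMX sends the HX-Request header, so the view
# can return just the fragment that an hx-get/hx-swap attribute targets.
from django.shortcuts import render

from myapp.models import Post  # hypothetical model


def post_list(request):
    posts = Post.objects.order_by("-created_at")
    if request.headers.get("HX-Request"):
        # Partial template containing only the list markup.
        return render(request, "posts/_list.html", {"posts": posts})
    return render(request, "posts/list.html", {"posts": posts})
```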
Svelte is even older (2016; SvelteKit was just a new version in 2022)
SQLAlchemy is ancient (2006)
Use newer tech, like HTMX (2020)
(/s obviously)
What you describe has also been my experience so far building projects mostly with AI from detailed specs, but with Rails instead of Django.
Questions: why no X? Do you have a feature to resize (summarize?) the text to fit into short boxes?
If you're building anything serious and your data integrity is important, use Postgres.
Postgres is much stricter, and always has been. MySQL tried to introduce several strict modes to mitigate the problems it had, but I would always recommend using Postgres.
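A hedged sketch of what "stricter" means in practice (the connection string is a placeholder, and the MySQL comparison refers to running without STRICT_TRANS_TABLES):

```python
# Sketch: Postgres rejects oversized values where legacy (non-strict)
# MySQL modes would silently truncate. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()
cur.execute("CREATE TEMP TABLE t (name VARCHAR(5))")
try:
    cur.execute("INSERT INTO t (name) VALUES (%s)", ("definitely-too-long",))
except psycopg2.errors.StringDataRightTruncation as exc:
    # SQLSTATE 22001: Postgres refuses the row outright; non-strict MySQL
    # would have stored 'defin' and raised only a warning.
    print("rejected:", exc)
```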