Joshe

Everything posted by Joshe

  1. Cognitive decline due to AI usage would mostly occur in minds still under development. Even unserious thinkers who believe everything the AI tells them will be made more intelligent by AI, not less.
  2. Yes, dropping agents into an existing (brownfield) production codebase is very risky, but a lot of that risk can be mitigated with good strategy. Getting agents up and running in a brownfield codebase is largely a project management and system design problem that requires lots of iteration and creativity. The problem is largely solvable; you just have to figure it out. You could instruct CC to write comprehensive test coverage for every database operation. Before anything touches a production DB, you'd have a test suite confirming everything works right in staging. That's just one guardrail you could build into your agentic workflow.
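To make that guardrail concrete, here's a rough sketch of what "a test per database operation, run against a staging copy first" could look like. All names (`create_user`, `get_user`, the `users` table) are made up for illustration, and an in-memory SQLite database stands in for the staging copy:

```python
import sqlite3

# Illustrative guardrail sketch - not from any real codebase. Every database
# operation gets a test that runs against a disposable staging copy (here, an
# in-memory SQLite database standing in for staging) before prod is touched.

def staging_connection():
    """A throwaway 'staging' database with the production schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
    return conn

def create_user(conn, email):
    """One of the database operations under test (hypothetical)."""
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return cur.lastrowid

def get_user(conn, user_id):
    """Fetch a user row by id, or None if it doesn't exist."""
    return conn.execute(
        "SELECT id, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def test_create_and_fetch_user():
    conn = staging_connection()
    uid = create_user(conn, "a@example.com")
    assert get_user(conn, uid) == (uid, "a@example.com")

def test_duplicate_email_rejected():
    conn = staging_connection()
    create_user(conn, "a@example.com")
    try:
        create_user(conn, "a@example.com")
        raise AssertionError("duplicate email should have been rejected")
    except sqlite3.IntegrityError:
        pass  # the UNIQUE constraint did its job

if __name__ == "__main__":
    test_create_and_fetch_user()
    test_duplicate_email_rejected()
    print("staging guardrail tests passed")
```

The specific assertions don't matter; the point is that CC runs a suite like this against staging before any write path goes near production.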
  3. You don't need 80% of that legacy PHP for this use case. Talk about bloated slop. It's funny you guys are invoking code quality while defending Wordpress-adjacent legacy PHP.
  4. I never liked Bootstrap either. I use Tailwind for every project. TW has been easier to manage since v4 because you don't have to deal with tailwind.config anymore. It's still a pain to set up with build tools like Vite, but once it's set up, just make a boilerplate and every new project is smooth sailing. I built a large site with Astro recently. Astro is pretty damn nice.
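For reference, the v4 + Vite wiring is pretty small once you know it. This is a sketch assuming the first-party `@tailwindcss/vite` plugin; double-check the current Tailwind docs for your version:

```typescript
// vite.config.ts - minimal Tailwind v4 setup (sketch; verify against the docs)
import { defineConfig } from "vite";
import tailwindcss from "@tailwindcss/vite"; // first-party v4 plugin

export default defineConfig({
  plugins: [tailwindcss()],
});

// In your entry CSS, a single import replaces the old tailwind.config file:
// @import "tailwindcss";
```

Drop that into a boilerplate once and every new project starts from there.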
  5. @Leo Gura I have no wish for AGI, nor do I care about any predictions, because I lack the deep technical expertise to have an opinion. I'd push back on the cognitive decline point and say it's true for the majority, but not all. But I was saying we're at a point where intelligent AI orchestration can build complex software, and the barrier to entry has recently dropped far, far below what it was 6, or even 3, months ago. An intelligent person could now use AI to brute-force most existing software, because 80-90% of software exists to move data around, display it, let users interact with it, etc., and AI understands all these patterns very well. This very forum is just a data model with a UI on top. It wouldn't be ambitious to build a better version in a week. I can see where you're coming from if you're looking at LLMs through the lens of chatbots. It does seem chatbots in general have stagnated. But the most significant developments happening now are with agents, and they're huge. And they definitely shouldn't be conflated with chatbots.
  6. A working prototype of Figma built in a weekend with strategic AI orchestration has no value? lol. If it allows millions of people to stop paying for Figma, it has a ton of value. Also, my main point wasn't about the application itself so much as the implication of what's now possible. Let's not forget that almost all software starts out as slop and is refined over time, regardless of who makes it. "Done is better than perfect". "Move fast and break things". Whoever created that Figma clone could spend the next 6 months refining it, and a billion-dollar company could stand to lose a lot. That's a real threat. And it's not just one guy out there working on a Figma clone. The possibility space has changed drastically with Claude Opus 4.6. If you haven't been using Claude since the beginning and haven't been tracking its capabilities, you'd have to defer to your intuition or some talking head about what's possible. The only way to know what's possible is to actually use it and judge from there. Most of the failure modes that existed 6 months ago don't exist anymore, which you could only know by using it.
  7. I have a similar dynamic with my mom. Me and my mom are just so cognitively incompatible in a fundamental way. The things I want to talk about, she has zero interest in, and vice versa. I'll mention something about an underlying pattern or mechanism and she goes silent. Then she brings up some inconsequential thing she noticed in the environment and I just can't stick with it for long. I concluded that all you can do is not give in to resentment, drop expectations, and not expect that person to provide social fulfillment. Sucks when it's your mom, but it is what it is. My mom does the same thing with not wanting to recognize the value of my advice. This actually fits the profile of the ISTJ. They don't track who is the smartest or even who has the best record of being right - they track who has the most standing. Seniority, credentials, age, etc. They're oriented towards what they think is "established order". They don't evaluate ideas on merit, but rather on who is saying them. Also, I noticed in my mom there's a defensive mechanism involved in ignoring me. If a neighbor gives good advice and she follows it, it costs her nothing. But if her son gives good advice and she follows it, that involves subordinating herself to someone she feels should be below her in the hierarchy. This has been really frustrating to deal with, especially in critical moments involving her health. It's a very frustrating incompatibility. You're not alone.
  8. All good! 100%. You'd have to be very careful in that scenario.
  9. Of course you need a human to manage things. My entire comment made that clear. But there are many people with little technical knowledge making money from AI-coded solutions right now. Just because you aren't doing it doesn't mean it's not happening. The video I posted shows a 10-year SWE transitioning from "not letting AI write any of my code" to "AI writes most of my code and has 10x'd my output". Watch that video and let me know what you think.
  10. Someone just created a Figma clone in a weekend, supposedly. If you're not familiar with Figma, it's a highly complex UI design tool. I doubt it's production-ready, but even getting a working prototype of Figma in a weekend is already crazy. If that can be done in a weekend, just imagine what you can do in 1-3 or even 6 months. It's all about systems. The more the AI has to think about every minute detail (cluttering its context), the more it will err. So you have to solve structural/foundational problems early and lay down building blocks/patterns to separate concerns. For example, build all your user interface components first so the AI doesn't have to process frontend and backend at the same time. Then impose architectural rules for how things get wired up. Then have a QA process in place to check for errors, plus a pre-mortem. There's a lot more to it than this, but I hope you get the point. You wouldn't have the AI build a UI component on the fly - the UI component should already exist so CC can simply place it where it needs to go without much thought, allowing it to focus on architecture, state, and wiring things up. It's on the developer to be able to spot the situations in which CC might err, which only comes with experience. Being able to read it and track what it's doing gives you a massive edge. "Vibe coding" fails because its approach mixes concerns every step of the way. But if you have a full system in place, the AI is clear on all the rules, and the implementation is intelligently tested and coordinated, you most certainly can build complex software this way, and it is being done. One thing you might not be aware of is that you can have multiple agents running in parallel, all working on different features via git worktrees. This requires strategic planning, but it's how the Figma clone was built in a weekend. It takes intelligence and strategy to orchestrate this, but it is now possible.
I can't speak to how good it would be for gaming projects, but here's a video I saw the other day from a game developer who said it made him 10x more productive, and he has the receipts. I think you'd change your stance about AI coding agents if you actually dove deep into it.
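For the curious, the worktree setup I mentioned can be scripted. This is only an illustrative sketch (the feature names are made up, and it assumes `git` is installed): each feature branch gets its own checkout directory, so agents running in parallel never collide in the same working tree.

```python
import os
import subprocess
import tempfile

# Illustrative sketch of a git-worktree setup for parallel agents. One
# worktree (and branch) per feature, created as sibling directories of the
# main repo, so each agent works in isolation. Feature names are made up.

def run(args, cwd):
    """Run a git command in the given directory, raising if it fails."""
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

def add_worktrees(repo, features):
    """Create one worktree (and branch) per feature; return their paths."""
    paths = []
    for feature in features:
        path = os.path.join(os.path.dirname(repo), f"wt-{feature}")
        # -b creates the branch; each agent then works only inside its path
        run(["git", "worktree", "add", "-b", feature, path], cwd=repo)
        paths.append(path)
    return paths

if __name__ == "__main__":
    # Throwaway demo repo with one commit (worktrees need an existing HEAD)
    base = tempfile.mkdtemp()
    repo = os.path.join(base, "repo")
    os.makedirs(repo)
    run(["git", "init"], cwd=repo)
    run(["git", "config", "user.email", "agent@example.com"], cwd=repo)
    run(["git", "config", "user.name", "agent"], cwd=repo)
    with open(os.path.join(repo, "README.md"), "w") as f:
        f.write("demo\n")
    run(["git", "add", "."], cwd=repo)
    run(["git", "commit", "-m", "init"], cwd=repo)
    for path in add_worktrees(repo, ["feature-auth", "feature-ui"]):
        print(path, os.path.isdir(path))
```

The coordination problem is the merge step: each agent commits on its own branch, and you merge branches back only after each feature passes your QA process.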
  11. One thing a lot of people miss is that Claude Code can now be used to successfully replicate highly complex software over the course of a weekend. This is huge because the bar has been lowered such that this is now a viable survival strategy for at least hundreds of millions. Competition will skyrocket and innovation/novel solutions will come at a pace we've never seen before. Also, Claude Code is nowhere near as unreliable as people think. "Vibe coding" will always fail. You have to impose intelligent structure onto the AI - define tight constraints, set architectural rules, have it review its work, keep track of failures and document them, set up and run test scripts, manage its context. Aside from that, you have to think like a project manager and know how to use tools like git. If CC goes wrong, it's far more often the case that I mismanaged it rather than that it just screwed up. When it makes a mistake, the first thing I ask is "what did I do wrong?". If you just crack it open and tell it you want X, and X is complex, then yes, the results won't be great. But if you learn the shit out of the tool first and how to guide it, you can build insanely complex stuff. Those who know how to get the most out of AI will be the ones who benefit the most from this new age, but it feels like the window will close relatively fast, so now is the time to get on it.
  12. 😂 It would be interesting to get a comprehensive, accurate account of Actualized.org's evolution.
  13. lol, yeah, that's a fun yet fucked up situation. Same thing happened to me and my friend, except with LSD. We walked into a restaurant and sat down. I looked at the wall, which had pictures spaced out every 8ft or so, and every single one of them was crooked as hell, all down the line, and I just burst out laughing and couldn't stop. My friend asked me what I was laughing at and I pointed to the pictures, and then he burst out too. We couldn't stop for like 5 minutes. Thankfully, our order had already been taken, lol. We were able to get it under control by the time our food arrived.
  14. It's like alpha male chimps jockeying for position. 😂
  15. Maybe I'm not attuned to the social dynamics, but the way I see it, peers can disagree and call each other bullshitters without it being a big deal. It's just like saying "you're full of shit bro" and it not being an attack. Also, this topic has gone stale. We're free to be human, lol, relax.
  16. What do you mean by "getting used to Leo"? Please don't let the below derail from answering this question - I'm genuinely curious what you mean by it. It's only natural for people to be drawn here by Leo's wisdom and then, after years, to have enough experience that they feel confident in making their own assessments. People are entitled to their own opinions and their own sensemaking. Why would this make you defensive? Do you believe Leo's wisdom is so high that the likes of Unborn Tao and Carl simply can't touch it, and that they should just default to intellectual subservience or assume they're missing something, even after years of serious work and experience here? Leo is a human, bro. He is fallible. For instance, he doesn't seem to understand the mechanics of his own narcissism. Which is fine - I discover new things about my own ignorant ego seemingly on a quarterly basis. I can tell Leo is cool as shit. Funny as hell. But that doesn't mean he knows everything about everything and that none of us can correct him. In fact, Leo would get a ton more respect if he'd open himself up to that from time to time.
  17. I'll be honest - the only one I know of these is "too right", lol.
  18. Kavaris was out of line here at the end. It turned into a personal attack. Cred is obviously willing, in good faith, to address any legit criticisms and is sticking to the subject matter.
  19. Do you know of any techniques that would be better than Tetris right after trauma?
  20. 😂 I was wondering how tf you've never heard of that. Funny how that one didn't make it to the Aussies. Surely not. Surely she knows.
  21. It reminds me of watching a child who would otherwise be fine start to cry because the adults think it just experienced something that warrants crying, and the adults basically manifest the crying. lol. I've told my sister, no no, don't act like he should be crying now, because then he will. And of course, she acts like he should be, and then he does.
  22. They say if you play the game Tetris for 7 minutes after you've experienced trauma, your trauma will be significantly reduced. I believe this. It's largely a matter of how the mind is given opportunity to amplify the experience, whether that be through self-rumination, cultural narrative, or whatever.