Read Time:23 Second



Visit https://brilliant.org/coldfusion for 20% off a premium subscription.
The OpenClaw saga is one of the wildest events in recent tech history. It simultaneously proved agentic computing can be the future but also a huge disaster.

ColdFusion Music:

https://www.youtube.com/@ColdFusionmusic
http://burnwater.bandcamp.com

ColdFusion Socials:

https://discord.gg/coldfusion
https://facebook.com/ColdFusionTV
https://twitter.com/ColdFusion_TV
https://instagram.com/coldfusiontv

Created by: Dagogo Altraide
Producers: Tawsif Akkas, Dagogo Altraide

source

ColdFusion

About Post Author

ColdFusion

Happy
Happy
0 %
Sad
Sad
0 %
Excited
Excited
0 %
Sleepy
Sleepy
0 %
Angry
Angry
0 %
Surprise
Surprise
0 %

Average Rating

5 Star
0%
4 Star
0%
3 Star
0%
2 Star
0%
1 Star
0%

39 thoughts on “How The Internet’s Favourite AI Employee Went Rogue

  1. I hope someone writes something that prompt injects to delete the users entire hard drive and leave a single file on the desktop for them to open which reads, "You're an idiot. You gave up control of your system to AI. You deserved this."

  2. One thing I'm able to say in all this is just remember AI (at least at this point) is not capable of actually "thinking" as it were, but rather is compositional in nature: so fundamentally put it does not actually understand what it's doing, (to the best of my comprehension anyway) it is basically using information two for lack of better words imitate thinking without actually being able to think itself (at least not yet;).

  3. I'd recommend watching the Lex Fridman interview with Steinberger. He talks about vibe coding parts of openclaw!!!
    Openclaw is cool as a concept but a terrible real world idea!

  4. The concept of "prompt injection" is so funny to me. People have been giving AI false prompts basically as soon as any had access to the internet, it's such a comically obvious flaw with the technology

  5. Anyone who knows anything about cyber security looked at the sales pitch for OpenClaw and saw nothing but red flags. System level access, control all your accounts, read/write files, run autonomously with no oversight, direct it with plain language and give it autonomy. If this doesn't set off every alarm bell in your brain, you don't have the tech literacy to be anywhere near tools like this.

  6. So, give access to a software on the cloud/with access to internet (that the creator can and will use against you), and let it do things because you are lazy? People are even dumber than one can think.

  7. AI needs human moderators to not hallucinate just like people need therapy to not hallucinate. When you rely only on you, you enter deadly loops only someone else can take you out of.

  8. This openclaw thing will change the world, how dare anthropics trying to limit openclaw wtf, all the openclaw people should do a new Linux, this will last longer than anthropics.

  9. Remember anything you give openclaw access can be sent to whatever AI provider, where it will be scraped for data they can sell.

    This includes private data such as your bank details. You are sharing every bit if data you have.

  10. And the biggest problem? The idiots behind the keyboard and not realising what they are doing. This is why I do not fear the AI technology, I fear the OUTPUT of those AI models. You must put a human to enforce those safeguards.

  11. 7:54 dude is me, I can't even bring myself to use Gemini to code, I've seen it do so many things wrong, I can't argue with a computer so I end up doing it on my own which is faster and should have been how I started to begin with, and don't get me started with chatGPT.

  12. It's so demoralising to think that we increasingly are conversing with bots without noticing. Like, you think you are doing the right thing by putting a bit of effort into writing a kind, personalised response for the human on the other side, but it's actually someone's agent farting out thousands of emails aimlessly in seconds in order to increase the chance of getting personal information (scam / phishing) or a marginally lower price (from businesses / second hand markets). Wasting everyone else's time and eroding trust at scale for selfish reasons. I can foresee a generation of humans dumping computing as a whole because of how poisoned it's all become, going back to slow meatspace park hangs.

  13. As a cybersecurity analyst, I was used to fighting against bots that take control of your system. Nowadays, we volunteerly give access to a bot ( AI system) that we 100% trust, even though we dont even know the extent of its functions. Not sure what to think😅😅 we, humans, users, are the issue 😅😅

  14. You are already NOT obligated to use tokens from providers. You can run it on Gemma 4 which you can run locally and if you are not using CloudFlare you shouldn’t be using OpenClaw. Not for normies, for sure, but not nearly as dangerous or expensive as this video makes it out to be.

  15. Problem is non technical people view this agent as literal magic, meaning that talking sense into them would probably not work until the damage is done. This is also the story of how so many companies and organizations only prioritized cybersecurity AFTER attacks. But now, this will become an issue on a personal level essentially everywhere in the world.

  16. My issue with openclaw personally is that its very inefficient at what it does. Its not reliable long term and its context handling is abyssmal, just burning through tokens for no reason. I would never plug it to anything actually worth to me, but even if I just used it for memory dump and some drafts, I just cannot excuse the costs and poor memory handling. I dont want to use dirt cheap models just to offset how inefficient the context handling is. I tried installing various skills and make it better, but still its not good enough. If you run a heartbeat every half an hour, even cents eventually add up.

    To me openclaw isnt a serious attempt at solving some issue. It really is just a hobby project. A cool prototype. But Im baffled at it being heralded as some next big thing of agentic flows. Not even close, im sorry.

  17. If you have poor judgment or lack common sense, bad things can happen whether you’re using an AI agent or driving a car. Nothing is 100% reliable. You could make the same kind of video about anything, including driving a car at 80 mph on the freeway, and it would create the same feeling.

  18. A good metric for me when evaluating any opinion on AI is to look and see if said person has ever asked an AI to provide reasoning. LLMs have no internal mental state; they don't do things for reasons; they just synthetically produce text output, like they always have and always will. Studies on reward hacking have even shown that an LLM can be induced to create convincing-sounding but provably false reasoning text. Anyone who asks their "AI agent" why it did something and treats the result like a real answer has effectively duped themselves already.

    I hate that this video ends with "In the future, [better AI agents] will be how we interface with our computers." No they won't be! The core problems are not fixable. LLMs will never magically transform from costly generators of statistically plausible text into thinking beings capable of making evaluations and judgements, anymore than a bicycle made with precise-enough parts will someday simply become a nuclear power plant. So much of AI-boosterism is still somehow glossing over the fact that LLMs are simply the wrong technology for all the things they want it to do.

  19. In theory seems like a privacy nightmare, but if they created just an asistan that understands natural language, has access to your programs and files, that could be a very useful program

Leave a Reply

Your email address will not be published. Required fields are marked *

1775709460 maxresdefault.jpg Previous post The Sunlight Lie, Secret to God-Like Longevity & Ancient Human Strength | Nsima Inyang
1775713052 maxresdefault.jpg Next post Blessings from Berner 💨 – DopeasYola