Hybrid minivans are probably the apex of people-hauling right now. If you’re the kind of person who isn’t into crossovers, you can buy a minivan that’ll get over 30 mpg with the whole crew on board! That’s honestly pretty awesome. Also neat is a feature your hybrid might have that you may not even know about.
Today, Matt Hardigree wrote about how he got 38 mpg in a Kia Carnival, which is fantastic. Olesam shared a neat hybrid trick:


Fun trick I learned on our Pacifica Hybrid that might apply to other manufacturers too: if you open the hood with the vehicle on (or start it with the hood open) the engine will turn on! It’s apparently a safety feature; I guess if the engine’s off you might let your guard down and stick your hand into a dangerous area (fan or belt) that might cause an, uh, uncomfortable interference issue were the engine to suddenly turn on.
I combed through the Chrysler Pacifica forums, and yep, it’s a real feature! Some quick searching suggests it’s a pretty common hybrid trait: the Chevy Volt has it, as does the Mazda CX-90. By contrast, some hybrids shut their engines down when the hood is opened, like the Ford Maverick and the BMW X4 M40i. I think this warrants more research, because it’s fascinating!
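If you’re curious what that interlock might look like in logic terms, here’s a minimal sketch. To be clear, it’s purely illustrative: the names and the policy switch are my own invention, and real cars bury this in the powertrain control module, not in Python.

```python
from dataclasses import dataclass

@dataclass
class HybridState:
    vehicle_on: bool
    hood_open: bool
    engine_running: bool

def apply_hood_interlock(state: HybridState, policy: str) -> HybridState:
    # Two behaviors observed in the wild:
    #   "force_run"  - Pacifica/Volt-style: start the engine so nobody
    #                  assumes the bay is safe just because it's quiet.
    #   "force_stop" - Maverick-style: shut the engine down so it can't
    #                  bark to life with hands near the belts.
    if state.vehicle_on and state.hood_open:
        state.engine_running = (policy == "force_run")
    return state

# Hypothetical usage:
state = HybridState(vehicle_on=True, hood_open=True, engine_running=False)
print(apply_hood_interlock(state, "force_run").engine_running)   # True
print(apply_hood_interlock(state, "force_stop").engine_running)  # False
```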
David says he’s going to build his new WWII Jeep in his driveway. Readers feel his pain and offer an excellent tip. Canopysaurus:
My tip: Harbor Freight sells portable garages (read: tents) with closable front and rear panels for about $170. One of these would easily hold an MB. I used a ShelterLogic storage tent myself, big enough to hold a car. It’s still standing 13 years, three hurricanes, and countless thunderstorms later, not to mention relentless heat. So, I can vouch for the stability of these types of shelters. And when you’re done, collapse them down and dispose of them, or store them for future use.
Matt also wrote a Morning Dump about AI. TheDrunkenWrench has a great way to visualize it:
One of humanity’s biggest problems is that we LOVE to anthropomorphize things.
AI is just a “Yes, and” slot machine.
It doesn’t give you an answer to something, it gives you what it thinks an answer looks like.
If you ask it to write code for a task, it just makes what it thinks the code should look like. This is the slot machine part. You feed enough coins (prompts) into the digital one-armed bandit and eventually you get a hit.
You can’t build reliable systems on machines that “learn”, cause at some point it’s just gonna feed ball bearings into the intake because it picked up a BS line somewhere about cleaning the cylinder walls.
Code is dumb and it needs clearly defined instruction sets that DO NOT CHANGE in order to have any level of reliability.
We didn’t teach a rock to think. We tricked everyone into thinking we taught a rock to think.
This will all blow up spectacularly.
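TheDrunkenWrench’s slot-machine framing maps neatly onto how generation actually works: the model samples from a probability distribution instead of looking anything up. Here’s a toy sketch of the difference (entirely illustrative; the function names and torque values are made up for the example):

```python
import random

def torque_spec(fastener: str) -> int:
    """Deterministic lookup: same input, same answer, every time."""
    specs = {"lug_nut": 100, "drain_plug": 25}  # ft-lb, illustrative values
    return specs[fastener]

def slot_machine_answer(prompt: str) -> str:
    """Toy stand-in for sampled generation: returns something that
    *looks* like an answer, drawn from a distribution, with no
    guarantee of correctness."""
    plausible = ["100 ft-lb", "95 ft-lb", "80 ft-lb", "110 ft-lb"]
    return random.choice(plausible)

print(torque_spec("lug_nut"))                                      # always 100
print([slot_machine_answer("lug nut torque?") for _ in range(5)])  # varies per run
```

Pull the lever enough times and one of those sampled answers will be right, which is exactly the commenter’s point.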
Finally, G. K. told a hilarious anecdote about the Rover/Sterling 800 in Mark’s Shitbox Showdown:
Fun fact about the Rover/Sterling 800 series. As a show of faith, Honda allowed Rover to build the European-market Legend at its facility in Cowley, Oxfordshire, alongside the Rover version. However, Honda had no faith in the Brits’ ability to put the cars together correctly, and so set up a finishing line at its Swindon, Wiltshire plant to correct defects before sending the cars to dealers…and there were many defects.
Have a great evening, everyone!
(Top graphic: Chrysler)
So, uh, if I shut down my engine because I forgot to add oil after changing it, it will restart when I pop the hood? Interesting choice.
I like the Maverick approach more. Hell, just give us a red LED under the bonnet.
We didn’t teach a rock to think, but it turns out that to do things people have already explained a billion times, you don’t need to think; you just need to know how to look those things up and summarize them.
The generic “AI is bad” stuff is so 2023. I understand why people think it’s bad and why people think it’s good; it’s time to move on to other blog templates.
My dude, if you ask ChatGPT-5 which states include the letter “R”, it will include Indiana, Illinois, Texas, Massachusetts, and/or Minnesota.
Hell, you can literally convince ChatGPT-5 that Oregon and Vermont do not have an “R” in their names.
On average, GPT models will still answer questions incorrectly 60% of the time.
AI is crap.
https://gizmodo.com/chatgpt-is-still-a-bullshit-machine-2000640488
https://futurism.com/study-ai-search-wrong
The problem is and will remain that LLMs hallucinate. Some do that a lot. Even with RAG and tool access, they can’t be trusted for either precision or recall. And they’re very good at putting together convincing text from prompts and context, and very suggestible, so it’s almost trivially easy to make them produce not just erroneous but dangerous content.
The problem with LLMs is the same as with Tesla’s “full self-driving.” It’s very good and very impressive, so people get complacent and just trust it to be right or to do the right thing. But sometimes it fails, spectacularly, and when you build systems that make decisions or actually do things IRL based on input from an LLM (or FSD), very bad things happen. People die. That’s not an exaggeration.