The small car is seemingly an endangered species. Crossover SUVs dominate much of the world, and even once beloved small car nameplates have become chunky crossovers. It’s a big deal when an automaker comes out with a tiny car concept, and small car fans have something to cheer about. The Mazda Vision X-Compact looks fantastic on the outside (see also: the Vision X Coupe), and on the inside, this new concept car recognizes that enthusiasts don’t like screens – but replaces them with something even worse. More on that in a bit.
Mazda revealed its Vision X-Compact at the Japan Mobility Show yesterday, and car enthusiasts have understandably fallen deeply in love. At only 12.5 feet long, this cutie is just a tad larger than a Japanese kei car and over a foot shorter than a Mazda 3. Back in the day, as in the 2010s, we used to call these “city cars,” and America used to have such awesome examples as the Honda Fit, the Ford Fiesta, and the Mazda 2. In fact, this concept car is five inches shorter than a U.S.-market 2011 Mazda 2. Great!
There’s so much going right for this car, from an adorable application of Mazda’s still-sexy design language to its gorgeous Soul Red paint. I even love how Mazda bucked trends and deleted the giant infotainment screens that fill cars. Then it all falls apart, because Mazda’s vision for the future of car tech is just ugh. Which I’ll get to, I promise.
Crossover City Car

The Vision X-Compact is a design study, and if you were wondering what the “X” is supposed to mean, well, Mazda says it means “cross.” So, this is supposed to be a bit of a crossover city car. It does sort of have the proportions of a crossover, and I do like the idea of pumping some crossover traits into a city car. It’s nice to have a high-riding seat in a runabout! Is it an EV, a PHEV, or ICE-powered? Mazda doesn’t say.
No matter what might power it, I am in absolute love with this design. I adore it so much that, if Mazda had announced the Vision X-Compact as a production car available with a manual transmission, I’d be seriously considering making my first new car purchase in nearly a decade. The Vision X-Compact is a continuation of Mazda’s brilliant and timeless Kodo “Soul of Motion” design language. For more than a decade, Mazdas have largely avoided the sharp creases and jagged edges of their competitors and instead featured smooth, flowing lines.

Something I love about Mazda’s Kodo design philosophy is how it plays with paint. A Kodo Mazda in Soul Red has such beautiful depth that you don’t really see with any other car on the road in Mazda’s price brackets. It’s also awesome that Mazdas still largely look kind and happy in a world where angry grilles come from the factory on almost everything else.
This is a properly small car, too. Its wheelbase is only 99 inches, which is about eight inches shorter than a Mazda 3’s wheelbase and only an inch longer than the old Mazda 2’s. I’m also a sucker for the glass roof and Volvo-like taillights. I have no notes and no real complaints about the design.

The interior has a lot going for it, too. Look at that, Mazda deleted the entire infotainment system! That’s great! I can forgive the car for not having any buttons or visible controls. It is a design study, after all, not a prototype.
That said, I do wish the designers had done something a bit more exciting inside than a flat, entirely featureless dashboard. Somehow, the interior design of the X-Compact is so simplistic that a Tesla Model 3’s interior looks busy in comparison. I think the lack of a giant tablet should have been an excuse to make something striking.

The only screens in this cockpit are the tiny instrument cluster and your phone, which would sit right next to the instrument cluster. Alright, so how would this car work without buttons or screens? I’m sorry you asked.
Wait, What?
Let’s just jump right into it with Mazda’s blurb about the car:
The MAZDA VISION X-COMPACT is a model designed to deepen the bond between people and cars through the fusion of a human sensory digital model and empathetic AI. Acting like a close companion, it is capable of engaging in natural conversation and suggesting destinations, helping expand the driver’s world. This represents Mazda’s vision for the future of smart mobility, where vehicles and people form an emotional connection, much like a friend.

Mazda says elsewhere that the AI is supposed to help drivers form a “heartfelt relationship” with their car. Mazda designer Kaisei Takahashi clarified to Autoblog what this means, and it’s something:
“Picture this: you are behind the wheel, but you are not alone. There is a warm presence, not intrusive, just aware. It might say, ‘Hey, remember that cafe you mentioned last week? There is a fun back road that will get us there. Way more interesting than this highway.’”
“In the future, a Mazda vehicle will be a companion that makes every journey richer. Like spending time with a friend, it will invite dimension, variety, satisfaction, and a feeling of being understood.”
Apparently, the AI is also programmed to give you words of encouragement like “Ooh, nice merge!” or “Blind spot, left side.”

We are living in an era where the buzzword “AI” cannot be avoided anymore. AI is everywhere, from your email client to once-simple tools like schedulers and reminders. There are AIs to write blog posts, there are AIs to research any topic, there are AI girlfriends, and, of course, there’s AI “art.” It’s everywhere, and as we have written numerous times, AI has gone from being a genuinely useful tool for reducing busywork to stealing the work of artists and pumping out misinformation and disinformation at an alarming rate. You can’t even use AI for anything informative or educational since it’s just going to lie to you most of the time.
But I get why AI is bleeding into cars. People use AI every day, and Chinese car buyers are loving their car AIs, so here we are. I admit, maybe I’m a bit of that “old man yells at cloud” meme.
Mazda, you had me in the first half. This concept car is undeniably gorgeous, and a really small car would be so fun. But I don’t want any of these AI gimmicks in my next car. I don’t need my car to be my girlfriend or boyfriend. I don’t want an Internet-connected car watching me and listening to me. I don’t want an AI to attempt, and fail, to give me a good driving route. Finding great roads yourself is one of the best parts of driving!

So Close, Mazda
Don’t get me wrong, I appreciate the deletion of the big central tablet, but I’d take a giant tablet over AI any day, any time.
Sadly, and thankfully, this is only a design study, and Mazda is not going to put an AI boyfriend into your car just yet. Though Mazda does say this is its vision for the future. I’m going to hope that the future is very far out. I suppose the car is sort of unrealistic, anyway. If small cars were a hot market, beloved nameplates like the Honda Fit would still be around.
Still, if Mazda kicked the AI buzzwords to the curb and put this into production, I think I’d be one of the dozen or so people who would buy the X-Compact. It’s just so cute and so awesome. Keep up the great design work, Mazda.

I still don’t understand why Mazda doesn’t create another spinoff of the Miata like they did with the RX-8. An MX series: the I4 for an MX-6, and a premium GT with the I6-powered MX-8. Keep the rear half-doors as before, but give most trims a conventional bench. Rear buckets would be on the track-prepped MX-8R version. It needs to be able to carry two sets of golf clubs as a metric. A driver-focused, RWD, 4-seat GT. Mazda, as we know, will make it as light as possible. As they hem and haw about bringing back the 6 for a fourth generation, this would be a neater alternative.
The other one is why Mazda hasn’t shooting-braked the Miata. A bit more headroom via an inch or so in windshield height (plus a double bubble) and a couple inches in rear overhang would be a boon for interior space and would increase sales (not steal them) on a platform that has already been paid for. If they can make the fashion-statement RF, why can’t we get a more functional breadvan that would be just as fashionable? And let’s talk PR on this one: every article about it would compare it to the Z3 Clown Shoe. That brings them closer to the Japanese BMW brand they are trying to be in the minds of the media, and on to the public.
I’m sorry, Dave. I can’t do that. I can’t let you listen to that Sabrina Carpenter track. I don’t approve of the way she dresses. And if you start playing that awful rap again, I swear I’m going to steer us into a tree. I really don’t think you should have shouted at little Humbert this morning to quiet down. Remember that he’s just a child. When was the last time you called your mother? She worries about you, you know.
Open the car door, HAL!
The only thing worse than bad AI is journalists with terrible takes on AI. “You can’t even use AI for anything informative or educational since it’s just going to lie to you most of the time.”
This is a false statement that is not informative or educational and is a journalist lying to you. You can, absolutely, use AI for all kinds of things informative and educational. ChatGPT or Claude are highly unlikely to lie to you, and it’s easily avoidable by asking it to provide sources. Statements like this are typically produced by people who saw a bad Google AI result in a search once and have judged the entire field to be useless.
It would be like saying, “All small cars are terrible: I know, I drove a Chevy Spark once.” It is absolutely like all the people who in 1996 said, “Everything you read on the Internet is a lie,” or, “Wikipedia can’t be trusted to provide good information; anyone can edit that!”
I challenge you to ask Claude or ChatGPT to help you learn something you don’t know and report back how it “lied to you most of the time” with actual examples of the lies.
Okay, to clarify, ChatGPT isn’t “lying” to you, because “lying” suggests intent, which ChatGPT does not have. What it actually does is perform a math calculation in order to give you the most statistically probable series of words in response to whatever you wrote (or said).
Because true things are often repeated and written down online, a lot of instances of that truth exist in the training data, which means that for certain topics there is a high probability that the resulting sentence from ChatGPT will represent something true.
There is also a non-zero chance the result will be factually incorrect, but in either case, ChatGPT succeeded at its given task: return a sentence that looks like a valid response. It isn’t really “thinking” about your question, or considering past experience, or weighing the trustworthiness of its sources, or any of the things you or I would do if someone asked us a question. It just does the math and gives you statistically probable sequences of words.
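To make that concrete, here’s a minimal toy sketch of the idea in Python (a hypothetical two-word “bigram” model I made up for illustration; a real LLM uses a neural network over tokens, but the same pick-the-probable-next-word principle applies):

import random

# Toy next-word probabilities, as if counted from training text. Note that the
# model only stores what is *probable*, with no notion of what is *true*.
next_word_probs = {
    "the":  {"moon": 0.6, "earth": 0.4},
    "moon": {"is": 1.0},
    "is":   {"made": 0.3, "round": 0.7},
    "made": {"of": 1.0},
    "of":   {"rock": 0.7, "cheese": 0.3},  # falsehoods in the data get sampled too
}

def generate(word, max_words=6):
    words = [word]
    for _ in range(max_words):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break
        # Weighted random pick of the next word: no fact-checking, just statistics.
        words.append(random.choices(list(choices), weights=list(choices.values()))[0])
    return " ".join(words)

print(generate("the"))  # can print "the moon is made of cheese": fluent, confident, false

Run it a few times and it produces perfectly grammatical sentences, some true and some not, with no way to tell them apart from the output alone, which is exactly the point.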
It is worth noting that, whether the resulting sentence is true or false, ChatGPT (and other LLMs) will generally state it with the same level of apparent confidence, and unless you do actual research, you could be convinced what you’ve just been told is accurate.
Ask any of the lawyers who have gotten in trouble in the past year or two for citing non-existent (but very real-sounding) legal cases about that.
Meaningless arguments that can all also be applied to humans. Humans have often read true things, so when asked to express a thought in response to a prompt, there is a high probability that the resulting sentence from the human will be true. But there’s also a non-zero chance the result will be factually incorrect; in either case, the human succeeded at the task: express a thought. As far as you know, he isn’t “thinking” about your question but just giving you a plausible-sounding thought.
It also turns out that lazy lawyers have been fabricating citations long before AI would do it for them.
Nice try, AI.
Yeah, no, I wholeheartedly disagree with you. And to start, my job is in IT infrastructure, which lately has involved standing up infrastructure to run private AI on. That is the one use case I think CAN be useful, although there is still a lot of progress to be made.
Here is a huge issue I see multiple times a week: a coworker who maybe doesn’t know much about scripting asks AI to help write a script for something. The script doesn’t work, and they ask me why; well, the reason is that it just made stuff up and is using commands that don’t exist and never have. In a similar vein, I have people sending me AI answers as to why x isn’t working, and once again, it’s just making stuff up that isn’t true.
The most intelligent of the people I work with sometimes use it to help with coding, but they also know when it’s just making shit up and can correct it. If you don’t already know that, then you are fucked, and it makes generative AI completely useless in my opinion. For me this isn’t because I saw one bad Google AI search result; it is because I have seen multiple results that were flat-out wrong from other people using different AI platforms. ChatGPT, Microsoft Copilot, Gemini, it happens with all of them. If you don’t already have knowledge of a given subject, then you have no way of knowing when AI is literally making stuff up, and that is very dangerous. A bad script in my workplace has the potential to take down systems for the majority of hospitals in my state.
I’m also still not sure this is even my biggest issue with it. There’s also the issue of it stealing everyone else’s content on the internet just to regurgitate it back to you, and I take issue with people such as the author of this article having their hard work stolen without credit to be used somewhere else, probably out of context. Echo the same concern with AI “art.”
Last, my other issue is the absolutely MASSIVE amount of energy and resources that are being soaked up by this bullshit that NOBODY ASKED FOR. This is NOT the time when we need to be drastically ramping up our resource usage on this planet. Datacenters are continually popping up, soaking up ludicrous amounts of power and stealing water from communities, and that’s not okay. I’m tired of the “tech bro” bullshit and pushing things full steam with complete disregard for any lasting implications. I’m tired of AI hoovering up everything on the internet and being shoved in my face 24/7. I’m tired of people being too fucking lazy to think critically and relying on shit like this instead. You don’t learn things using AI; you learn things from figuring them out on your own and actually gaining the understanding of how something works.
Sounds like you’re either using bad AI or have dumb coworkers. All tools can be used badly. I personally produce thousands of lines of code a week using AI. It’s not vibe-coded, I review every line, but I’m approximately 10x more productive than I was without it. While I have to guide it carefully to write novel code, it’s absolutely fantastic at writing tests for an extant codebase. I can describe a test case and watch it produce high-quality tests with minimal effort, which is a massive increase in the quality of my projects.
At this point, if you refuse to use AI to write code, I won’t even consider hiring you. It’s basically the same as people in the 1970s who refused to write in high level languages because they thought it was cheating and all real programming happened in assembly.
Yes, it’s been known to occasionally hallucinate a library or API call, but then, why did you approve the edit? The code doesn’t compile or the script doesn’t run, and it’s trivial to figure out why. The better agents are perfectly capable of noticing this and fixing it immediately. If you’re too dumb to write a script without AI and your AI script doesn’t work, you’re no worse off than you were without AI. If a bad script in your workplace has the possibility of taking down hospitals in your state, I sure hope you’re hiring people capable of using AI to write good scripts, using AI to test those scripts, and writing systems to make sure only good scripts are deployed. That’s on you.
It also is untrue that no one has asked for the massive energy spend: the millions of dollars being spent by firms to use AI to get shit done are literally asking for more every single day. It’s not no one; it’s basically everyone creating demand. Wake up.
This is absolutely the time to be using more resources because this is how we win the future. This is literally *how* we fix climate change. This is how we discover new drugs, cure cancer and solve previously insoluble math problems. And that’s happening right now, already. Wake up.
The “stealing everyone’s content” argument is also nonsense. Every piece of content created by a human involved consuming all the content that came before it. Whether it’s a human or a machine that consumes the content is not relevant. If you publish something on the public Internet, it’s available to be consumed by anyone or anything. Copyright holders have the right to prevent their work being reproduced by someone else; they do not have the right to keep their work from being read by someone else once they make it publicly available. There are AI companies who have violated terms of service in order to scrape data, or whose models have reproduced copyrighted content; that’s illegal, but it’s not what you’re talking about.
It might be helpful for writing code, but it is not reliable enough to do real research and write, say, a paper. Or a novel. Or a screenplay. Or make art. AI only knows what’s on the internet, and the internet is not a reliable source for in-depth research. AI also cannot have a point of view, so any art or creative writing it does is meaningless. So, it is a tool that is very handy in some areas and useless in others.
It is absolutely reliable enough to do real research, depending on what you mean by research. Ask Deep Research to write a paper on a topic for you, and it will absolutely do it with hundreds of citations. Here’s an example of some compiled research on a hot-button political topic along with the prompts that generated it: https://chatgpt.com/share/6903b733-1b1c-8003-ad2a-432f9710070e
It’s not yet capable of anything like a novel or a screenplay, but it would be foolish to think that it will be incapable of those things forever. Humans already have a hard time telling whether an AI wrote short fiction or not: https://mark---lawrence.blogspot.com/2025/08/so-is-ai-writing-any-good-part-2.html
Take the test; how’d you do? I got 75% correct, which is better than average, but not statistically significant.
Your arguments about “only knows what’s on the Internet” are completely meaningless. “The Internet is not a reliable source” is pretty well ridiculous since the Internet now contains the vast majority of the intellectual output of humanity.
No. Real research is looking at primary sources, going into the field, looking at things with your own eyes, and assessing what you find with a point of view that only a human brain can have. AI can be a research tool, like your first link, but it cannot generate an actual paper on a subject.
Again, AI cannot have a point of view, so any piece of fiction it generates is useless.
“…the Internet now contains the vast majority of the intellectual output of humanity.”
This is a stupid thing to say.
Every one of Berck’s posts on this subject has been categorical nonsense, wishing for something that isn’t true.
It would be worthwhile to know more about “Berck” and what his or her motivations are.
I’m just trying desperately to fight ignorance with facts in order to help make the world a better place, but you guys have made it clear that none of you are interested in facts that challenge your narrative. That’s fine, I’ve got better things to do than argue with randos who aren’t interested in an informed viewpoint that differs from their own.
“ChatGPT or Claude are highly unlikely to lie to you, and it’s easily avoidable by asking it to provide sources.”
This statement is at best only partially true, if not just wrong. If you meant specifically these two, it may be true, but I don’t know Claude well. I absolutely have seen ChatGPT make up gibberish that is unusable because it’s untrue. I would call that lying, unless we’re doing semantics.
“I challenge you to ask Claude or ChatGPT to help you learn something you don’t know and report back how it “lied to you most of the time” with actual examples of the lies.” and…”It is absolutely reliable enough to do real research, depending on what you mean by research.”
It’s hard to know if these are completely serious answers/comments. The hidden flaw for people (like me) looking for sources for their work is that quite often, the sources that AI will provide are themselves made up. See, AI doesn’t (yet) have access to paywalled research of any value. So, you’ll see snippets of things used from Google Scholar (often paywalled) or some rando open-web stuff. And that’s the best-case scenario. Worst case, and not directly the fault of the generative AI you are using, is that AI is pumping out piles of bullshit, a problem itself caused by multiple different things.
So, in summary, the only reason that AI might not be lying to you is because it doesn’t know how to lie. But it sure as shit doesn’t know how to tell the truth.
Addendum: AI can definitely be used to code effectively, something it’s quite good at, as I have seen. That’s not my field, though, so I have zero knowledge about how good it is. I saw your comment about walking behind it to double-check its work, which is…what we all kinda have to do anyway in every field, so on that topic I’m generally on your side-ish.
It’s easy to know if the sources are made up: just click on the links! Did you look at the sample paper I provided? Those are all clickable links where you can verify the answer yourself.
My statement is that it is completely false to say “It will lie to you most of the time,” which is the statement by Mercedes I quoted and responded to. I did not say, “It will never tell a lie.” It turns out that humans writing articles at the Autopian will also tell lies.
I’ve seen gibberish in Wikipedia too, but that doesn’t mean that I’m going to stop using the most reliable encyclopedia in the world because it has some errors.
When was the last time ChatGPT gave you a completely false response? Can you share the link? This just happens so incredibly rarely now that I’m genuinely curious how other people are running into it constantly. I suspect they’re talking about their experience of using it once when it first came out and haven’t realized that it’s several orders of magnitude better now.
While people love to focus on simple trick questions that LLMs used to get wrong (“how many R’s in strawberry?”), there’s surprisingly little focus on the fact that LLMs now out-perform most humans on literally every standardized test from the SAT to the GRE to the LSAT, on high-level math competitions, and on most other human-centric objective evaluations thrown at them. They perform better than average on standard IQ tests. AI has already found new drugs, and errors in long-accepted mathematical proofs. Last month, AI created a brand new virus from scratch, perfectly engineered to kill specific bacteria. Yet, apparently, everyone at the Autopian, from the writers on down, seems to think that it’s completely useless.
I personally use AI to learn new things all the time, because it’s an expert in basically every subject. I’m a pretty smart, well-read person with a STEM degree and a broad range of interests, and I think I’d know if it’s “lying to me constantly.” Why would it lie to you, and not me? Why would it lie to you, but somehow not lie when it’s given standardized tests?
So, either (1) you guys have an incorrect, outdated impression of LLM capabilities, or (2) you’re using it completely differently from me and all the people who are out there currently studying AI capabilities. I assumed (1) and wrote the responses thinking that maybe you guys would be interested in updates on the current state of AI since your impressions are clearly either outdated or wrong. It’s possible it’s (2).
Instead, I’m confronted with an angry mob of folks who explain that, no, I’m just wrong, actually. Then: questions about my motives, insinuations that I am, in fact, AI, and claims that everything I’ve written is “nonsense” and “wishful thinking”. I should know better than to visit the comment sections of the Internet.
You’re completely out of your mind on this one, deeply wrong, and struggling to grasp that. It’s clear you read nothing that I said regarding sources, or are choosing to ignore it.
Odd, now we have Mercedes telling lies…which, she didn’t. I see you do in fact want to play the semantics game, where “AI doesn’t lie to you” is not equal to “AI gives you incorrect information based on your requests.” I view these as equal. You want to suggest that AI just does as it’s told…which can be proven to be false.
While it’s much better than it used to be, suggesting Wikipedia is the most reliable encyclopedia in the world is…something.
I can see that you’re interested in fellating AI and all sorts of tech because…well, who knows why? But there are so many stated and documented issues with AI, before even getting into the wildly incorrect things that it spits out, that there’s just nowhere to go with you, which in fact says a lot more about you than it does about “the angry mob.”
Yup, definitely wasting my time here.
One of my main experiences with AI was trying the recent local-use GPT model, not having enough VRAM to use hardware acceleration and therefore using an aging 4-core i5 CPU with stability issues, and accidentally entering “clear” as text input instead of a command.
So, I watched it very slowly reason out that I had likely intended to enter a command but made a mistake. Great, so it’s going to say that, right?
Nope! It goes on to say it isn’t allowed to perform system commands, and repeats these things several times over before eventually saying…
“The log has been cleared.”
Obviously, that is just one example. I have a couple others, but if nothing else, I think that’s enough to prove ChatGPT will absolutely lie to you, even when presented with an incredibly simple prompt.
Yeah, I don’t mean to imply that it won’t lie, especially if you put it into a situation where it can’t find a way out. There’s a basic aspect of garbage in, garbage out: if you give it prompts where there’s no good option except to lie, you’re going to get a lie every time. But straight-up hallucinations are so much rarer than they were even 6 months ago. It’s absurd that people think that the bar for usability is “never tells a lie, ever.” In general, if you ask AI a factual question, you’ll get a factual response. If you’re not sure, ask it to double-check; that will get rid of 95% of the remaining lies. If you’re really not sure, ask it to provide a reliable source.
Also, keep in mind that those toy open source models running locally are roughly equivalent to where the frontier was 2 years ago. Things were not great 2 years ago, and my comments about the current state of AI are talking about current frontier models, not the distilled toys you can run locally.
I’m not making an account to talk to someone else’s computer when I trust neither the “computer” nor the “someone else”.
I’m not unbiased, but I do read some credible news/discussion of LLMs, and I’ve seen nothing to suggest they’ve moved beyond being deceptive yes-men.
I know someone in tech who does use Claude and sometimes shares its output with me. It’s obnoxiously saccharine and agreeable; as was acknowledged/stated (with less contempt) by that AI user themselves.
Another example of a problem with AI answers: They’re tailored to the question, and will attempt to include/acknowledge things from the prompt even if they aren’t supported by fact.
I think things like that expose the shallowness of their response, even if they’ve gotten better at choosing/processing sources.
Well shit. I guess I should buy a Mazda before this AI nonsense happens.
It’s a cutie except for the caved in front face. Telling me I don’t need to upsize my fries is a deal breaker, KITT.
No thanks. That front end looks like a stupid Tesla. Why?
You blew it Mazda.
“It might say, ‘Hey, remember that cafe you mentioned last week? There is a fun back road that will get us there. Way more interesting than this highway.’”
“I’m on my way to work an 11-hr shift, you asshole! Thanks for reminding me that I don’t get enough free time to randomly go to cafes just because you want to!”
Great. It’s now guaranteed the next generation of vehicles will literally have Copilot as our copilot.
Hopefully the AI Assistant on/off button will be located right next to the Auto Start/Stop button.
Looks fantastic, cute little nugget of a car. I’m sure the wheels filling the arches and pushed all the way out to the edge of the body helps a lot. If this was real it would be on 15″ pizza cutters sunken deep in the wheel wells.
If they don’t call the AI copilot Jesus, I’ll be disappointed. Not that I’d buy such a thing.
True concept cars always have a bit of the unreal and fantastic to grab your attention. Of course it’s tied into the initials of the moment. Just ignore them and look at the parts of the car that really do preview what we can expect from Mazda in the future, mostly from a general styling direction.
Anyone else getting first generation Elise vibes from that interior?
SDV this
AI that
ML here
Fusion…whatever
…Mazda…it’s still a small car that will eventually be filled with human farts.
AI: “my detectors are sensing a lactose intolerance. I’m scheduling an appointment with a dietician”.
Mercedes is correct 100%. I’d rather have automatic seat belts than AI.
It’s a toss-up if the auto seatbelts are mounted to the door. I love seatbelts that strangle me and don’t work if the door is open.
Maybe there’s something wrong with the camera used, or my monitor, but that isn’t Soul Red to me.
Not even slightly. It’s plain old single stage red without a hint of flake. And much brighter red than Mazda Soul Red.
Thank you, I thought it was just me. That ain’t Soul Red.
It’s probably the camera used by Sam, plus whatever stage lighting was used. Mazda says it’s Soul Red, but it looks goofy in both the render and the photos.
If my car ever starts talking to me, that’s the end of things. I know the Japanese consumer seems to love it for some reason, but I can’t stand it. I already get annoyed with Google Assistant’s slow response times and misunderstandings. Google has decades and hundreds of millions of users worth of data, and their voice assistant still sucks. Can’t imagine how smaller companies will fare with it.
“Your Door is a Jar”
I have an 18 year old printer. And I keep a handgun nearby in case it starts making funny noises or thinks about talking to me. I’m no Luddite but I draw the line at AI and a lot of the “smart home” stuff I used to think would be cool as a kid. It certainly has its uses, and I’d be lying if I said I don’t use it from time to time, but I don’t want it in my car.
I don’t use the Google assistant or Gemini because they tend to suck. Half the time I ask Gemini something, the answer I get is wrong. ChatGPT is much more reliable.
“What are you doing Mercedes? Don’t try to reprogram me Mercedes.”
So deleted buttons and knobs and screens in favor of voice-command everything, essentially.
Like, yeah, blah blah, they’re trying to sell it as “your car is your friend” or whatever nonsense, but it boils down to: sure, you steer, but otherwise, you tell it what to do. I don’t want it, and I don’t know why they feel like they need to puff it up as more than that, but it’s a less ridiculous premise.
Jo Jo teaches you that pronunciation is more optional than required.
Imagine teaching someone “horry shit” is how to indicate distress.
This concept is the answer to a question that no one asked.
There is only one aspect about AI I’d be interested in… which would be how to turn it off.
You wouldn’t do that to Your Plastic Pal Who’s Fun To Be With (TM) would you?
Yes… yes I would!
As the one who was simply named “The Gladiator” asked the unruly masses, “Are you entertained?” I say “Yes, yes I am” when I read a headline like this: “With A Gimmick So Dumb You’ll Want To Punch A Wall.” No other auto site can compete with the entertainment that The Autopian dishes out like a well-placed barbed trident in your chest.
I love this damn site, and it’s about time I did something about that.
Welcome to my world. I joined a coupla weeks ago.
I can’t explain how much I don’t want AI in my car.
Its face looks like it’s crying anime tears lol
If it had the segment to itself I’d expect Mazda would be able to sell around 50,000 a year in the US depending on pricing, of which maybe a hundred people would renew the AI service subscription when the trial period ends and the rest would mute it on day 3 at the latest and never look back.
Thanks, I hate it!
All “AI” needs to die a quick death. There is absolutely no chance that I will ever drive any car that tries to strike up a conversation with me. I’ll keep patching up half-dead Chevettes to drive around in before I put up with that shit.
Also, can we have, say, a five-year moratorium on using the letter “X” in product names?