It was over two years ago that I first wrote about the emergence of the generative image AI Midjourney as it applied to the car design process. My thoughts back then were that it was nothing more than a picture slop machine for the lazy, cheap and talentless. And so it has proved. I’ve been blocked by more than one automotive company on social media for calling out their use of AI to create poor images when they would have been better served by commissioning an actual human to do real, paid, creative work.
I want to state for the record up front that I’m not a luddite. I’ll be 52 in a couple of months, so I clearly remember life before a computer in every home and the invention of the internet. I’ve witnessed mobile phones progressing from a bonce-frying handset attached to a car battery that goes flat in two hours to a slim metal and glass rectangle that is now an indispensable part of our lives. I buy nearly all my music digitally because if I didn’t I’d be living in a house made entirely of CD jewel cases. Technology in vehicles has given us better economy, lower emissions, more power, fewer breakdowns and safer structures. I’m on board with these quality-of-life improvements.


But technology and the investor bros wait for no one as they get out of bed at 4am to begin their day by writing bullshit self-affirmations in an expensive notebook and dunking their inhuman faces into a bowl of ice water. Then it’s on to enriching their lives and enshittifying ours by forcing AI into every aspect of the world around us. I can’t even open a PDF in Acrobat now without an ‘AI Assistant’ icon popping up in the corner asking me if I want it to summarize the document for me. No fucking thank you, because I have a somewhat functioning brain that has learnt to read and comprehend. Did we really kill Clippy only for him to return, Terminator-like, stronger and smarter, twenty years later?
AI Whack-A-Mole
Despite having Siri turned off as far as possible on my iPhone, it still popped up uninvited a couple of times on a road trip last week. Why do I have Siri turned off? Because every time I’ve tried to use it for voice control of my phone while driving it’s been useless to the point of being extremely annoying – unable to do something as simple as playing the correct music track. To avenge Clippy, Microsoft now forces users of Windows 11 to have its AI assistant Copilot on their machines. And because of the way Windows structures its updates to be mandatory, there’s nothing you or I can do about it. Everything is AI this, AI that – are we now so lazy as a species that thinking and understanding are too much effort? I feel like I’m playing AI whack-a-mole.
My slightly more serious thought two years ago about how AI might end up infecting the car design process was that, like life, technology would find a way and it wouldn’t be what we were expecting. An implementation I considered was something like a cloud-based tool for solving 3D modeling patch layout problems, which could then be rebuilt by a human CAD jockey. What we’re actually getting, and what is being heavily promoted across car design social media, is AI technology to turn your quick sketches into full renders: creating fully detailed and colorized images from your linework using visual and word-based prompting, and then using those for further design iterations. But before I get into examining how these new tools work and whether they are any use or not, I want to explain how designers currently operate, so you can understand exactly what AI might contribute to the process – or, if you want to sound like a professional designer, their workflow.
What normally happens at the start of a new project is a manager will gather all the keen young designers (and one older, cantankerous, black-clad one) together and tell them what they want. It might just be a facelift, the next version of a current car or something completely new. For a car that’s currently in production this will take place about a year or so after launch, as it gives the marketing department time to solicit feedback from customers (and crucially non-customers). Does the next version need to be more aggressive? Sportier? Does it need to tie into the rest of the range more closely? Is it too close to another car and in need of more visual separation? It’s not meant to be too prescriptive, so the designers still have a lot of freedom to implement their own thoughts and ideas. There will be a deadline – say the following week – when the designers will be expected to present their work and argue convincingly for their design in front of the chief.
Why Photoshop Is Such An Important Tool
Some designers like to sketch straight into Photoshop using a Wacom graphics tablet. My preference was always to start on paper using just the trusty Bic Cristal ballpoint. Working with a pen and paper forces you to commit and keeps things nice and simple when you are just trying to get ideas out. I did this so I wouldn’t be tempted to start fancying up my sketches too much at this early stage. It’s too easy when using Photoshop to delete things, so you end up redoing stuff over and over until you’re completely satisfied. Then after a few days, when I had something I liked, I would scan the sketches into Photoshop to be worked up into full color renders complete with details like wheels, light graphics, trim pieces and so on that would be suitable for presentation.
The benefit of working this way is that you don’t waste time on something you are not totally committed to. An analogy I like to use is that you wouldn’t write an essay or a short story without a few notes to guide you. Movie directors use storyboards to figure out shots and story beats before hitting the big red record button on the camera. It doesn’t matter if you are doing quick line sketches on paper or in Photoshop; keeping it simple allows you to generate loads of ideas so you can understand what works and fits the brief. What’s more, simple sketches can be incredibly expressive on their own. Massimo (Frascella) was very keen on seeing our initial loose, scrappy ten-minute thumbnails because he wanted to see our ‘working out’. It’s always possible the chief will like something the designer themselves doesn’t. This is another reason I always encourage my students to have lots of nice sketches in their portfolios – not just splashy full color Photoshop renders – because it lets me see their thinking and how they arrived at their preferred design.
If you are not familiar with Photoshop, put simply it uses a system of layers. Everything on a layer is editable without affecting the layers above or below it unless you tell it to. What this means is you start off with the scan of your ballpoint sketch and then build a complete image using layers on top of it. I usually block out the main parts of a car using color first – dark gray for the wheels, body color for the body, a lighter gray for the glazing, darker for the interior and so on. Once this is done I group those layers into a folder and create a new group for the core shading – that is, very simple lighter and darker areas that define the shape of the car. I repeat this with another group for the shadows and highlights, and then a final group for the details: light graphics, wheels (which I usually just grab from stock), sidewall graphics for the tires, badges and logos and so on. Finally I create a group of layers at the bottom of the stack for the background.

This is a very simplified description but it gives you an idea. A basic render might end up with 20-25 layers. An involved and complicated one with lots of highlights and shadows might run to 50 or 60. But importantly, all the layers remain editable. I once did a front graphic that Massimo really liked – he asked me to do another ten or so versions of it, and it was an afternoon’s work to churn out ten otherwise identical versions of my design, each with a different front end. Pay attention to this concept of editable layers and how renders are built up, because it’s one of the areas where AI render generation is lacking.
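If it helps to see that layer-stack idea expressed another way, here’s a toy sketch in Python using the Pillow imaging library – purely illustrative, with hypothetical file names standing in for the scanned sketch and each layer group – that composites a finished render from a stack of separate, individually editable images:

from PIL import Image

def load_layer(path, size):
    # Each layer is RGBA, so its transparent areas let the layers below show through
    return Image.open(path).convert("RGBA").resize(size)

size = (1920, 1080)
canvas = Image.new("RGBA", size, (255, 255, 255, 255))

# Bottom to top, mirroring the groups described above (file names are made up)
layer_stack = [
    "background.png",          # background group at the bottom of the stack
    "sketch_scan.png",         # the scanned ballpoint linework
    "color_blockout.png",      # wheels, body color, glazing, interior blocked in
    "core_shading.png",        # simple lighter and darker areas defining the form
    "shadows_highlights.png",  # shadow and highlight group
    "details.png",             # light graphics, wheels, tire sidewalls, badges
]

for path in layer_stack:
    canvas = Image.alpha_composite(canvas, load_layer(path, size))

canvas.convert("RGB").save("render_v1.jpg")

# Ten front-end variants means swapping one layer and re-running -
# the other twenty-odd layers stay untouched.

The last comment is the whole point of the analogy: because every element lives on its own layer, changing one detail never destroys the rest of the image – which, as we’ll see, is exactly the property a flattened AI-generated render gives up.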



It’s important to understand that the methodology here is the same as it has always been since Harley Earl set up GM’s ‘Art and Colour’ section in the twenties. Only the tools have changed. With pencil, pastel and paper sketches you would draw ten different front ends, cut them out and then mount them over the original complete image. Working digitally is more flexible, quicker and cleaner because you’re not faffing about with scalpels and spray mount. Now you have a basic grasp of how the sketching and rendering part of the design process works, let’s dive into the murky world of AI render generation and try to understand how designers might benefit from it.
What Vizcom Is And How It Works
The most prominent of the current AI design tools is Vizcom (from ‘viscom’, short for VISual COMmunication). Vizcom is a web-based AI tool that, based on the pitch, can instantly turn your quick sketch into a fully realized, professional-standard render, without having to grapple with all that tedious Photoshop work. I’ll get into the background of the company itself in a bit, but it’s enough to know that this is one of the main angel-investor-backed AI start-ups looking to establish itself in the industrial design field. It pops up a lot on my Instagram feed. Freelance car designer and online tutor Berk Kaplan promotes it on his YouTube channel, and Vizcom have also hired well known concept artist and car designer Scott Robertson to feature in their videos. I signed up for the free, feature-limited version, and as in the previous article I’ll use my EV Hot Rod project as a baseline because I knew what I wanted when I designed that car, so it will provide a useful basis for comparison. Below is the sketch I uploaded into Vizcom.

You start off by uploading your sketch into a sort of simplistic Photoshop-type webpage, although there are basic pen tools if you want to draw straight into the software. There’s a layers panel on the left, some editing tools at the top, and on the right are the settings for image generation. By altering the generation settings you can quickly produce rendered images of your initial sketch and add them to the layers panel. Once you have a selection of rendered iterations on separate layers, you then output them to the project whiteboard, which Vizcom calls the workbench. The idea is they are all in one place for refinement and collaboration. This, as I frustratingly discovered, is fucking stupid. The problem is there is no way to go back to the original image generation editor you just left, with all your images on their individual layers. You can only double click on each individual image to edit or refine them separately. There’s no way to easily compare your sketch with the AI renders by simply turning the layers on and off. Once you leave that first screen, that ability is lost. Let’s try again.

The Results
This time I uploaded my pen sketch into the workbench, double clicked it to enter the editor and typed ‘matt black hot rod’ into the prompt box. Underneath this, you have three further option boxes. The first is for the palette and sets the style of render you want. There are a few options here for different looks to your generated images: you can go for something more realistic, a flat cel-shaded look or, more importantly for our purposes, specific automotive exterior and interior rendering styles. There’s a slider to tell Vizcom how much the chosen style should influence the final image – I left it at 100%.
Here’s the result of using the Vizcom General shader. As you can see it has stuck closely to my sketch in terms of coloring inside the lines, but everything else is shit. Both sides of the front suspension are different, the lower right-hand wishbone appears to contain a headlight, the wheels are shit, and it’s grown a television antenna from the rear fender. And it has totally ignored my prompt.
I tried again, using the automotive exterior shader. Vizcom has interpreted the linework better, but it thinks the tires are part of the bodywork as it has rendered them in the same way, instead of black.
This is using the Cyber Cel flat shader, a style meant to replicate high-contrast concept art with simple color changes to illustrate lighter and darker areas. We have half a windshield, odd orange pieces added at random and the roof release handles in turquoise. Not what I would have chosen. At least it’s paying attention to my text prompt this time.
This was created using no verbal prompts and the Volume Render shader. My hot rod appears to have gained organic components I didn’t know about.
This time I set the shader to Realistic Product. This is probably the best result I got, but it still needs a lot of work. Again the suspension arms are different on each side, the wheels are far too bitty and the rear left-hand suspension arm is now invisible.
At the bottom there’s a drawing influence slider, which controls how closely Vizcom sticks to your original drawing. If you set it to 100% it should follow your linework as faithfully as possible. The lower you set the influence, the more freedom Vizcom has to go outside your sketch and do its own thing. So I turned it down to 0% to see what happened.
Well that’s not really a hot rod at all.
What this shows is that when Vizcom doesn’t have to follow your drawing, it can create something that is much more convincing as a car. That’s not surprising, because it’s trained on images of cars. But when it has to render something original that is outside its training data set, it struggles to make it realistic.
These are pretty much all unusable garbage as you can see. But there is also the option to add a reference image. I can’t show the images I used for copyright reasons (because I just grabbed them off the internet as a student and we’re slightly more law-abiding than that here) but one was a black and white image of an old Ford and one was a monotone faceted sculpture of a hot rod. Here’s what came out.
Absolute visual bobbins.
Here is one of my original renders that I did in Photoshop at the time – and yes, I know it’s a slightly different view, but that’s all I could find seeing as it was done over a decade and two laptops ago. The key to using any tool effectively is proficiency and understanding how to get the best from it. However, even in one of the videos I watched, Vizcom was rendering tires in painted metal, consistently couldn’t get wheels right and gave bodywork all sorts of weird non-symmetrical openings. I spent most of Wednesday getting to grips with Vizcom and it does have simple tools that allow you to isolate and re-generate parts of an image. So with a bit of time and effort it would be possible to get slightly better results than I managed in service of writing this piece. But nothing I got was remotely usable – they all need extensive editing. And remember when I described how I used Photoshop by building up an image in lots of layers? Editing gets a lot more difficult when everything is flattened into one layer. The Vizcom tutorial videos I watched for research specifically mentioned this kind of fixing-up as part of using it effectively, at which point we must ask ourselves: what is the point?
Trying To Shortcut The Hard Work Is A Fool’s Errand
I understand the temptation to offload the creation of renders to AI. When I was studying, rendering was one of the hardest things I had to learn. I bought multiple books and video tutorials, including some by the previously mentioned Scott Robertson. As my own techniques improved I always thought he overly complicated things and found his methods too prescriptive, but the reason it’s so difficult is that cars are extremely complicated shapes with a lot of individual elements and materials. It took me years of practice to be able to replicate the paintwork of a car with realistic reflections, highlights and shadows; to understand how light affects matt and gloss surfaces; and to give the impression of transparent surfaces.
The Photoshop keyboard shortcuts are now hardwired into my brain to such an extent that using different image software, like Sketchbook on my iPad, creates a dissonance in my head. I don’t often have the time or need to do Photoshop renders currently so I’m a bit rusty, but when I was doing them regularly, of a car I was familiar with like the Defender, I could knock out a compelling image in probably three or four hours. Which, to reiterate what I said at the top of the piece, is why you would only do them when you had a design you were happy with. For figuring out your design they are simply too time consuming.
Midjourney, the image generator I played around with two years ago, worked off a Discord server (like the one our lovely members have access to so they can chat to us. So cough up). You gave an AI chatbot some typed prompts, and a few seconds later it would give you back a few images. The results were less than impressive and very unpredictable. This is how I described it in that original piece:
“But it’s a scattergun approach—you never know what you’re going to get. In a worst-case “let’s sack all the designers” scenario you could imagine the correct prompts could become a closely guarded corporate secret like the formula to Coca-Cola, but it’s not likely because the results are too variable, too subject to the unseen whims of lines of code masquerading as something it isn’t.”
Vizcom and the new generation of AI image creation tools are no different. They’ve been given a more usable interface, and because the images are based on one of your own sketches you have more control over the starting point, but the dice-roll of what you get out remains. I learnt to render by studying other people’s work and attempting to copy it, so I could understand the techniques they used. Once I knew what I was doing I could apply that knowledge of brushes, vector masks, adjustment layers and other Photoshop tools to my own images, with a personal style naturally growing out of that learning process. If on a basic level you do not understand how to create realistic images for presentation, how are you going to know if Vizcom has produced something correct or not?
It’s Not Just About Making Nice Pictures
Critics or tech bros will say you do not need to understand something to know if it’s good or not – the old ‘you don’t need to know how to make a film or write a piece of music to appreciate it’ argument. You don’t need an in-depth knowledge of arcane construction codes to enjoy a particularly good building. The flaw in this logic is that in those examples you are consuming, not creating. The purpose of nice rendered images is to sell your design to your manager and to give CAD modelers a reference to build a 3D version of your idea. If you don’t understand how surfaces interact with each other to create the shape of your design, how the hell are you going to sit next to an Alias or clay modeler and explain to them what you want? This is the issue with Vizcom. It doesn’t understand, because it doesn’t have the ability to.
Imagine if you had a robot butler. You’d ask it to make you pasta sauce for dinner. It would go and read a million different recipes, collect loads of different ingredients and throw them all in a big pot. The resulting reddish mess would resemble pasta sauce, but the robot is not capable of realizing that is only one part of making a tasty dinner. Preparation, selection of good ingredients, building up slowly from a tomato base and then refining through constant tasting are alien concepts to it – and far more important than just chucking shit in a pan and hoping for the best. You can’t easily separate the robot pasta slop back into its constituent ingredients to fix it, either. Yet this is how AI image generation currently works. All it does is scrape millions of car images and their paired text data and slop the average over the bones of your linework. You’ve then got to try and unpick and edit the result to correct the faults inherent in such a simplistic approach.
Unfortunate but Vizcom is built off Stable Diffusion, as stated by Vizcom CEO in public discord (dev team in blue). Vizcom claims to not allow use of artists names (unsure how realistic it is). Yet this is easily bypassed as models are still built and rely on artist’s work. pic.twitter.com/AX8vTLaZ5S
— Karla Ortiz (@kortizart) March 2, 2024
So while Vizcom is being sold and promoted as a time-saving tool, I’m not convinced it’s anything more than the latest Silicon Valley tech fad. The CEO, ironically, had a past career as a car designer, and now, as a co-founder, there’s been the usual Forbes puff piece detailing the rise from eating Costco hot dogs to funding rounds and a Mountain View headquarters. According to the linked Forbes piece, Enterprise users of Vizcom can use their own proprietary image sets, but more problematically Vizcom is trained and built on Stable Diffusion, an open-source deep-learning text-to-image AI that was itself trained on the LAION-5B dataset, created by crawling the web for images without usage rights. Getty Images is suing Stability AI in the UK courts this summer, but this isn’t the forum to discuss copyright issues and I am not a lawyer, so a reasonable question to ask is: how is this any different from a designer keeping a Pinterest board full of mood images and using them as inspiration?
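For the technically curious, here’s roughly what that sketch-plus-prompt pipeline looks like if you drive the underlying open source tooling yourself – a minimal, illustrative sketch using the Hugging Face diffusers library and a public Stable Diffusion checkpoint, not Vizcom’s actual code. The file names are hypothetical, and the img2img ‘strength’ parameter does roughly the job of the drawing influence slider, only inverted: the higher it goes, the less your linework matters.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a public Stable Diffusion checkpoint (illustrative choice of model)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The starting point is your own sketch, just as with Vizcom (hypothetical file name)
sketch = Image.open("hot_rod_sketch.png").convert("RGB").resize((768, 512))

# strength=0.0 would hand the sketch back essentially untouched;
# strength=1.0 ignores it entirely and lets the prompt and training data take over
result = pipe(
    prompt="matt black hot rod, automotive exterior render, studio lighting",
    image=sketch,
    strength=0.55,
    guidance_scale=7.5,
).images[0]

result.save("hot_rod_render.png")  # one flattened image out - no editable layers

Note what comes out the other end: a single flattened bitmap, which is exactly why everything then has to be unpicked and fixed in Photoshop afterwards.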


Both as a student and as a professional designer in an OEM studio, I deliberately made a point of not using images of other people’s cars on my mood boards. This wasn’t for copyright reasons, but because I didn’t want them to influence my work. I might (and in fact do) keep Pinterest boards of interesting vehicle sketches and renders I like, but I use these to learn and understand how they are done, and in some cases because I like the combination of colors, background and style and want to apply them to one of my own renders. I’m not copying the actual design. A good and conscientious designer might find a product detail they like and then reformat and transpose that idea into their own work. For instance, the interior of my hot rod had seats made of patches of rough black leather, bound together with big white stitches. Of course the inspiration for this came from Michelle Pfeiffer’s Catwoman costume in Batman Returns. I was inspired by a very specific aesthetic that was personal to me, subtly altered and adapted it, and used it in a unique way on a different object. This is something only a human designer can do. If my interior images and ideas were scraped into the data set Vizcom uses, they would just end up as a tiny part of the visual slurry piped onto someone else’s sketches. All the individuality, meaning and nuance would be lost in the mix of pixels. Vizcom cannot differentiate because it only looks at images, not the meaning and context behind them.
I can make Photoshop give me exactly what I want because I spent years learning the software and techniques required. I can change details and proportions of a render with a few flicks of the wrist across a Wacom tablet. I can illustrate an accurate bodyside because I have painstakingly made and saved my own set of brushes that I use to make highlights and shadows. I have actions set up to make creating wheels easier. Trying to shortcut all of this with AI feels like a solution in search of a problem, and not a particularly helpful one. Anything you create with Vizcom is going to need fixing in Photoshop anyway.
So you might as well put in the hard yards and learn how to render because like any digital tool Vizcom is not doing the creative work for you.
“Trying To Shortcut The Hard Work Is A Fool’s Errand”
Too right. As someone who has spent the past 29 years teaching at colleges and universities, this is the fundamental error with the current approach to “AI all the things in education”. I, too, am no Luddite. I’ve been teaching online since 2006, and I can’t tell you how many screaming matches I’ve had with faculty members who insisted that real teaching can only be done in a physical classroom (ideally with ivy on the walls). But teaching students to use AI tools for writing (or anything else) right out of the gate misses the entire point of education. If you don’t learn the basics, you can’t effectively use the advanced tools. I could give you so many examples of students failing to understand both a problem and how to solve it because they had only been trained to use a calculator or some other advanced tool and had no concept of what the calculations meant, but I won’t bore you with all that. By all means, teach students to use these new tools, but teach them the fundamentals first so they can understand the tools’ shortcomings and use them for whatever tasks they are appropriate for.
Clearly, Vizcom isn’t ready for prime-time and it doesn’t even take a professional designer to see that. Maybe it will be one day, but it will still be nothing more than a tool and, as we used to say in the military: “You have to be smarter than your tool to use it properly.”
Because I am me, I make stuff sometimes, not for a living, as will rapidly become clear. I draw stuff, freehand on paper, then move to the drawing board, all weights and rulers, until I think it will work. Because my workshop is older than me all the verniers are in thousandths. I have tried Photoshop, and I am rubbish at it; AI was less than helpful when casting phosphor bronze bearings. I am not a Luddite, I admire new stuff. When it works.
If you want to see some amazing work being done with AI, I suggest that you check out the Neural Viz channel on YouTube. It’s really good. It’s one of my favorite things right now.
Next time try using the SE/30. Way more hp than the SE.
That’s exactly it: these are the products of lazy, locust-like consumers that generate garbage software to steal and collage the work of others without context or understanding, they aren’t creatives and they aren’t builders. Garbage product from garbage people and this has the potential to be highly damaging to creative fields by making what they do appear to the Normals as little different than what a monkey slapping keys with a fresh turd could generate. They have no interest in the work it takes for mastery of anything, for the value of the process or craftsmanship, of being in a zone where you are only immersed in creating something, and of the connection one can feel. These techbro parasites are shallow, hollow creatures without any capacity to or interest in truly understanding the why, the how, or the purpose—just monetize, consume, dispose, and NEXT!
So you’re saying your robot butler would give you something almost, but not quite, entirely unlike pasta sauce?
I’m currently reading Creativity Inc. by Ed Catmull, who is one of the founders of Pixar. He started off as a computer scientist who became interested in computer animation and eventually movie-making. There are three things he talks about often in the book that are, for me, important when looking at A.I.
The first is that he learned early on that any animation craft existed in the service of storytelling. Without the story, the animation was meaningless, no matter how impressive. He related a time when he and a group were working on a test reel to showcase their capabilities, but realized a few days before the presentation that the animation wouldn’t be complete. There just wasn’t enough computer time to make it happen. So they ended up showing the full reel anyway, despite the last half being only wireframe. Many in the audience didn’t even notice because they had been so engrossed in the story. As somebody deep into the craft of computer animation, he was shocked.
The second is that a lot of the Pixar and Disney animators he first met disliked all the new tech. They said that if it had worked for Walt, it would work for them. They had the idea that changing the tools would make the product worse. One old Disney animator, Bob McCrea, called out the younger folks on this B.S. line of thinking by saying, “If we had had those tools, we would have used them!” Walt Disney had always worked to push the technical boundaries so they could improve the quality of the craft and spend more time on storytelling.
The third is that anyone in a creative field (a very broad understanding of this) needs to embrace failure. Because creativity is the act of finding new things, and it is impossible to understand how something new will interact with the world, failure is going to happen. The more adventurous the creativity, the higher the risk of failure. Being creative means lots of failure and lots of getting up and doing it again until you get the desired outcome.
All design (all art for that matter) is storytelling. It is just that some use words and some use color and shape. A.I. is a tool that attempts to replicate human creativity. The issue isn’t the use of the tool to show something creative, it is when the tool is used to fake something creative. I have seen A.I. image creators used to great effect but the person using it was very focused on learning that tool and wasn’t satisfied until it had created what they had in their head.
The danger of A.I. (beyond the Skynet thing) is that people use it to feign creativity and therefore never learn how to actually be creative. But as technology advances, there have always been people who get by with flashy visualizations to cover for questionable creativity. When I was in school for design, the people with the best rendering skills always got hired first. Thirty years later, the most creative designers have progressed. Rendering ability didn’t matter in the long run.
Yay! why learn a skill when I could rely on inconsistent hallucinatory pixel puke and take another step toward replacing creative workers with lines of code?
Thank god tech billionaires have a new way to get richer and the rest of us have fewer and fewer paths toward meaningful careers or skills. The worst thing in the world would be to keep paying and collaborating with other human beings!
“I’ll be 52 in a couple of months so clearly remember life before a computer in every home and the invention of the internet.”
Hey… Me too!
“Both sides of the front suspension are different, the lower right hand side wishbone appears to contain a headlight, the wheels are shit, and it’s grown a television antenna from the rear fender. “
LOL
And the wheels are shit not just in terms of looks, but also because they are too big to allow for any tire sidewall or suspension travel.
As you implied and has been echoed in the comments, if you want something done right, better do it yourself. If you want to spend a bunch of time training/explaining/correcting/tweaking, get an intern/volunteer/AI.
[repetitive tasks excepted]
An AI styling tool seems interesting until you realize how many BMWs, Subarus and Toyotas are in the training data set.
I’m an old Punk who researches computer security issues, mostly how we can make existing software more secure. I’ve done a lot of experiments with deep learning AI models, and they are still far from functional.
For example, I asked a question about a security model I’m familiar with. It replied with a fairly good answer, and I asked for the research papers on which it based its conclusions. It replied with three papers; I knew two (I personally know the authors and have read the papers). The third was complete BS: the author, title, and abstract were completely phony, but it was the paper that most of its “work” was based on.
Another example looked to see if the model could find specific information in several source code files that made up a program, with the line numbers in the specific files. One file that the model reported on had wildly wrong line numbers, but when queried, the model stuck to its answer and implied I was incorrect!
This is not ready for useful work.
This is an interesting article. My profession involves analysis and editing of reports, so it’s easy to see where AI is creeping in. The results seem to be only as good as the data the AI engine can draw from. It seems like you do a search on the web and half, if not more, of the results have a decent percentage of AI-generated content. I’m not talking about the summaries you get from the search engines, but the items in the results. That may be good for some of the background information needed on a subject, but considering the lack of nuance in some of the articles and papers I see, it’s being used for much more than that with really bad results.
I do wonder if we are getting to a point where people are just saying, “Fuck it, good enough” and just walking away. I hope not. I really don’t want to see a loss of creativity.
First, thank you Adrian for the great breakdown of how you design a car from a sketch to Photoshop and final image. I have tinkered with Photoshop with my own auto photography and it’s a similar layer-filled process, but not to the extent that you do as a professional. I agree that any real art is not something these generative AIs are able to do and will not be able to do for many years, unless the user knows how they work very well and puts in lots of time manually adjusting the images and prompts – but this largely defeats the purpose, as it probably takes the same amount of time as doing it the “old school” way, let alone the copyright concerns etc.
But for people to look cool real quick on social media? This is a boon, they can slap something into this and it will spit out something ~reasonable~ in a few seconds.
“Everything is AI this, AI that – are we now so lazy as a species that thinking and understanding are too much effort?”
Have you met the US President?
Fascinating discussion here and you’ve concisely captured the distinction between human creativity and intuition versus machine methodology. Every time I’m impressed by an AI result, I have to remind myself that it’s the product of rapid brute force manipulation, not inspiration. Also, that humans invented AI and, as far as I know, AI has not managed to create a human.
I do think the recently previewed Slate truck could’ve been rendered in Vizcom, though. Sort of kidding, even the simplest of designs still benefits from messy human brains.
we have subconscious influence, AI doesn’t know what you mean by “make it cool”
you have to be very specific with your prompts, the same way you have to be specific when you put pencil on paper, but we have learned to send prompts from our brains to our hands at an early age.
I’m a video game developer and AI is creeping into everything in this industry. While it can be useful for some things you still need a professional to go in and touch up or paint over something to make it workable.
The AI 3D model generators can indeed make fancy looking 3D assets. But they are almost impossible to use in a real project. The meshes are inevitably orders of magnitude more complex and heavy than they need to be, and the UVs (the unwrapped version of the model you paint the texture on) are fragmented garbage. Very difficult to alter without major rework.
By the time you’ve cleaned up the AI assets you could have just made it properly yourself.
I have been doodling a bit with Vizcom myself (free version, I got it from Polestar I think) and it really is annoying how little it understands of my prompts, meaning I waste time trying to even learn how to communicate with it. It just disregards too many things, either from the reference chosen or my sketches, and responds very randomly to settings, so I agree it’s almost completely useless for creating a finished render of any kind (at least if you care about your own design and don’t just want to please an algorithm).
But, I still think AI tools can be useful to create a neutral background or make quick sketches to try out smaller ideas to then again use as reference for more sketching.
When you make Vizcom do more layers, it is possible to erase parts of a new layer if you prefer a part of the previous layer, so I have made some images I’m somewhat happy with, and probably faster than I would have been able to in Photoshop, as I spent my younger adult years only using Paint Shop Pro 7 and have no idea what Adobe’s tools or shortcuts are called or how their menus are set up…
My personal opinion, is that AI is a roughing tool, like 100 grit sandpaper. You use it to get the general shape and idea, but the result should not be the final product. I think AI use is acceptable if it is used to kickstart the creative process, and give the artist a reference, but outside of small personal use projects, it should not be used as a final product, or sold like it is.
The 0% not a hotrod is pretty sweet looking though… May try drawing it.
It’s currently just a tool that gives you the results you prompt it to make, which may use too much energy and unethical means to accomplish its goal.
it’s not inherently worse than swinging a hammer.
Last time I swung a hammer it didn’t use an absurd amount of energy in order to completely miss the nail.
I will never understand why these people seek out a creative career path only to then half-ass their work by outsourcing to AI.
I got into car design so I could create something. Why would anyone doing this want to willingly give a huge part of that away?
because most people have like 10-15 really good ideas over the course of their lifetime, and that isn’t enough to fill a several-decade-long career in design.
I don’t think “willingness” is part of the equation at all. It’s really a matter of “adapt or die”. It’s easy to mock AI rendering tools today, but just look at how far AI video has come in barely a year. Like it or not, AI vehicle design firms are coming, and the sooner everyone educates themselves on AI, the more likely they stay relevant as AI takes over jobs. I’m not saying that’s necessarily a good thing, but that very much appears to be the reality.
Artificial intelligence is on a trajectory to fundamentally alter how humans interact with each other in basically every aspect. That sounds superlative, but part of my job is looking at AI trends (though I don’t work for an AI company), and when you see AI advancements in medicine, data analysis, software development, food production, engineering simulation, etc, it’s staggering, and the scale of it is a bit scary honestly.
For example, my company has a dedicated datacenter coupled to a quantum computer and fully automated chemical laboratory where an AI works with material scientists to design new materials. The scientists will enter the material parameters and objectives, the AI will design a large set of chemical experiments to synthesize that material, then feed those experiments into a quantum computer. The quantum computer runs an unbelievably vast amount of molecular synthesis simulations, and returns the results back to the AI. With a recipe “in hand”, the AI then directs the robotic laboratory to make the material for real. This isn’t far-off science fiction – we do this stuff today.
I have found AI images useful in two forms. 1) Good for a laugh. I used AI to make my friend a poster for his birthday, something I would never have paid an artist to do. It was a ridiculous image based on something like “muscle car, zombie apocalypse, heavy metal”. It wasn’t good, but it was funny to us both. 2) Kick starting an idea. Sometimes I have gotten it to actually do the things I’ve asked and show me bits of this idea and bits of that combined that were valuable starting points.
But no, overall, AI is not an end-stage product, even though many major companies are using AI images as such. To get good quality, an artist would still have to redo it, using the image as inspiration.
Adrian, I’m curious what you think of this design? https://www.youtube.com/watch?v=ZDhw39NwezU&list=PLycH4h-dMkRCSVH4lAw1FxiGKISMieA80
These guys used AI to design a 60s car that never existed, and are now building it from scratch. That is the part I really like, that they are metal shapers building this thing, but I would be curious about your basic thoughts on the design they chose via AI.
Beyond the moral problems of AI, this touches on the other problem I have with it – it’s always easier to just do it myself instead of trying to futz around with prompts and crap to get it to do what I want.
AI just doesn’t produce anything good, and never produces anything better than a person – even a not-particularly-talented person – can make. Even the most basic functions are useless. You can get AI to write you an email, fine, and it’s going to have all sorts of fun adjectives and be significantly less clear than a grammatically iffy sentence, because it doesn’t actually know what you want to say.
And if I get the whiff that someone is sending me an AI-written email, I’m taking them less seriously and questioning their professionalism, so it’s going to be totally counterproductive.
And if I’m a professional in a field, I know what I’m doing, I know what I want, and the AI is just a barrier to getting there. And it doesn’t know how to iterate, which is one of the most important parts of creative work.
It’s easier for you because you learned without it; today’s kids are going to be more familiar with AI generation than the old methods.
My grandpa had a hard time switching from his typewriter to a computer and mouse too
Oh great. If “today’s kids” want to enter a field that gets taken over by AI they’re fucked. To actually develop a skill or craft they’ll be held back by competing with these “learning models” that don’t learn, but just regurgitate content without the ability to analyze or understand it.
Yeah, there’s some potential for AI writing/image generation to make some workflows quicker or help (an actual team of) people cycle through ideas, but the goal for these products seems to be to replace human ideation with bad copies of older human ideation. It sucks.
There’s a difference between digitizing the typewriter and pencil (to use your computer and mouse example) and trying to get code to do our thinking and creating for us.
I think you’re overestimating how much critical thinking the average adult has to do at their job
There wouldn’t be comments made on this website during the work day if people were actually hard at work.
No, I’m not, because I’m an actual creative professional (who can read a blog on their lunch break by the way).
While you’re busy licking tech oligarchs’ boots, I actually have years of experience and skill that shit like this is trying to strip away from industry, and eventually society. Automation should replace difficult or arduous work so human minds can be put to good use.
The AI hype sucks even more than the product.
Hahaha no.
It’s what AI boosters always say, but it’s flat wrong.
Let’s use your example, which has a fundamental misunderstanding of skills. Moving from a pencil to a typewriter, or typewriter to computer, doesn’t mean you’re changing the skill behind it. Your fingers might move in a different way, but you’re doing the same thing. You’re putting words in an order to be understood by another person.
With AI, you’re not doing that. What you’re doing instead is getting something spat out in front of you, most of it is wrong, and you have to comb through it and fix it. At best it’s a bullshit generator.
So kids are using it in class, sure. And they’re failing those classes. Not because the teacher is anti-AI – though most are – but because they can’t actually do the work. Using AI is obvious, but it’s also so wrong that something like an essay is going to get a failing grade because it’s flat out bad.
Kids are using it for art but once they get into a creative field, they can’t succeed. The skills you need to iterate on work – the most important for professional creative work, as discussed in this article – aren’t developed, and that means they can’t do their jobs, and get fired. If you have some concept art and can’t quickly do fifty variations or respond to very granular requests, you can’t do visual creative work professionally. You need the skills from the beginning, and if you don’t have them the job is impossible.
The lie that AI boosters are telling you is that it’ll save time. The problem is that it’s so wrong, so often, and so unable to effectively communicate that anyone who actually does rely on it will be forced out of their field or unable to make inroads. They’re not going to be able to learn in classes or make critical judgements. And they’re going to have shit lives because they were hoodwinked by bad tech.
At the end of the day, a tool that doesn’t help you do what you want to do effectively is a bad tool.
The iteration comment is something you can argue, but the problem is that each AI prompt starts from scratch. So a design director wants you to try a few different headlights, you think you can just run that through AI. But, the problem is, it’s starting over, so the one iteration your director wants – let’s try something rounder, but otherwise the same – can’t be done because it keeps going through the algorithm over and over again.
It’s a problem that has been encountered a lot when artists have been forced to use AI, it’s bad at iterating.
Totally different industry, but I’m being subjected to some AI training at the moment. One of the slides, completely devoid of context, was a quote from Satya Nadella, which reads “When the paradigm shifts, do you have something to contribute? Because there is no God-given right to exist if you don’t have anything relevant.” Ain’t that some Nazi, useless eaters, RFK Jr. kind of shit? He was talking about a corporate entity not having a right to exist, but I felt really threatened by that slide.
Wild how easy it is for tech bros to jump into bed with fascists. They often have a lot of the same core beliefs, just in different contexts.
It’s so true. A big reason the internet has contributed to the popularity of the current neo-fascism is that it allows a large segment of the population to disregard humanity easily by creating a false community. An echo chamber focused on a theoretical perfection that can never exist outside of a computer model and message board. They use that to justify any short-term evil because it is needed to fulfill their vision. Never mind that their vision is antithetical to human nature or the well-being of people and planet. Meanwhile, sane, decent people are often willing to ignore it because it is just an online post and therefore not a threat. That simplistic thinking got us to where we are now.
Algorithms like Facebook’s that survive off of outrage baiting have amplified the worst ideals for clicks, and AI will intensify it if for no other reason than it will speed up the progress of misinformation.