When architect Guillermo Taberner was testing materials for a single-family home in Lumion, he just couldn’t find the right stonework for the job. He decided to try DALL·E 2, a tool that creates images using artificial intelligence, and…

Texture obtained using artificial intelligence

…Voilà!  The above image is the result of providing AI with a few prompts.  When Taberner shared this anecdote on social media, we asked him for more details about this practical case of AI application and to share his vision of how this technology will shape what architecture will be like in the future.


How did you get the texture you were looking for?

We had a client who wanted a specific type of stonework that Ramón Esteve often uses, but I couldn’t find the right texture in any online library. So, I tried three AI image generation tools (DALL·E 2, Stable Diffusion, and Midjourney).

After trying various prompts and different iterations, I found a very similar type of stonework. Using Photoshop and a few other tools, I turned the JPG file into a PBR texture set, which allowed me to import it into a rendering engine that I often use, and I ended up with a pretty good result.
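The conversion Taberner describes, turning a single flat JPG into a PBR (physically based rendering) texture set, can also be scripted. The sketch below is not his Photoshop workflow, just an illustration of the same idea using Pillow and NumPy, with assumed filenames and a luminance-as-height heuristic that real texturing tools refine considerably:

```python
# Illustrative sketch: derive basic PBR maps (albedo, roughness, normal)
# from a single flat JPG. Filenames and the bump-strength value are
# assumptions, not taken from the interview.
import numpy as np
from PIL import Image

def jpg_to_pbr(path, out_prefix="stone"):
    img = Image.open(path).convert("RGB")
    rgb = np.asarray(img, dtype=np.float32) / 255.0

    # Albedo: the source image itself, re-saved losslessly.
    img.save(f"{out_prefix}_albedo.png")

    # Height proxy: luminance of the image (brighter = higher).
    height = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)

    # Roughness heuristic: invert luminance so dark crevices read rougher.
    rough = 1.0 - height
    Image.fromarray((rough * 255).astype(np.uint8)).save(
        f"{out_prefix}_roughness.png")

    # Normal map: finite-difference gradients of the height field.
    gy, gx = np.gradient(height)
    strength = 4.0  # assumed bump intensity; tune per texture
    nx, ny, nz = -gx * strength, -gy * strength, np.ones_like(height)
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    normal = np.stack([nx / norm, ny / norm, nz / norm], axis=-1)
    # Pack [-1, 1] components into the usual [0, 255] RGB encoding.
    Image.fromarray(((normal * 0.5 + 0.5) * 255).astype(np.uint8)).save(
        f"{out_prefix}_normal.png")
```

The resulting PNGs can then be assigned to the corresponding material slots in engines such as Lumion, Twinmotion, or D5.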

I conducted other tests for post-production and to include details in the renders, but the quality is often inferior to what I achieve with my usual workflow.  In addition to these image generation tools, I’ve also used ChatGPT to summarise or respond to work emails, but that practice is more common.


Which AI tools do you often use in your architectural work?  Do you just use image tools, or do you also use text and video tools?

They aren’t part of my usual workflow because they are still in their early stages.  I’ve always been interested in experimenting with different tools in my architectural visualisations and in the design process in general.  I really believe that the different applications of artificial intelligence offer a lot of possibilities.

For months, I’ve been following scientific educators and digital magazines that discuss this topic, which has allowed me to experiment with some of these AI tools, particularly those that generate images, like DALL·E 2, Midjourney, and Stable Diffusion. But most of the time it’s been about understanding how they work and playing around with them.

How do you think these types of tools could improve the sector?

I’m convinced that this is just the beginning.  Generally, I think that the world of architectural visualisation is going to experience something similar to what happened ten years ago when simpler and faster programs for producing renders started popping up. [At the time], it was common to render using programs with lots of parameters such as VRay which, to be honest, are still being used and, in terms of quality, are the best.  But, as I was saying, a decade ago other types of software began to emerge that were much easier to use and that allowed you to complete a render in a much shorter timeframe, such as Twinmotion, Lumion, and D5.  This time around, I think AI tools will result in an even bigger jump [in terms of efficiency and simplicity].

There are already some existing programs, like Veras AI, that allow you to use a prompt to do some post-production while modelling. That’s a huge development and one that gives us an idea of what’s to come. What will be really interesting will be software that includes these AI tools for specific applications, like PBR texture generation, specific vegetation, or post-production filters.

No matter what, we will have to learn to understand how these tools work and to perfect our prompts, and software will have to start incorporating them to keep up with the competition. Now more than ever, I think it’s time to adapt or die trying, which is exactly what will end up happening to most.

How did you learn to manipulate AI interfaces, understand them, and create prompts?

I learned online, using YouTube, forums, and recommendations from colleagues, as I’ve done with most tools and software. I don’t think it’s a novel approach; most professionals have probably been on the same learning curve as me over the last ten years.


Have you used an architecture prompt marketplace?  Do you think they are useful for those who are yet to learn how to ask AI questions?

I had a look at the Midjourney forums to see how to make my prompts more specific, but I’m not really familiar with marketplaces.  It’s the same as modelling: no matter what, it’s essential to know the commands.  Once you can do that, you can use an online library, but having a handle on the basics will give you higher quality results.

You’re a YouTube content creator and your work centres on the sharing of information. How important do you think it is to share this type of knowledge when it is yet to be regulated?

I wish I could spend more time educating others, but you need a lot of time to do it. My master’s project was essentially a short film, and it made me see that, if you want to create high-quality content, you need to spend hours and hours on it. That’s why I think that those who do it and who are able to juggle it with their work as architects deserve a lot of credit.

Content that you once learned only from architecture publications, which used to be just magazines, can now reach you in other ways thanks, in part, to [online] educators. I’m not saying that a YouTube channel is on the same level as an architecture magazine, but I think it can still be very enlightening if the content is of high quality.



If this post has piqued your interest, make sure you read our other articles about artificial intelligence and its impact on architecture and interior design.