TWENTY-EIGHT

Image © Jose & Midjourney

It was 1986, I was doing my design BA at IADE in Lisboa, and I went to visit a startup called Novodesign, a multi-service design company with a strong focus on branding and identity, housed in a warehouse in Campo de Ourique that smelled of roasting coffee. I fell in love with it and decided this was where I wanted to work once I graduated. In 1989, I started there as an industrial designer in a very small team, doing product and interior design around large branding programs. Back then, the CAD software of choice was AutoCAD on DOS, and I learned AutoLISP to improve my productivity. I loved the Macintosh II, and there were a few CAD packages running on it, like MiniCAD and ClarisCAD. I had learned drafting and technical drawing with Rotring pens, so CAD was easy to learn, and a relief. From 1991 to 1993 I was in London doing my MA at Central Saint Martins, with more CAD, and I bought my first portable Macintosh, a PowerBook 100, with a glorious black & white 9” screen and a 1.44 MB floppy disc drive. I would take it to the library to read and type content for my thesis. This was all fun and easy, as I remember it.

I returned to Novodesign in 1993, took on the role of design manager, and started a new phase of growth that lasted until 1997, when I founded my first design company. But somewhere in 1995, Portugal Telecom introduced the first ISP, and we all had to deal with this new thing called the Internet. Designers preferred Netscape Navigator; it felt very different, a completely new paradigm, and I remember feeling so lost that I approached a teenager who was already using it and offered to pay him to teach me. He laughed in my face, stating it was easy: all I had to do was try to use it. I did, and a new journey began. Though I learned much more sophisticated CAD along the way (who remembers Alias|Wavefront running on a Silicon Graphics Indigo?) and hundreds of software packages to do almost everything, it all seemed simple and natural. Until 2022, when OpenAI introduced ChatGPT and a flood of Gen AI packages hit the market, including Midjourney. Until now I had not felt a disruption like the introduction of the Internet, and I have been trying to make sense of it.

It’s not the software learning curve: Midjourney via Discord was a walk in the park compared to learning CAD, or even to using 30% of a Microsoft package like Excel. It’s also not the magic of it all; I still think what you can do with SolidWorks when creating a complex product like an engine is closer to magic than anything else. And it’s not the imagery per se: I have come across artists using Photoshop and Procreate to create astounding imagery, and KeyShot can produce realistic images that surpass anything Midjourney can do. A lot more work, of course, and that is probably one of the big advantages of this new technology: the immediacy of it all.

I guess the chat solutions create a sensation of disruption because of the human-like results of an interaction which, looking back, was stuck in a simplistic, almost binary exchange (we hated Siri, and Apple knows it and is changing it by adding… ChatGPT). And the chat-to-image solutions create amazement because we interact with them as with a McDonald’s drive-through where you can order anything from an infinite menu: you might have to look carefully before you bite into it, but most of the time it looks like food. Combined, these two types of tools create a whole new interaction with software that will drive new hardware and, above all, will dictate how all software is expected to operate in the future.

But it’s not magic. I have been creating imagery to illustrate my weekly posts since the beginning of 2024, and it is a laborious process. I write the post, with no help from any AI tool, then I drop the text into ChatGPT and ask for Midjourney prompts, then I start generating imagery. A few times I was able to reach what I wanted in fewer than 30 images, but most times I have generated more than 150. The prompts ChatGPT generates, even after instructions on conciseness, focus, level of inspiration, abstractness, and other literary explorations, often do not get me what I want in Midjourney, and I end up resorting to direct prompting. Then Midjourney requires rounds and rounds of image generation, with prompts that can be 200 words or 20. I may have to start again from the basic request and build it up because the tool becomes confused; there are tricks to get to what you want (word order, wording, adjectives,…), there are styles (your own and others’), and there is the type of shot (learn about photography for better results). And sometimes it is frustrating, much more than trying to get the right fillet or blended surfaces in CAD, because you are dealing with a black box. I am lucky to have people on my team who have become experts at using some of these tools (thank you, Ray Zavesky), but above all, I am happy that I am still curious and willing to do the work. This revolution is making me recognize that all my design training has been a good investment: the more options and variety we have, the more important it is to know what you need and why. I am optimistic about the disruption, and I think designers will not only survive but thrive as true magicians. In the end, a deck of cards can be used in so many ways, but only a few people can do magic with it.