I Asked a Computer to Create Linocuts For Me

How Dall-E 2 has the potential to revolutionize the artistic process.

Welcome back to all of my printmaking lovers!

This weekend is my birthday. Naturally, I will be spending this time in Yosemite. So, there will be no newsletter next week as I will be taking a short break. I will answer your questions and share a little about my trip to Yosemite on Monday September 5th!

Daniel

In this week's issue:

  • Dall-E Make Me Another Warhol

  • We talk to Dylan Goldberger

  • Obata Art Weekend Last Call

Video of the Week

Dall-E Make Me Another Warhol…

Since I was a kid, I have been both fascinated and terrified by the 2001 movie A.I. Artificial Intelligence. Twenty-one years later, A.I. has revolutionized how we interact with the world, whether you are scrolling TikTok or Instagram or shopping on Amazon. Every day, companies find new ways to use A.I. to make our lives easier. Some would argue it keeps us glued to a screen. Then someone along the way had the idea of asking A.I. to create art. This is where we come in.

I first read about A.I.-generated art when The Atlantic published a story about how an A.I. “got” a show at the renowned HG Contemporary Art Gallery in New York City. I figured that if the pieces had made it to a gallery, it was only a matter of time until they sold for crazy amounts of money. Then it actually happened: an A.I.-generated “painting” sold at Christie's for $432,500!

AI-generated "faceless portraits" by Ahmed Elgammal and AICAN. (Artrendex Inc. / The Atlantic)

I was shocked, to be honest. I could not wrap my head around how a computer could generate art. Around the same time, the Twitter account @images_ai went viral as users started sending in prompts for the A.I. to illustrate. At the time, it didn't seem like the tech could get much better than it already was.

Then came San Francisco-based OpenAI and Dall-E, which completely demolished the status quo. I first read about Dall-E in July and learned that one could join a waiting list to gain access to the technology. Last Monday, I was granted permission to create an account with OpenAI and test Dall-E for myself.

Hello world! My name is Dall-E

Dall-E is an artificial intelligence, a neural network that has been trained on millions of images and their text descriptions. When you use Dall-E, you write a prompt describing what you would like to see, and Dall-E spews out what it thinks you want.
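For the curious: right now you get to Dall-E through OpenAI's web interface, but here is a rough sketch of what the same request could look like programmatically with OpenAI's Python client. This assumes you have API access, and the exact method names may differ depending on your library version, so treat it as an illustration rather than a recipe.

```python
# Hypothetical sketch: requesting images with OpenAI's Python client (0.x series).
# Assumes you have an API key and access to image generation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.Image.create(
    prompt="A monochromatic linocut of Yosemite National Park printed on handmade paper",
    n=4,                # Dall-E returns several candidate renderings
    size="1024x1024",   # requested image resolution
)

# Each candidate comes back as a URL you can open or download.
for item in response["data"]:
    print(item["url"])
```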

Let me show you what I mean…

My first prompt to Dall-E was something of a control test. I wanted to ask for something so specific that the A.I. would have a hard time generating it.

Prompt: A monochromatic linocut of Yosemite National Park printed on handmade paper.

I was finally face to face with the thing I thought was only capable of existing inside movies. “Will Dall-E be a better artist than me?” I asked myself.

The results were a mixed bag. Dall-E knew the meaning of monochromatic, and that Yosemite National Park has mountains. It did not seem to know what a linocut was and instead generated a fantastic range of what look like watercolor or ink sketches. Maybe this was too hard. I decided to go a little easier the second time.

Prompt: A reduction color woodcut print of Half Dome in the style of Albrecht Dürer printed on handmade paper.

This time, Dall-E knew exactly what Half Dome was and gave me four excellent renderings of the famous granite formation. As for executing them in Dürer's style as a color reduction woodcut print, not so much, although there were hints of texture.

Maybe I was being too specific with Dall-E, so for the third prompt, I decided to make things a bit easier:

Prompt: A linocut print of a boat chasing a river in the style of Salvador Dali.

This time, Dall-E almost nailed it. The river and the boat were there, and they definitely looked like linocuts, although these prints were not as Salvador Dalí as I was expecting.

I was curious if Dall-E could really make a linocut. I decided to ask for something that would be fairly easy and for which there are definitely plenty of reference pictures.

Prompt: A linocut print of a Catrina during a Dia de los Muertos celebration in the style of Jose Guadalupe Posada.

The results were impressive. Dall-E had absolutely understood what a linocut was at this point, and it understood the word celebration in the cultural context of Day of the Dead. It wasn't exactly in the style of Posada, but it was close enough.

Naturally, it was time for another curveball.

Prompt: A lithograph of a dog making pancakes inside a modern kitchen.

The renderings were hilarious. The compositions were very good, though the dog looked more like he was stealing the pancakes than making them. The kitchen might have been modern in the late '60s, but the images did have the feel of lithographs, particularly the way the textures came through in the multicolored image. Dall-E seemed more at home being painterly and appeared to understand color better, so I began pushing for that.

Prompt: A color woodcut print of San Francisco in the style of Ukiyo-e

The four skylines of San Francisco that Dall-E generated left me flabbergasted. An artificially intelligent machine had come up with these compositions, yet they looked like something I would expect on a postcard or in the artwork sold at Fisherman's Wharf.

At this point, I did begin to wonder how long it would be until algorithms and A.I. displaced me as an artist. If that is where things are headed, I decided to have some fun with Dall-E.

Prompt: A baroque painting of a surfer wearing a wetsuit getting ready to joust in the style of Diego Velázquez.

Prompt: A Frida Kahlo oil painting of an opera singer.

Prompt: An Andy Warhol style screen print of a cat

Prompt: A hand drawn sketch of Michelangelo's David

I began to notice that Dall-E did not produce identical copies of an artist's style; rather, it pieced together different elements associated with them. OpenAI calls the underlying process diffusion: the model starts from a pattern of random noise and alters it, step by step, until it arrives at an image it recognizes as matching the prompt.
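To make that idea a little more concrete, here is a toy sketch of diffusion in Python. This is not OpenAI's actual code; the denoise_step function is just a stand-in for Dall-E's trained neural network, which does the real work of steering each step toward the prompt.

```python
# Toy illustration of the diffusion idea (not OpenAI's implementation).
# Start from pure noise and repeatedly "denoise" it, with the model
# nudging each step toward an image that matches the text prompt.
import numpy as np

def sample_image(denoise_step, prompt, steps=50, size=(64, 64, 3)):
    """denoise_step(image, t, prompt) -> a slightly less noisy image."""
    image = np.random.randn(*size)        # begin with pure random noise
    for t in reversed(range(steps)):      # walk the noise back out, step by step
        image = denoise_step(image, t, prompt)
    return image                          # ideally: something like the prompt
```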

The portraits produced in the style of Frida Kahlo were clearly not done by Frida, but they could have been done by a human. I then began to wonder how Dall-E would react to prompts that required more photorealistic renderings. This, ladies and gentlemen, is the area where Dall-E is terrifyingly good.

Prompt: A realistic rendering of an artist studio

Prompt: a photograph of a wetsuit walking through the forest

Prompt: realistic photos of a wedding

Dall-E as a Tool

This technology also has its critics. Last week, Charlie Warzel, who writes the Galaxy Brain newsletter, went viral on Twitter. Warzel published a piece with art generated by Midjourney, another image-generating A.I. The criticism revolved around a common fear in the creative community: that A.I. will displace artists. Warzel inadvertently turned those fears into a reality, despite having written about the problems surrounding A.I. art only a few weeks earlier! Oh, how I love the internet. Warzel has since addressed the situation, and we can assume he has learned from this unsavory episode.

OpenAI says that the purpose of Dall-E 2 is to empower people to express themselves creatively while helping us understand how advanced A.I. systems see and understand our world. Personally, I don't anticipate A.I. will completely displace artists and illustrators this decade. However, there is no doubt that the technology will only get better - one word at a time.

I believe that Dall-E could be a great tool for artists and creatives. Users on Twitter have used A.I.-generated images as a springboard for other projects, and I can see how this can be useful. Instead of sketching an idea, you could tell Dall-E, in a few words, what you want to make, then use the renderings it provides to proceed with or pivot your design. Think of all the time this would save, particularly in the early stages of the creative process. I have already used Dall-E to test out some concepts and compositions that I am reworking. Expect a newsletter about this experiment soon!

Artist Highlight: Dylan Goldberger

This week, we talked to Dylan Goldberger. You may have come across one of his playful compositions featuring dogs or his colorful skateboard designs. Dylan shares with us his approach to composition and much more!

Instagram: @dylanjg

Daniel: Your work is fun, and instantly approachable with a wide array of pets. What is your approach to developing a composition/design?

Dylan: For sketching out any big scene with multiple characters, the hardest part is always the initial sketch. I start by drawing a million scribbles in a sketchbook trying to nail down an appealing composition and character placement. I use TONS and TONS of reference photos in big scenes to get all the poses and details. Once I figure that out I try to draw out the background and characters all separate and then composite them in photoshop, keeping everything on separate layers so it's easy to move things around…

You can read the full interview here.

Collect Dylan’s work here: https://doggonestudios.com

The Obata Art Weekend

This weekend, August 26-28, 2022, I will be traveling to Yosemite National Park to participate in the second annual Obata Art Weekend!

My printmaking demonstration will be on Saturday, August 27th, with two separate sessions:

  • 10:00 am - 12:00 pm

  • 1:30 - 3:30 pm

Location is Shuttle Stop #6 on the Yosemite Valley Loop.

Art workshops and demonstrations will be limited to 20 people to give everyone the best experience. To sign up, please email [email protected] and put “Obata Art Weekend” in the subject line. Please list your top three choices for programs, and the NPS will get back to you as soon as possible to confirm your place on a program.

I hope to see you there! To view the full calendar of events, click here.

Hey, Hold Up!

Do you have a printmaking or artist-related question you want me to answer?

Let me know here! Was this email forwarded to you? Sign up here.

Thanks for reading. See ya next week.
