Paint it Black
Recently, I’ve been putting together some material for a friend’s band tour to Vietnam and Cambodia. This has been a great opportunity to try another AI Workflow experiment.
Paint it Black is the name of a Rolling Stones song that has the opening lyrics:
I see a red door
And I want it painted black
No colors anymore
I want them to turn black
You can listen to it here. You’ll know it when you hear it.
“A product of the songwriting partnership of Mick Jagger and Keith Richards, it is a raga rock song with Indian, Middle Eastern, and Eastern European influences and lyrics about grief and loss. London Records released the song as a single on 7 May 1966 in the United States, and Decca Records released it on 13 May in the United Kingdom.” – source: Wikipedia
Plan B
When I was 17 I played in a local rock band called ‘Plan B’. Peter, the bass player, had split from another band and that’s the name he chose. A trombone-playing friend, Patrick, and I joined the group as their brass section. I played the trumpet.
We played a lot of local gigs and supported many of the national and international touring acts. It was a pretty amazing experience. The band played a mix of originals and covers; that’s how they started.
A few years ago some of the members of Plan B ‘got the band back together’, though they did something we never did back in the day: they started touring overseas.
In fact, the band held their first full rehearsal over three days in Ho Chi Minh City.
Patrick says:
It started with three days of rehearsal in HCMC, during which time the band not only honed the songs for the tour, but also in a number of cases, first met each other face to face.
They then played a series of gigs in Vietnam and Cambodia. Since then they’ve been back to Vietnam and Cambodia, toured the Mekong (Cambodia and Laos), and played Indonesia, London, Glasgow, Birmingham, China, Mongolia and, of course, Australia. Now they’re heading back to Vietnam and Cambodia in November 2024.
The Poster
Paint it Black is one of the feature ‘covers’ that the band plays on their tours. Patrick rang and asked if I would like to put together a poster, along with other items like social media graphics. I did all that, and I’ve also created a QR code for an extra bit of interactivity.
Last year I used this method to create a T-shirt design, and you can see that here: What’s the Reality of AI Design. I’m very keen on AI experiments and have been learning practical implementations. That, coupled with the code work, has been great.
I’ve been a graphic designer for 25 years. I’m using generative AI to express my ideas and get the ball rolling. If I thought there was a better traditional technique for a client’s budget, I would use that.
For this poster I’m using stubs of generated material, taking them into Photoshop, choosing the typography and layout, and designing the logo. I’m using Adobe Firefly in Photoshop to remove blemishes and stitch things together. It feels like stepping in and out of different kinds of workflow at the moment.
The Production
- The Image: The imagery from the song was appealing, so I incorporated it into the prompts, and that developed the black hat, stonework and colouring. I also prompted the camera, style, lighting, location and rendering engine. With AI you have to “embrace the suck”: I got many different versions, and a lot of rubbish. I made one version with a different image. The colour and tone had been all wrong, but I changed to the style of this second image and it started falling into place. I used the image ‘seed’, which tells the following images to use the same starting point, for consistency (there’s a rough code sketch of this idea just after the list).
- A Usable Image: I used Adobe’s neural filters to enlarge the image to four times the resolution. I painted into part of the image and fixed some things I didn’t like, and used unsharp masking to make the image clearer.
- The Band Logo: This was a PDF that I took into Photoshop as a ‘Smart Object’ so I could keep resizing it without ruining it. I kept it in a group with the two lines of type.
- The Typography: I used a typeface called ‘BlueCake’ and, stupid me, it doesn’t have any numerals in it. Who does that? That has turned out OK, as I like the look of the “NOVEMBER 2024” next to the QR code.
- The ‘Black’: I generated many different woodcut-style letterforms and cut together the letters I liked into the general outline of ‘Black’. I took these into Photoshop, created layers, and distressed them until I got what I was looking for. I made several black and white layers and kept moving them about until I got this look.
- The Layout: I used my ‘eye’ to decide how I wanted it to look. We added some details and removed others. I got client feedback.
- The Vignette: I made a transparent-to-black vignette (a gradient) to overlay the extended image for the social headers. That way I keep the style of the image while allowing the logo to be larger on the socials. These may change a bit as Patrick uses them.
- The QR Code: I created a QR code online that takes people who scan it to the tour page. It’s a large PNG, so I was able to place it into the poster at full size. I have previously printed these on t-shirts as another test. (A scripted alternative is sketched a little further below.)
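For anyone curious about what the ‘seed’ actually does, here is a minimal sketch of the idea using the open-source diffusers library. This is an illustration only, not the tool behind my poster images, and the model name and prompt are placeholders: the point is that re-using the same seed gives subsequent renders the same starting noise, so variations stay on-style.

```python
# Sketch only: reusing a fixed seed so related generations share a starting point.
# Assumes the open-source diffusers library and a Stable Diffusion checkpoint;
# this illustrates the idea, it is not the exact tool used for the poster.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder choice of model
    torch_dtype=torch.float16,
).to("cuda")

SEED = 1234  # the 'seed': same number -> same starting noise -> consistent look

prompt = ("woman in a black hat beside a red door, weathered stonework, "
          "moody cinematic lighting, 35mm lens")  # illustrative prompt only

# Re-seeding the generator before each render keeps the variations consistent.
generator = torch.Generator(device="cuda").manual_seed(SEED)
image = pipe(prompt, generator=generator).images[0]
image.save("paint-it-black-concept.png")
```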
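The QR code itself came from an online generator, but the same result can be scripted. Here’s a minimal sketch using the Python qrcode library, with high error correction and a large module size so it stays crisp at poster scale; the tour URL is a placeholder, not the band’s real page.

```python
# Sketch only: generating a large, print-friendly QR code with the Python
# 'qrcode' library. The URL below is a placeholder.
import qrcode
from qrcode.constants import ERROR_CORRECT_H

qr = qrcode.QRCode(
    error_correction=ERROR_CORRECT_H,  # survives print wear and distressing better
    box_size=40,                       # big modules -> a large PNG, good for A3 print
    border=4,                          # quiet zone around the code
)
qr.add_data("https://example.com/tour-2024")  # placeholder URL
qr.make(fit=True)

img = qr.make_image(fill_color="black", back_color="white")
img.save("tour-qr.png")
```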
The Output
- The Poster: The poster is A3 at 300 d.p.i. (roughly 3508 × 4961 pixels) for high-resolution or digital printing. I’ve provided a flattened PSD (Photoshop) file to Patrick, and he will overlay the details for the individual gigs.
- Socials: Instagram tiles and Facebook headers as JPG images at 72 d.p.i., at the sizes they are required.
- Teaser Image: The teaser image is used in posts to intrigue and start a story. We may make other images using this AI ‘seed’.
- Page Background: For the website page I filled the image with 90% black, leaving just a shadow of the main image, and saved it in WEBP format to keep the file size very light. It will probably be set to ‘cover’ as a background image so it works across responsive devices (a rough sketch of this step follows).
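As a rough sketch of that page-background step (I did mine in Photoshop, and the filenames here are placeholders), the 90% black fill and WEBP export could be scripted with Pillow like this. The resulting file would then be set as a CSS background with background-size: cover.

```python
# Sketch only: darkening the artwork to ~90% black and exporting a lightweight
# WEBP for use as a page background. The real job was done in Photoshop;
# filenames here are placeholders.
from PIL import Image

art = Image.open("poster-art.png").convert("RGB")
black = Image.new("RGB", art.size, (0, 0, 0))

# Blend 90% black over the art, leaving a 10% 'shadow' of the original image.
background = Image.blend(art, black, alpha=0.9)

# WEBP keeps the file size very light for a full-bleed responsive background.
background.save("page-background.webp", "WEBP", quality=70)
```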
I think this has been another useful experience using an AI workflow to deliver an end product.
If you’d like to talk to me about this kind of workflow; ring me on 0405 129 049 or email at virgil@virgilreality.com.au
Great rundown, Virgil. Did AI generate the model (girl) in the picture as well, or are you able to insert a real picture and create a hybrid image?
Brett, thank you. Yes, the model in the art is generated from a series of prompts, refined until getting to this final result. If I had live photography to incorporate, I could have done that. In that case I would have had to either light the shot to match the rest of the art, or generate the lighting in the art to match the shot. Otherwise it gets a bit “Uncanny Valley” and just looks odd.