You may have heard the buzz about how DALL·E 2 can use your written words to create photoreal art in any style (and about the waiting list), but did you know that you can do the same thing with Disco Diffusion?
Tell your friends I’m on the gram @grant_diffendaffer!
But while you’re here, check out this forest find:
I’m going to be posting lots of photography, AI-generated visual art, photogrammetry, and other sorts of eye candy. This is just a quick Neural Style Transfer filter job done in Photoshop. I photographed the actual safe with mushrooms spilling out last fall at Salt Point State Park in Sonoma, a place which is like a safe full of gold for mushroom hunters.
A time-lapse of the May 15, 2022 lunar eclipse as seen in Canyon, CA. 320 individual frames shot on Sony A7ii. Camera motion with Vixen Polarie. RAW photos processed in Lightroom with pretty heavy noise reduction applied. Frame animation in Photoshop and post in Adobe Premiere Rush.
I made it out to the Sierra foothills over the weekend for a visit with dear friends and some quality stargazing. Dark skies or not, my camera can see far more than my eyes because it can collect light over a long exposure to resolve faint details.
Special gear? A bit, but less than you might think: just a full-frame mirrorless camera with a nice lens and a compact star-tracking mount that offsets the earth’s rotation by turning the camera gradually, reducing star trails and letting light from faint objects accumulate on the same pixels of the sensor.
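The payoff of tracking is easy to quantify. Untracked, the common “500 rule” heuristic caps exposure time at roughly 500 divided by the focal length (on full frame) before stars visibly trail; a tracker lifts that cap enormously. A minimal sketch of the heuristic (the focal lengths below are just illustrative picks from a 24-105mm zoom):

```python
def max_untracked_exposure(focal_length_mm, crop_factor=1.0):
    """Rule-of-500 estimate of the longest untracked exposure
    (in seconds) before stars visibly trail on a static tripod."""
    return 500.0 / (focal_length_mm * crop_factor)

# Wide end vs. long end of a 24-105mm zoom on a full-frame body:
print(round(max_untracked_exposure(24), 1))   # 20.8 seconds
print(round(max_untracked_exposure(105), 1))  # 4.8 seconds
```

Compare that 20-second ceiling at 24mm with the multi-minute tracked exposures below, and the appeal of a small tracker is obvious.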
I’ve only done a bit of this type of photography, focusing more on photogrammetry, landscape, and macro. My mounting and tracker alignment needs a bit of improvement, but I am already enraptured with the results.
Here are a couple faves from Sunday night:
This first shot shows the benefit of stacking raw photos, which boosts faint light signals relative to random noise.
The photo on the left features just some minimal exposure adjustments in Lightroom, the greatest being a large white balance drop. The photo on the right is the combined light of eight 150-second exposures stacked together in Sequator, along with three equal-length exposures taken with the lens cap on.
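Stacking works because the signal adds up consistently from frame to frame while random noise partially cancels: averaging N aligned frames improves the signal-to-noise ratio by roughly the square root of N. Here is a minimal pure-Python sketch of mean-stacking frames that are already aligned (real tools like Sequator also align the stars between frames first, which this toy version skips):

```python
def mean_stack(frames):
    """Average a list of aligned grayscale frames (2D lists of
    pixel values). Real detail reinforces across frames, while
    random noise averages toward its mean."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Three tiny 2x2 "exposures" of the same scene with varying noise:
frames = [[[10, 12], [8, 9]],
          [[12, 10], [9, 8]],
          [[11, 11], [10, 10]]]
print(mean_stack(frames))  # [[11.0, 11.0], [9.0, 9.0]]
```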
Cap on?!? Yes: in addition to aggregating light signals, Sequator can remove moving objects like the satellite you see above, as well as the color noise caused by faulty pixels in your camera. To do this, it separates noise from signal by examining what should be a completely black image (cap on) and flagging the “hot” pixels for later removal.
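The dark-frame trick is simple to sketch: any pixel that reads bright with the cap on is “hot,” and its value in the light frames can be replaced with an average of its neighbors. A minimal pure-Python version follows; the threshold and the four-neighbor repair are my illustrative assumptions, not Sequator’s actual algorithm:

```python
def find_hot_pixels(dark_frame, threshold=10):
    """Return (row, col) positions that read above the threshold in a
    cap-on dark frame; a truly black frame should read near zero."""
    return {(r, c)
            for r, row in enumerate(dark_frame)
            for c, val in enumerate(row)
            if val > threshold}

def repair(light_frame, hot_pixels):
    """Replace each hot pixel with the mean of its in-bounds
    up/down/left/right neighbors (skipping other hot pixels)."""
    rows, cols = len(light_frame), len(light_frame[0])
    out = [row[:] for row in light_frame]
    for r, c in hot_pixels:
        nbrs = [light_frame[rr][cc]
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < rows and 0 <= cc < cols
                and (rr, cc) not in hot_pixels]
        if nbrs:
            out[r][c] = sum(nbrs) / len(nbrs)
    return out

dark = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]     # one stuck pixel
light = [[20, 22, 21], [19, 255, 20], [21, 20, 22]]
hot = find_hot_pixels(dark)
print(repair(light, hot))  # center becomes (22 + 20 + 19 + 20) / 4 = 20.25
```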
The next gallery shows a few starry landscape shots that are all basic long exposures taken on the star tracking mount. No white balance adjustment on these ones (more red in the sky).
For the zooming:
- Sony A7ii camera
- Sony FE 24-105mm F4 G OSS lens
- Vixen Polarie star tracker
- Lightroom CC
As an artist who treads a line between digital and physical, I have always been fascinated with imagery. These days I am leaning into this more as I explore various means of using digital tools to produce beautiful images. Here is a favorite early astrophoto that I shot last summer of the Dark Horse Nebula rearing over a tree, surrounded by the brighter galactic bulge of the Milky Way.
This was a 168 second single exposure at f4.0, iso 3200. Shot on Sony A7ii with FE 24-105mm F4 G OSS using a Vixen Polarie star tracker. Processed in Lightroom.
I need to find some dark skies soon!
As you may know, my great love, next to 3D printing, is photogrammetry, a.k.a. reality capture. I have been working on them side by side for over 10 years, with printing being the perfect path from digital to physical, and reality capture doing the opposite.
My reality capture collection currently spans about 10 years of shooting and numbers about 30,000 photos and many, many models. I’ve been putting my best models up on Sketchfab, and there are 40 up for your viewing now, including a dozen new ones and more to come.
You can view the model that I created for the Gods in Color exhibit at the Legion of Honor. Be sure to check out both the capture of the Parthenon panel plaster, as well as the colored version I created in Zbrush to project onto the plaster at the museum show.
I’ll be posting more about these models in the coming days, so stay tuned! Better yet, follow me on Sketchfab!
I’ve been working with neural net image processing for a few years now, trying to figure out how to make more powerful creative use of this class of tools, which use AI to recognize and develop patterns. Here is a favorite photo that I took of a factory from the Willamette River at Oregon City, which I have styled with the neural filters built into Photoshop.
There are a number of online tools that take advantage of generative AI image processing algorithms, as well as a number of mobile apps. Wherever I start processing, I often end up taking it back through Photoshop for additional compositing and adjustments. I was excited then when Adobe decided to include some of these types of filters within Photoshop.
Many people are familiar with the strange and hallucinogenic images that come out of the Deep Dream algorithm, which is trained to see things like animals and architecture in images, whether or not they were originally there, and then to paint its own interpretation emphasizing the objects it identifies.
While I’m a fan of Deep Dream, I’ve always been more interested in style transfer algorithms that allow me more deliberate control over the final output.
Besides Photoshop, I am a particular fan of Deep Dream Generator, which allows a user to explore a variety of ways to style their own images, including style transfer and Deep Dream.
The real nitty-gritty, though, happens in Python. The code typically lives in Jupyter notebooks, which can be run either in a local Python environment or in the cloud on a service like Google Colab, which makes it easy for someone like myself to start digging into the nuts and bolts of these processes.
Many of these algorithms are built on TensorFlow, an open-source machine learning platform developed by Google. TensorFlow features a number of pre-trained models which users can employ to process their own images. A great way to explore and run the code is through the official tutorials, where you can check out the basic Style Transfer and Deep Dream methods of processing as well as others, such as Pix2Pix.
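At the core of the classic style-transfer method is the Gram matrix: feature maps from a pre-trained network are flattened to channels by positions, and the matrix of inner products between channels captures the “style” of an image independent of where things sit in it. Here is a minimal pure-Python sketch of that one building block; the tiny feature map is invented purely for illustration:

```python
def gram_matrix(features):
    """Given a feature map as a list of C channels, each flattened to
    a list of P spatial positions, return the C x C matrix of channel
    inner products. Matching the Gram matrices of a style image and
    the generated image is what the style loss optimizes."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

# Two channels over three spatial positions:
features = [[1.0, 2.0, 3.0],
            [0.0, 1.0, 1.0]]
print(gram_matrix(features))  # [[14.0, 5.0], [5.0, 2.0]]
```

In the real tutorials the features come out of a network like VGG, and the result is usually normalized by the number of positions, but the inner-product structure is the same.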
If all you need is a fun and easy way to play with a neural network, check out Wombo, which will let you make a picture from as little as a word.
At the time, my interest in photogrammetry was really taking off, and I determined that the boot would make an excellent subject.
I’ve had a few opportunities to take photos of it over the years, but it took until just recently for me to have the combination of ample quality photos, suitable software, and a way to present my model with a bit of storytelling that serves the larger project.
That has all resolved this year as something of a holiday video card and a love note to a boot, and the amazing community of artists that made it happen.
My previous post included a simple turntable video of the model that I created of Storied Haven by using a few hundred photos shot on my Sony A7ii and RealityCapture photogrammetry software. After that I took the project into Twinmotion, which is real-time architectural rendering software. Twinmotion is truly a dream for visualizing 3D models in beautiful fashion with ease.
I finished it all off with Adobe Premiere Rush. The hours involved were basically countless, as the model went through various renditions and got shelved while I waited for better software and photography. Ultimately I had to run it through ZBrush for some manual modeling on the toe, which wasn’t coming through cleanly with the photos that I had.
I may go elsewhere with this project, as it is just too fun to put down. It wanted to be off my desk and out in the world though, so I hope you all enjoy it, and to you all…