Neural Style Transfer – Mosaic Style Source

While looking for the source of the Mosaic style for neural style transfer, we came across the following image, which was apparently used by several people as input, but we could not identify its source.

The photo depicts a lady holding a flower, rendered in stained glass.

(c) copyright 2006, Blender Foundation / Netherlands Media Art Institute / www.elephantsdream.org
This is an audio-less reproduction of “Elephants Dream” by the Blender Foundation, after each frame was processed by a neural network that attempts to transfer the style of a photo depicting a lady holding a flower in stained glass (https://bytefreaks.net/photography/neural-style-transfer-mosaic-style-source). We do not know the origin of this art yet…
Audio will be added at a later stage.

Neural Style Transfer – Feathers Style Source

While looking for the source of the Feathers style for neural style transfer, we came across the following image, which was apparently used by several people as input.

After some special Google-Fu, we were able to find a page that posts the above watercolor painting of feathers, leaves, and petals, and states that the painter is Kathryn Corlett.

This is an audio-less reproduction of “Elephants Dream” by the Blender Foundation, after each frame of the video was processed by a neural network that attempts to transfer the style of what seems to be the work named “Feathers Leaves and Petals” by Kathryn Corlett.
Audio will be added at a later stage.

Neural Style Transfer – Candy Style Source

While looking for the source of the Candy style for neural style transfer, we came across the following image, which was apparently used by several people as input.

We tried to identify the painter of the above piece, but we were not able to pinpoint the exact painting. What we did find was a painting called “June Tree” by Natasha Wescoat, which looks extremely similar, so we assume that the input must be a painting by Ms. Wescoat.

This is an audio-less reproduction of “Elephants Dream” by the Blender Foundation, after each frame of the video was processed by a neural network that attempts to transfer the style of what seems to be a variation of “June Tree” by Natasha Wescoat.
Audio will be added at a later stage.


neural-style: An open source alternative to Prisma (for advanced users)

Recently, we stumbled upon a very interesting project called neural-style: a Torch implementation of an artificial system, based on a Deep Neural Network, that attempts to create artistic images of high perceptual quality.

According to the authors, this tool is based on the paper “A Neural Algorithm of Artistic Style” by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge (which is available to read for free here).

What this tool does is ‘simple’: it takes two images as input, a style image and a content image, and tries to recreate the content image in such a way that it looks as if it were created using the same technique as the style image.
Following is an example of a photograph that was recreated using the style of The Starry Night.

This tool offers a ton of possibilities and options, which we have not fully explored yet.
Overall, we are very happy with the initial results we got. The final renderings look really nice, and the fact that you get to choose your own style images gives this tool a very nice advantage.

What we did not like, though, is that it takes a lot of time and memory to render a single image (especially if you do not use a GPU to speed up the process).
This resource issue is normal and expected; unfortunately, it limits the fun of the system. Each experiment blocks you for some time, and you cannot fiddle with the results in real time.

We installed this tool successfully both on Fedora GNU/Linux and on Ubuntu.
Following are the exact commands we used to install it on Ubuntu and convert our first image (the one above).

cd ~;
git clone https://github.com/torch/distro.git ~/torch --recursive;
cd ~/torch;
bash install-deps;
./install.sh;
source ~/.bashrc;
sudo apt-get install libprotobuf-dev protobuf-compiler;
CC=gcc-5 CXX=g++-5 luarocks install loadcaffe;
luarocks install cutorch;
cd ~;
git clone https://github.com/jcjohnson/neural-style.git;
cd neural-style/;
sh models/download_models.sh;
# After everything is complete, it is time to create our first 'artistic' image.
th neural_style.lua -num_iterations 1500 -image_size 1250 -gpu -1 -style_image "/home/bytefreaks/Pictures/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg" -content_image "/home/bytefreaks/Pictures/Aphrodite Hills Golf Course - Paphos, Cyprus.jpg"
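The command above renders on the CPU (`-gpu -1`), which is what makes it so slow. Sketched below are two variations we could run instead; the flag names come from the neural-style README, while the image file names are placeholders rather than our actual files:

```shell
# Render on the first GPU instead of the CPU
# (assumes CUDA plus the cutorch/cunn Torch packages are installed).
th neural_style.lua -gpu 0 -num_iterations 1500 -image_size 1250 \
    -style_image style.jpg -content_image content.jpg;

# Use a small -image_size to get a quick, low-resolution preview
# of a style/content combination before committing to a long run.
th neural_style.lua -gpu -1 -image_size 256 \
    -style_image style.jpg -content_image content.jpg;
```

The low-resolution preview is a cheap way to check whether a style image is worth a full-size render.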

Our input images were the following:

Content Image

Style Image

Below are the intermediate steps the tool created until it reached the final rendered image.


Hugin

Hugin is a cross-platform open source panorama photo stitching and HDR merging program developed by Pablo d’Angelo and others. It is a GUI front-end for Helmut Dersch’s Panorama Tools and Andrew Mihal’s Enblend and Enfuse. Stitching is accomplished by using several overlapping photos taken from the same location, and using control points to align and transform the photos so that they can be blended together to form a larger image. Hugin allows for the easy (optionally automatic) creation of control points between two images, optimization of the image transforms along with a preview window so the user can see whether the panorama is acceptable. Once the preview is correct, the panorama can be fully stitched, transformed and saved in a standard image format.

— From Wikipedia: https://en.wikipedia.org/wiki/Hugin_(software)

We have a set of images which, if stitched together, can produce a panorama. We wanted to create a panoramic ‘artwork’ using Prisma (because we like their work).
Unfortunately, a current limitation of Prisma is that it will only produce square images with a maximum size of 1080px, so we could not supply it with a panorama and expect it to create the panoramic ‘artwork’.

To achieve our goal, we converted our images to ‘art’ using Prisma and supplied the results to Hugin to create the panoramic ‘artwork’.
By using the simple wizard of Hugin, we were able to create the lovely images presented in this post.
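For those who prefer a terminal, the same wizard steps can be reproduced with Hugin's command-line tools. The following is only a sketch of the standard pipeline (it assumes the Hugin CLI tools and Enblend are installed; the file names are placeholders):

```shell
# Create a Hugin project from the overlapping input photos.
pto_gen -o project.pto image1.jpg image2.jpg image3.jpg;
# Automatically find control points between the overlapping images.
cpfind -o project.pto --multirow project.pto;
# Optimise the image transforms (positions, lens, photometrics).
autooptimiser -a -m -l -s -o project.pto project.pto;
# Choose an output canvas size and crop automatically.
pano_modify --canvas=AUTO --crop=AUTO -o project.pto project.pto;
# Remap each photo into the panorama projection, one TIFF per image.
nona -m TIFF_m -o remapped project.pto;
# Blend the remapped images into the final panorama.
enblend -o panorama.tif remapped*.tif;
```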

Note: Since the images we loaded into Hugin were processed by Prisma, they had lost all their EXIF information.
Because of the missing EXIF information, Hugin could not determine the focal length of the camera, so we had to supply it manually.
In our case, for the Galaxy S4, it was 35mm. After supplying this information, everything went smoothly, as you can see in the results.
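Instead of typing the focal length into Hugin every time, one could write it back into the Prisma output files beforehand, for example with exiftool. This is a sketch assuming exiftool is installed; the value is the 35mm-equivalent figure we supplied above:

```shell
# Write the 35mm-equivalent focal length back into the EXIF data of the
# Prisma output images, so that Hugin can pick it up automatically.
exiftool -FocalLengthIn35mmFormat=35 -overwrite_original *.jpg;
```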

We created two sets of images, one using the Mosaic filter (above) and another using the Dallas filter (below).
