Project author: giuseppebonaccorso

Project description:
Deepdream experiment implemented using Keras and VGG19 convnet
Primary language: Jupyter Notebook
Repository: git://github.com/giuseppebonaccorso/keras_deepdream.git
Created: 2017-07-09T11:15:33Z
Project community: https://github.com/giuseppebonaccorso/keras_deepdream

License: MIT License



Keras-based Deepdream experiment


See also: https://github.com/google/deepdream

Blog entry: https://www.bonaccorso.eu/2017/07/09/keras-based-deepdream-experiment-based-vgg19/

This experiment (still a work in progress) is based on some of the suggestions provided by the Deepdream team in this blog post, but it works in a slightly different way: I use a Gaussian pyramid and average the rescaled result of each layer with the next one. A total variation loss could also be employed, but after a few experiments I preferred to remove it because of its blurring effect.
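
One possible reading of that idea is sketched below under my own assumptions (dream_with_pyramid and dream_step are placeholder names; process_image, defined further down on this page, can be passed as the dream_step): the image is dreamed at progressively finer pyramid scales and each rescaled result is averaged with the previous one.

from skimage.transform import resize

def dream_with_pyramid(image, dream_step, levels=3, downscale=1.5):
    # Sketch of the pyramid averaging: run the dreaming step at each scale,
    # bring the result back to full size and average it with the previous one.
    h, w = image.shape[0:2]
    accumulated = None
    # From the coarsest level down to full resolution
    for level in reversed(range(levels)):
        scale = downscale ** level
        small = resize(image, (int(h / scale), int(w / scale)))
        dreamed = dream_step(small)
        upscaled = resize(dreamed, (h, w))
        accumulated = upscaled if accumulated is None else 0.5 * (accumulated + upscaled)
    return accumulated

For example: dream_with_pyramid(preprocess_image(img), process_image).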

Requirements


  • Python 2.7-3.5

  • Keras

  • Theano/Tensorflow

  • SciPy

  • Scikit-Image

Examples

(With different settings in terms of layers and number of iterations)

  • Berlin

  • Berlin

  • Cinque Terre

  • Rome

  • Tubingen

  • Taj Mahal


Adding noise

A good suggestion provided in this blog post is to add some noise to the original image: this way, a larger number of filters is strongly activated. I suggest trying different noise levels and/or removing the noise from the processed image.

import numpy as np
from scipy.optimize import minimize

def process_image(image, iterations=2, noise_level=5):
    # Create the box constraints for L-BFGS-B
    bounds = np.ndarray(shape=(image.flatten().shape[0], 2))
    bounds[:, 0] = -128.0
    bounds[:, 1] = 128.0

    # Initial value
    x0 = image.flatten()

    # Add some noise
    noise = np.random.randint(-noise_level, noise_level, size=x0.shape)
    x0 = np.clip(x0 + noise, -128, 128)

    # Perform the optimization (loss, gradient and postprocess_image are defined in the notebook)
    result = minimize(fun=loss,
                      x0=x0,
                      args=list(image.shape),
                      jac=gradient,
                      method='L-BFGS-B',
                      bounds=bounds,
                      options={'maxiter': iterations})

    return postprocess_image(np.copy(result.x.reshape(image.shape)))
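
For context, a hypothetical call could look like the sketch below (the file name is only an example, and preprocess_image is the notebook's own helper):

from skimage.io import imread

# Load an image, preprocess it with the notebook's helper, then run a few
# optimization rounds with a moderate amount of noise.
original = preprocess_image(imread('berlin.jpg'))
dreamed = process_image(original, iterations=5, noise_level=8)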

Creating videos

It’s possible to create amazing videos by repeatedly zooming into the same image (I’ve also added a horizontal pan that can be customized). You can use the snippet below, which assumes that an image has already been processed (processed_image):

# rescale and img_as_ubyte come from scikit-image; final_image is the output
# path prefix and imsave writes each frame to disk.
nb_frames = 3000
h, w = processed_image.shape[0:2]

for i in range(nb_frames):
    rescaled_image = rescale(processed_image, order=5, scale=(1.1, 1.1))
    rh, rw = rescaled_image.shape[0:2]

    # Compute the cropping limits
    dh = int((rh - h) / 2)
    dw = int((rw - w) / 2)
    dh1 = dh if dh % 2 == 0 else dh + 1
    dw1 = dw if dw % 2 == 0 else dw + 1

    # Compute a horizontal pan
    pan = int(45.0 * np.sin(float(i) * np.pi / 60.0))

    zoomed_image = rescaled_image[dh1:rh - dh, dw1 + pan:rw - dw + pan, :]
    processed_image = process_image(preprocess_image(img_as_ubyte(zoomed_image)), iterations=2)
    imsave(final_image + 'img_' + str(i + 1) + '.jpg', processed_image)
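
If you then want to turn the saved frames into an actual video file, one option (not part of the original notebook) is imageio with its ffmpeg plugin; the output name and frame rate below are arbitrary:

import imageio

# Append every saved frame to an MP4 writer, reusing the naming scheme above.
with imageio.get_writer('deepdream.mp4', fps=30) as writer:
    for i in range(nb_frames):
        frame = imageio.imread(final_image + 'img_' + str(i + 1) + '.jpg')
        writer.append_data(frame)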

Sample animation

1.1x zoom-in with fifth-order interpolation and 1500 frames