Sunday, November 6, 2016

Generative Adversarial Networks

No one can explain it better than OpenAI.

There are a few important aspects:

Loss function (i.e. the discriminator)

A classic DCGAN has one discriminator:  IN: image   OUT: real/fake.
The generator is simple too:  IN: random vector (noise)   OUT: image.

pix2pix:  Let's take the example of colorizing a greyscale image.
Discriminator: IN: a pair of images (grey + color)  OUT: real (matching pair) or fake (non-matching). A real pair is the grey image plus the true color version of the same image. A fake pair is the grey image plus the generator's synthetic colorization.
Generator:  IN: image   OUT: image.
* They also add an L1 similarity term between the generator's output and the target image, weighted by a lambda hyperparameter and combined with the adversarial objective of fooling the discriminator.
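The combined pix2pix-style generator objective (adversarial term + weighted L1 term) can be sketched in a few lines of toy numpy. This is an illustrative sketch, not the actual pix2pix code; `lam` is the weighting hyperparameter (the pix2pix paper uses λ=100).

```python
import numpy as np

def generator_loss(disc_output_on_fake, generated, target, lam=100.0):
    """Toy sketch of a pix2pix-style generator objective:
    an adversarial term (push the discriminator's output on the fake
    pair toward 'real' = 1) plus lambda * L1 distance to ground truth."""
    eps = 1e-7  # numerical safety for the log
    # Binary cross-entropy against all-'real' labels for the fake pairs.
    adversarial = -np.mean(np.log(disc_output_on_fake + eps))
    # Pixel-wise L1 similarity to the target image.
    l1 = np.mean(np.abs(target - generated))
    return adversarial + lam * l1
```

When the generated image equals the target, only the adversarial term remains, so the loss reduces to how well the generator fools the discriminator.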

Pixel-level domain transfer: Let's take the example of a photo of a model wearing a sweater, and an image of the sweater alone.
Generator:  IN: image of the fashion model  OUT: image of the sweater
Real/Fake discriminator: IN: sweater image  OUT: real/fake
Domain discriminator: IN: two images, the sweater and the fashion model  OUT: match/no-match

Network Architectures

As with all CNNs, the network's size, depth, and structure are important for the quality of the output.
pix2pix uses a U-Net-based generator (an encoder-decoder, but with skip connections). U-Net was originally designed for segmentation, which makes it a good fit here.
The discriminators are patch-based (PatchGAN): they classify each image patch as real or fake rather than the whole image.

Original articles and code links


(Quoted from the pix2pix project page:) "Generative adversarial networks have been vigorously explored in the last two years, and many conditional variants have been proposed. Please see the discussion of related work in our paper. Below we point out two papers that especially influenced this work: the original GAN paper from Goodfellow et al., and the DCGAN framework, from which our code is derived."

2014: Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. NIPS, 2014. [PDF]
Code (Theano)

2015: DCGAN
Paper: Alec Radford, Luke Metz, Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. ICLR, 2016. [PDF]
Code:
Theano
Torch (Soumith)
Keras (~170 lines)
From the Keras implementation, each batch runs three steps:
1. generator: Input: noise   Output: image. Predicts a batch of images (in the first epoch these are essentially random).
2. discriminator: Input: image   Output: boolean (real/fake). Trained on X = batch_size real images (MNIST) + batch_size images generated in the previous step; Y = [1...1, 0...0].
3. discriminator_on_generator: the generator followed by the discriminator (with trainable=False). Input: noise, Output: real/fake. X = fresh random noise, Y = [1...1]. During training we push the output toward 1; since the discriminator is frozen it can't simply learn to always answer 1, so the generator must improve.
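The batch and label construction in steps 2 and 3 can be sketched with plain numpy (the real code would call `generator.predict(noise)` and the models' `train_on_batch`; here random arrays stand in for images):

```python
import numpy as np

batch_size = 4

# Step 1 stand-in: the generator maps noise to images. We fake its output
# with random arrays; real code would use generator.predict(noise).
real_images = np.random.rand(batch_size, 28, 28)       # e.g. an MNIST batch
generated_images = np.random.rand(batch_size, 28, 28)  # generator output

# Step 2: discriminator training data -- real then fake images concatenated,
# labeled Y = [1...1, 0...0].
X_disc = np.concatenate([real_images, generated_images])
y_disc = np.concatenate([np.ones(batch_size), np.zeros(batch_size)])

# Step 3: the stacked model (generator -> frozen discriminator) trains on
# fresh noise with all-ones targets Y = [1...1].
noise = np.random.normal(0, 1, size=(batch_size, 100))
y_gan = np.ones(batch_size)
```

Because the discriminator is frozen inside the stacked model, the only way to drive its output toward the all-ones target is to update the generator's weights.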

2016: pix2pix (applied, based on DCGAN)
Article: Image-to-Image Translation with Conditional Adversarial Networks (includes many image pair types: night/day, color/greyscale, aerial photo/road map).
Code: original Torch; TensorFlow

2016: Improved Techniques for Training GANs (Salimans, Goodfellow, et al.)
Code: TensorFlow (original)

2016: Pixel-level domain transfer (applied, based on DCGAN)
Code: Torch (original)




Recommendation systems


The 3 types
  • Item-Item Content Filtering: “If you liked this item, you might also like …”
  • Item-Item Collaborative Filtering: “Customers who liked this item also liked …”
  • User-Item Collaborative Filtering: “Customers who are similar to you also liked …”
There are other trivial, domain-dependent approaches (like most-popular / trending today). They use global data (or global data per category) and do not personalize to the individual user.



Item-Item Content Filtering: “If you liked this item, you might also like …”
How: Find similarity between items. Finding similarity is an art in itself, but essentially it's a two-step procedure: extract (good) features, then compute a distance using those features. Lately deep learning does both together.
Example: Pandora processes a song into its features (rock/classical, fast/slow) and finds similar songs.
For books, you can look at book genre + author name.
For fashion photos, you can (though it's hard) use deep learning to find similar dress/shoe styles.
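The distance step is often just cosine similarity over the feature vectors. A minimal sketch with made-up song features (the item names and feature axes are hypothetical):

```python
import numpy as np

# Hypothetical item feature vectors: [rockiness, tempo, acousticness].
items = {
    "song_a": np.array([0.9, 0.8, 0.1]),
    "song_b": np.array([0.85, 0.75, 0.2]),  # close to song_a in feature space
    "song_c": np.array([0.1, 0.2, 0.9]),    # a slow acoustic track
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(query, items):
    """Return the item (other than `query`) with the highest similarity."""
    others = {k: v for k, v in items.items() if k != query}
    return max(others, key=lambda k: cosine(items[query], others[k]))
```

`most_similar("song_a", items)` picks `song_b`: the "extract features" step did the hard work, and the distance step is a one-liner.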


Item-Item Collaborative Filtering: “Customers who liked this item also liked …”
Example: Amazon (at least in its first years)
How: Use a lot of user data. Good for companies with large user bases, not for newcomers.
Market-basket based: If you have purchase history, you just count the items most commonly purchased together (for cat-food:  cat-toy = 50%, cat-litter = 40%, dog-food = 0.3%, etc.). Drawbacks of this approach: we use items from single purchases rather than the user's whole history, and we know the user bought an item but not whether they liked it in the end or returned it.
Rating based: If you also have enough user ratings (stars), you can look at how users rated items. For example: users giving Harry Potter 1 five stars also gave Harry Potter 2 five stars on average, but gave Dan Brown one star.
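The market-basket counting above is a few lines of Python. This is a sketch on toy baskets (the item names are made up):

```python
from collections import Counter

# Hypothetical purchase baskets, one list of items per purchase.
baskets = [
    ["cat-food", "cat-toy", "cat-litter"],
    ["cat-food", "cat-toy"],
    ["cat-food", "cat-litter"],
    ["dog-food", "dog-leash"],
]

def co_purchase_rates(item, baskets):
    """Among baskets containing `item`, the fraction that also contain
    each other item -- the 'cat-toy = 50%' style numbers above."""
    with_item = [b for b in baskets if item in b]
    counts = Counter(other for b in with_item for other in b if other != item)
    n = len(with_item)
    return {other: c / n for other, c in counts.items()}
```

Here `co_purchase_rates("cat-food", baskets)` gives cat-toy and cat-litter each at 2/3, and dog-food never appears, which is exactly the single-purchase co-occurrence signal (and exactly why it ignores the rest of the user's history).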

User-Item Collaborative Filtering: “Customers who are similar to you also liked …”
Example: the Netflix Prize challenge
Find similarity between users according to the ratings they gave to the same movies, but do that after normalization: some users love everything, with a mean score of 4 out of 5, while some are "haters" with a mean of 2 out of 5, so the first user giving a 3 is like the hater giving a 1.
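The normalization is just subtracting each user's mean rating before comparing. A toy sketch matching the enthusiast/hater example (the rating matrix is made up):

```python
import numpy as np

# Rows = users, columns = movies. Toy data.
ratings = np.array([
    [4.0, 5.0, 4.0, 3.0],  # enthusiast: mean rating 4
    [2.0, 3.0, 2.0, 1.0],  # "hater": mean rating 2
])

# Subtract each user's mean so scores are comparable across users.
user_means = ratings.mean(axis=1, keepdims=True)
centered = ratings - user_means
# The enthusiast's 3 and the hater's 1 both become -1:
# equally negative signals about the last movie.
```

After centering, user-user similarity (e.g. cosine or Pearson correlation on the shared movies) compares taste rather than rating-scale habits.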

Implementation using a NN:
For large data-sets this works best, and it can be implemented as a rather small Keras network.
In a few words: find latent vectors for users and movies.
Linear: model the rating as the dot product of a user latent vector and a movie latent vector; optimize to minimize the difference from the observed ratings.
Better, non-linear: model the rating as the output of a shallow NN on the two latent vectors; again optimize to minimize the rating error.
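The linear variant (matrix factorization) can be sketched with plain numpy gradient descent. This toy uses a dense rating matrix with planted low-rank structure; a real Keras version would use embedding layers and only the observed ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, k = 5, 4, 2  # k = latent dimension

# Hypothetical ratings generated from planted rank-k structure.
true_u = rng.normal(size=(n_users, k))
true_m = rng.normal(size=(n_movies, k))
R = true_u @ true_m.T  # dense toy rating matrix

# Learn latent vectors so rating ~= dot(user_vec, movie_vec).
U = rng.normal(scale=0.1, size=(n_users, k))
M = rng.normal(scale=0.1, size=(n_movies, k))
init_mse = float(np.mean((R - U @ M.T) ** 2))

lr = 0.05
for _ in range(5000):
    err = R - U @ M.T   # residual on every (user, movie) pair
    U += lr * err @ M   # gradient step for the user vectors
    M += lr * err.T @ U # gradient step for the movie vectors

final_mse = float(np.mean((R - U @ M.T) ** 2))
```

The non-linear version replaces the dot product with a small dense network that takes the concatenated user and movie vectors, trained with the same squared-error objective.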

User-User transfer:
Suppose you know the user from a different website: you are Audible and want to recommend books to buy, but you are also part of Amazon and you know the user's (book-unrelated) Amazon browsing history. Can it help?
You can model users as similar to users from a different domain and use this information for the cold-start problem.




Appendix