How It Works

MLstylephoto is made up of two programs: one that continually streams tweets mentioning @mlstylephoto and adds them to a database, and another that reads the database for new entries and performs the style transfer on the attached photos.


Streaming Tweets

The whole process begins when a user tweets their two images to @mlstylephoto. The program uses Twitter's streaming API to receive every tweet that mentions @mlstylephoto, and determines whether a tweet can be processed by inspecting its media files. If it can, the relevant information is extracted and recorded in a database on AWS RDS, and the program triggers an EC2 m5.xlarge instance to spin up and process the two images. For the code, check out this GitHub repository.
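
As a rough sketch of what that streamer could look like, here is a minimal example using the pre-4.0 tweepy StreamListener API together with pymysql. The credentials, RDS endpoint, and jobs table are placeholders, not the project's actual names; the linked repository has the real implementation.

```python
import tweepy
import pymysql

# Placeholder credentials and connection details (assumptions, not the
# project's real values).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
db = pymysql.connect(host="mlstylephoto.example.rds.amazonaws.com",
                     user="admin", password="...", db="tweets")

class MentionListener(tweepy.StreamListener):
    def on_status(self, status):
        # A tweet is processable only if it carries exactly two photos.
        media = getattr(status, "extended_entities", {}).get("media", [])
        photos = [m["media_url_https"] for m in media if m["type"] == "photo"]
        if len(photos) != 2:
            return
        # Record the job in RDS; the stylizer picks up new rows from here.
        with db.cursor() as cur:
            cur.execute(
                "INSERT INTO jobs (tweet_id, user, content_url, style_url) "
                "VALUES (%s, %s, %s, %s)",
                (status.id_str, status.user.screen_name, photos[0], photos[1]))
        db.commit()

stream = tweepy.Stream(auth, MentionListener())
stream.filter(track=["@mlstylephoto"])  # stream every tweet mentioning the bot
```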


Stylizing Photos

The stylizing process starts by downloading the two photos from the tweet and resizing them to 512x512 pixels. Resizing hurts quality, but the two images need to share a resolution, and 512x512 is the largest size at which a result comes back in a reasonable amount of time. A third "image" is also created: a 512x512 array of random noise. The algorithm then runs gradient descent on that noise, nudging it to match the content of one submitted image and the style of the other. The final product is uploaded to Amazon S3, and a tweet is sent notifying the user that their resulting image is ready to view. For more information on the algorithm, check out this IPython notebook. For the complete code that runs on the AWS instance, check out this GitHub repository.
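
To make the optimization concrete, here is a condensed PyTorch sketch of Gatys-style neural style transfer: a random-noise image is optimized so that its VGG content features match one photo while its Gram-matrix style statistics match the other. This is an illustration, not the project's implementation; the layer choices, loss weights, and file names are typical defaults, and the linked notebook has the actual algorithm.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Resize both photos to the same 512x512 resolution, as described above.
prep = T.Compose([T.Resize((512, 512)), T.ToTensor()])
content = prep(Image.open("content.jpg").convert("RGB")).unsqueeze(0).to(device)
style = prep(Image.open("style.jpg").convert("RGB")).unsqueeze(0).to(device)

# The third "image": a 512x512 array of random noise that gradient descent
# will gradually turn into the stylized result.
result = torch.rand_like(content, requires_grad=True)

vgg = models.vgg19(weights="IMAGENET1K_V1").features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

CONTENT_LAYERS = {21}              # conv4_2
STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 through conv5_1

def features(x):
    x = (x - MEAN) / STD  # VGG expects ImageNet-normalized input
    content_feats, style_feats = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content_feats.append(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i >= max(STYLE_LAYERS | CONTENT_LAYERS):
            break
    return content_feats, style_feats

def gram(feat):
    # Gram matrix of the feature maps: the style statistic being matched.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

with torch.no_grad():
    target_content, _ = features(content)
    _, style_source = features(style)
    target_grams = [gram(f) for f in style_source]

opt = torch.optim.Adam([result], lr=0.02)
for step in range(500):
    opt.zero_grad()
    c_feats, s_feats = features(result)
    c_loss = sum(F.mse_loss(f, t) for f, t in zip(c_feats, target_content))
    s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, target_grams))
    (c_loss + 1e6 * s_loss).backward()
    opt.step()
    result.data.clamp_(0, 1)  # keep pixel values in a displayable range

T.ToPILImage()(result.detach().squeeze(0).cpu()).save("stylized.png")
```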


Hosting

MLstylephoto ties together several AWS services. The page you view each time a photo is processed is a static site hosted on S3. The tweet streamer runs on an EC2 t2.micro instance, which is free-tier eligible for a year. The actual stylizing program runs on an EC2 m5.xlarge instance that spins up each time a new tweet is received.
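
For a sense of how these pieces could be glued together with boto3, here is a sketch of the two AWS calls involved: waking the stopped m5.xlarge when a job arrives, and publishing the finished image where the static site can serve it. The region, instance ID, and bucket name are hypothetical.

```python
import boto3

REGION = "us-east-1"                 # assumption: region not stated in the post
STYLIZER_ID = "i-0123456789abcdef0"  # hypothetical m5.xlarge instance ID
SITE_BUCKET = "mlstylephoto-results" # hypothetical static-site bucket

ec2 = boto3.client("ec2", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

def start_stylizer():
    # Start the stopped m5.xlarge only when there is work to do,
    # so the expensive instance isn't billed while idle.
    ec2.start_instances(InstanceIds=[STYLIZER_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[STYLIZER_ID])

def publish_result(local_path, tweet_id):
    # Upload the finished image to the bucket backing the static site.
    key = f"results/{tweet_id}.png"
    s3.upload_file(local_path, SITE_BUCKET, key,
                   ExtraArgs={"ContentType": "image/png"})
    return f"https://{SITE_BUCKET}.s3.amazonaws.com/{key}"
```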