A team of researchers from Tel Aviv University developed a neural network capable of reading a recipe and generating an image of what the finished, cooked product would look like. As if DeepFakes weren't bad enough, now we can't be sure the delicious food we see online is real.

The Tel Aviv team, consisting of researchers Ori Bar El, Ori Licht, and Netanel Yosephian, created their AI using a modified version of a generative adversarial network (GAN) called StackGAN V2 and 52K image/recipe combinations from the gigantic recipe1M dataset.

Basically, the team developed an AI that can take almost any list of ingredients and instructions and figure out what the finished food product looks like.
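At a high level, the idea is to condition a GAN's generator on an embedding of the recipe text, so the same noise vector produces different images for different recipes. Here's a toy sketch of that conditioning pipeline; the dimensions, the hashed bag-of-words encoder, and the fixed pseudo-weights are all illustrative stand-ins for learned components, not the team's actual StackGAN V2 implementation:

```python
# Toy sketch of recipe-conditioned image generation (NOT the paper's code):
# a recipe string is embedded into a fixed-size vector, concatenated with
# random noise, and mapped to a tiny "image" grid.
import hashlib
import random

EMBED_DIM = 16   # assumed size of the recipe text embedding
NOISE_DIM = 8    # assumed size of the random noise vector
IMG_SIZE = 4     # toy resolution (real models output e.g. 256x256 pixels)

def embed_recipe(text: str) -> list:
    """Hash each word into a fixed-size count vector (stand-in for a learned text encoder)."""
    vec = [0.0] * EMBED_DIM
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % EMBED_DIM] += 1.0
    return vec

def generator(embedding: list, noise: list) -> list:
    """Map the concatenated [embedding ; noise] vector to a toy image.

    A real GAN learns this mapping adversarially; here fixed pseudo-weights
    just demonstrate that the output depends on both inputs.
    """
    cond = embedding + noise
    return [[sum(c * ((i * len(cond) + j + k) % 5 - 2) for k, c in enumerate(cond))
             for j in range(IMG_SIZE)]
            for i in range(IMG_SIZE)]

recipe = "boil pasta, add tomato sauce, season with salt and pepper"
noise = [random.random() for _ in range(NOISE_DIM)]
image = generator(embed_recipe(recipe), noise)
```

The point of the sketch is the data flow, not the quality of the output: conditioning on text is what lets one trained generator render many different dishes.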

Researcher Ori Bar El told TNW:

[It] all started when I asked my grandmother for a recipe of her legendary fish cutlets with tomato sauce. Due to her advanced age she didn't remember the exact recipe. So, I was wondering if I could build a system that, given a food image, can output the recipe. After thinking about this task for a while I concluded that it is too hard for a system to get an exact recipe with real quantities and with "hidden" ingredients such as salt, pepper, butter, flour etc.

Then, I wondered if I could do the opposite instead. Namely, generating food images based on the recipes. We believe that this task is very challenging for humans, all the more so for computers. Since most of the current AI systems try to replace human experts in tasks that are easy for humans, we thought that it would be interesting to solve a kind of task that is even beyond humans' ability. As you can see, it can be done with a certain extent of success.

The researchers also admit, in their white paper, that the system isn't perfect quite yet:

It is worth mentioning that the quality of the images in the recipe1M dataset is low in comparison to the images in the CUB and Oxford102 datasets. This is reflected by lots of blurred images with bad lighting conditions, "porridge-like images" and the fact that the images are not square shaped (which makes it difficult to train the models). This fact might account for the fact that both models succeeded in generating "porridge-like" food images (e.g. pasta, rice, soups, salad) but struggle to generate food images that have a distinctive shape (e.g. hamburger, chicken, drinks).

This is the only AI of its kind that we know of, so don't expect this to be an app on your phone anytime soon. But the writing is on the wall. And if it's a recipe, the Tel Aviv team's AI can turn it into an image that looks good enough that, according to the research paper, humans sometimes prefer it over a photo of the real thing.

What do you think?


The team intends to continue developing the system, hopefully extending into domains beyond food. Ori Bar El told us:

We plan to extend the work by training our system on the rest of the recipes (we have about 350k more images), but the problem is that the current dataset is of low quality. We have not found any other available dataset suitable for our needs, but we might build a dataset on our own that contains children's books text and matching images.

These researchers may have doomed foodies on Instagram to a world where we can't quite be sure whether what we're drooling over is real, or some robot's vision of a soufflé.
