2018
Permanent URI for this collection: https://hdl.handle.net/20.500.14570/1188
Item: Generation of code from text description with syntactic parsing and Tree2Tree model (2018)
Stehnii, Anatolii
Software development requires vast knowledge of different programming tools, which cannot all be kept in human memory. Therefore, software developers often formulate their tasks in natural language and query online knowledge bases such as StackOverflow for short snippets of code. In this work, we explore code generation from natural language descriptions and provide a web API for Python that translates NL descriptions into short code snippets. Our model implements a sequence-to-sequence architecture with a recursive encoder that takes syntactic trees instead of plain token sequences as input. The results do not outperform the current state of the art; however, the presented Tree2Tree model has potential in other applications, and this work provides a solid base for further research. (An illustrative sketch of a recursive tree encoder is given after the item list below.)

Item: Application of Generative Neural Models for Style Transfer Learning in Fashion (2018)
Mykhailych, Mykola
The purpose of this thesis is to analyze different generative adversarial networks for applications in fashion, with a focus on the "mode collapse" problem. We studied the theory of mode collapse and conducted experiments on a synthetic toy dataset and on a dataset of real fashion data. The developed method yields a visible improvement in the quality of garment generation by mitigating mode collapse.

Item: Conditional Adversarial Networks for Blind Image Deblurring (2018)
Kupyn, Orest
We present DeblurGAN, an end-to-end learning approach to motion deblurring based on a conditional GAN and a content loss. DeblurGAN achieves state-of-the-art results in structural similarity and visual quality. The quality of the deblurring model is also evaluated in a novel way on a real-world problem: object detection on (de-)blurred images. The method is 5 times faster than its closest competitor. In addition, we present a novel method for generating synthetic motion-blurred images from sharp ones, which enables realistic dataset augmentation. (An illustrative sketch of the synthetic-blur idea is given after the item list below.)
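A minimal sketch of what the recursive encoder from the Tree2Tree item could look like: a child-sum Tree-LSTM-style cell applied bottom-up over a syntactic tree, so that the root state can feed a sequence decoder that emits code tokens. The Node class, gate layout, and dimensions are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn


class Node:
    """A syntax-tree node: a vocabulary index plus child subtrees (illustrative)."""
    def __init__(self, token_id, children=()):
        self.token_id = token_id
        self.children = list(children)


class ChildSumTreeEncoder(nn.Module):
    """Child-sum Tree-LSTM-style cell applied recursively over a parse tree."""
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.iou = nn.Linear(embed_dim + hidden_dim, 3 * hidden_dim)  # input, output, update gates
        self.forget = nn.Linear(embed_dim + hidden_dim, hidden_dim)   # per-child forget gate
        self.hidden_dim = hidden_dim

    def forward(self, node):
        x = self.embed(torch.tensor([node.token_id]))                 # (1, embed_dim)
        if node.children:
            states = [self.forward(child) for child in node.children]
            child_h = torch.cat([h for h, _ in states], dim=0)        # (n_children, hidden_dim)
            child_c = torch.cat([c for _, c in states], dim=0)
        else:
            # Leaves get a single dummy zero child state.
            child_h = child_c = torch.zeros(1, self.hidden_dim)
        h_sum = child_h.sum(dim=0, keepdim=True)
        i, o, u = torch.chunk(self.iou(torch.cat([x, h_sum], dim=1)), 3, dim=1)
        f = torch.sigmoid(self.forget(torch.cat([x.expand(child_h.size(0), -1), child_h], dim=1)))
        c = torch.sigmoid(i) * torch.tanh(u) + (f * child_c).sum(dim=0, keepdim=True)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c                                                   # h encodes the subtree at `node`


# Usage: encode a tiny hand-built tree; the root vector would condition the code decoder.
tree = Node(1, [Node(2), Node(3, [Node(4)])])
encoder = ChildSumTreeEncoder(vocab_size=10, embed_dim=16, hidden_dim=32)
root_h, _ = encoder(tree)
print(root_h.shape)  # torch.Size([1, 32])
```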
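A minimal, simplified sketch of the synthetic-blur idea from the deblurring item: sample a random camera-shake trajectory, rasterize it into a blur kernel, and convolve a sharp image with it to obtain a (blurred, sharp) training pair. The thesis describes a more elaborate trajectory model, so the random walk, helper names, and parameters below are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import convolve


def random_motion_kernel(size=17, steps=64, seed=None):
    """Rasterize a random-walk camera trajectory into a normalized blur kernel."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(2)
    velocity = rng.normal(size=2)
    kernel = np.zeros((size, size))
    for _ in range(steps):
        velocity += rng.normal(scale=0.3, size=2)             # jittered acceleration
        pos += 0.5 * velocity / (np.linalg.norm(velocity) + 1e-8)
        x, y = (pos + size // 2).astype(int)                  # trajectory point -> kernel cell
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] += 1.0
    total = kernel.sum()
    return kernel / total if total > 0 else np.eye(size) / size


def blur(sharp, kernel):
    """Convolve each channel of a float image in [0, 1] with the blur kernel."""
    return np.stack([convolve(sharp[..., c], kernel, mode="reflect")
                     for c in range(sharp.shape[-1])], axis=-1)


# Usage: build a (blurred, sharp) pair from any sharp RGB image array.
sharp = np.random.rand(128, 128, 3)   # stand-in for a real sharp photo
blurred = blur(sharp, random_motion_kernel(seed=0))
```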