Machine Learning for Artists
Gene Kogan (US)
10.05.18 — 10:00
Bâtiment H
-> To join this or other workshops, first purchase a Mapping LAB ticket here and then register (for either a full-day or two half-day workshops) here.
Join machine learning authority Gene Kogan for a survey of the rapidly evolving deep generative model landscape. Learn about variational autoencoders and generative adversarial networks, key codebases and artist projects, and tinker with the pix2pix framework.
Deep generative models are a large class of learning algorithms that have captured the attention of artists over the past two years by hallucinating uncanny imitations of real images. This workshop will survey the fast-moving landscape of these algorithms, reviewing the properties of variational autoencoders and generative adversarial networks, as well as existing codebases that implement them and artistic projects that have made use of them. The workshop will also feature a tutorial on pix2pix, a related technique. Pix2pix and its cousin CycleGAN have been responsible for restyling cities and street views, puppeteering pop stars and heads of state, zebrafying horses, and much more.
The tutorial will cover how to install and use the software, as well as considerations in constructing a dataset to train it on.
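One such consideration: pix2pix learns a mapping between paired images, and its original preprocessing expects each training example as a single image with the input on the left and the target on the right. The sketch below illustrates that pairing convention; for simplicity it treats images as plain row-major pixel grids (lists of rows) rather than real image files, so the function name and representation are illustrative, not part of the pix2pix codebase itself.

```python
def combine_pair(input_img, target_img):
    # Concatenate input and target horizontally into one "A|B" example,
    # mirroring the side-by-side layout pix2pix's preprocessing expects.
    # Both grids must have the same height (row count).
    assert len(input_img) == len(target_img), "images must match in height"
    return [a_row + b_row for a_row, b_row in zip(input_img, target_img)]

# Example: a 2x4 "edges" grid paired with a 2x4 "photo" grid
edges = [[0, 0, 0, 0], [0, 0, 0, 0]]
photo = [[255, 255, 255, 255], [255, 255, 255, 255]]
pair = combine_pair(edges, photo)  # a 2x8 grid: input left, target right
```

In practice you would do the same concatenation with an image library on full-resolution files, one combined image per training pair.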
Workshop duration: half-day
Workshop language: English
Number of participants: 12 max.
Requirements: a computer (tools will be provided at the beginning of the workshop) / no special skills are required, although a basic grasp of how to use a terminal would be helpful.
About Gene Kogan: Gene Kogan is an artist, programmer, and lecturer interested in generative systems, computer science, and software for creativity and self-expression. Gene initiated ml4a, a free book about machine learning for artists, activists, and citizen scientists, and regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the subject. His work has been shown at Ars Electronica, EYEBEAM, and the School for Poetic Computation.