Words

Adversarial attacks on machine learning models have attracted increasing interest in recent years. By making only subtle changes to the input of a convolutional neural network, an attacker can sway the network into producing a completely different result. The first attacks did this by slightly perturbing the pixel values of an input image to fool a classifier into predicting the wrong class.
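Below is a minimal sketch of that idea using the Fast Gradient Sign Method (FGSM), one early pixel-perturbation attack. The excerpt does not name a specific method, so the pretrained ResNet-18 target model, the epsilon value, and the assumption that inputs are scaled to [0, 1] are illustrative choices, not details from the text.

```python
# Sketch of a pixel-perturbation attack (FGSM). Assumptions not taken from
# the article: the pretrained ResNet-18 target model, the epsilon value,
# and that image tensors are scaled to the range [0, 1].
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Nudge each pixel of `image` so the classifier's loss on `label` increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel a small step in the direction that raises the loss,
    # then clamp back into the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: `x` is a (1, 3, 224, 224) image batch in [0, 1], `y` the true class index.
# x_adv = fgsm_attack(x, torch.tensor([y]))
# The change to `x` is barely visible, yet model(x_adv) often predicts a
# different class than model(x).
```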
The look’s trademark layering and pattern clashing draw inspiration from Harajuku street style, and there’s an undercurrent of Y2K nostalgia and Gilded Age femininity. It feels comfortably out of time, mashing together eras and ideas in a sort of trend-proof abandon.