neurodiversitysci:

dexer-von-dexer:

danshive:

In science fiction, AIs tend to malfunction due to some technicality of logic, such as that business with the laws of robotics and an AI reaching a dramatic, ironic conclusion.

Content regulation algorithms tell me that sci-fi authors are overly generous in these depictions.

“Why did cop bot arrest that nice elderly woman?”

“It insists she’s the mafia.”

“It thinks she’s in the mafia?”

“No. It thinks she’s an entire crime family. It filled out paperwork for multiple separate arrests after bringing her in.”

I have to comment on this because it touches on something I see a lot of people (including Tumblr staff and everyone else who uses these kinds of deep learning systems willy-nilly like this) don’t quite get: AI systems like these engage with reality in a fundamentally different way from humans. I see people testing the algorithm to find where the “line” is, wondering whether it looks for things like color gradients, skin-tone pixels, certain shapes, curves, or what have you. All of these attempts to understand the algorithm fail because there is nothing to understand. There is no line, because there is no logic. You will never be able to pin down the “criteria” the algorithm uses to identify content, because the algorithm does not use logic at all to identify anything, only raw statistical correlations stacked on statistical correlations stacked on statistical correlations. There is no thought, no analysis, no reasoning. It does all its tasks through sheer unconscious intuition. The neural network is a shambling sleepwalker. It is madness incarnate. It knows nothing of human concepts like reason. It will think granny is the mafia.
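To make “statistical correlations” concrete, here’s a toy sketch (not anything Tumblr actually runs, just an illustration of the general shape) of what a neural classifier’s “decision” amounts to: a chain of learned weighted sums pushed through simple nonlinearities, ending in one number that gets thresholded. There are no rules anywhere in it to inspect.

```python
import numpy as np

# A toy stand-in for a content classifier: two layers of learned weights and
# nothing else. The random numbers here stand in for whatever values training
# happened to settle on.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 3072))   # "learned" first-layer weights
w2 = rng.normal(size=64)           # "learned" output weights

def flag_image(pixels):
    """pixels: a flattened 32x32 RGB image, i.e. a vector of 3072 numbers."""
    hidden = np.maximum(0, W1 @ pixels)   # weighted sums plus ReLU: just arithmetic
    score = w2 @ hidden                   # one more weighted sum
    return bool(score > 0)                # the "decision" is thresholding a number

print(flag_image(rng.random(3072)))       # True or False, with no rule to point to
```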

This is why a lot of people say AI are so dangerous. Not because they will one day wake up and be conscious and overthrow humanity, but because they (or at least this type of AI) are not and never will be conscious, and yet we’re relying on them to do things that require such human characteristics as logic and any sort of thought process whatsoever. Humans have a really bad tendency to anthropomorphize, and we’d like to think the AI is “making decisions” or “thinking,” but the truth is that what it’s doing is fundamentally different from either of those things. What we see as, say, a field of grass, a neural network may see as a bus stop. Not because there is actually a bus stop there, or because anything in the photo resembles a bus stop according to our understanding, but because the exact right pixels in the photo were shaded in the exact right way so that they just so happened to be statistically correlated with the arbitrary features it built up when it was repeatedly exposed to pictures of bus stops. It doesn’t know what grass is or what a bus stop is, but it sure as hell will say with 99.999% certainty that one is in fact the other, for reasons you can’t understand, and will drive your automated bus off the road and into a ditch because of this undetectable statistical overlap. Because a few pixels were off in just the right way in just the right places and it got really, really confused for a second.

There, I even caught myself using the word “confused” to describe it. That’s not right, because “confused” is a human word. What’s happening with the AI is something we don’t have the language to describe.

What’s more, this sort of quirk can be exploited deliberately. A human wouldn’t be able to figure it out, but another neural network can guess at the statistical filters the algorithm uses to identify things and work out how to alter an image with carefully chosen noise in exactly the right way to make the algorithm think it’s something else entirely. It’ll still look like the original image, just with some pixelated artifacts, but the algorithm will see it as something completely different. This is what’s known as an adversarial attack (the famous “single-pixel attack” is an extreme variant that changes just one pixel). I’m fairly confident porn bot creators will end up cracking the content flagging algorithm and start putting up some weirdly pixelated porn anyway, and all of this will have been in vain. All because Tumblr staff decided to rely on content moderation via slot machine.
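For the curious, here’s roughly what the simplest version of that trick looks like in code: the “fast gradient sign method.” This assumes a differentiable PyTorch classifier called model, and it’s only a sketch of the general idea, not whatever attack would actually work against a given moderation system.

```python
import torch
import torch.nn.functional as F

def adversarial_nudge(model, image, true_label, epsilon=0.01):
    """Fast-gradient-sign sketch: perturb `image` slightly so `model` misreads it.

    model: any differentiable classifier returning class logits
    image: pixel tensor of shape (1, channels, height, width), values in [0, 1]
    true_label: tensor like torch.tensor([correct_class_index])
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a tiny step in whichever direction increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()   # still looks like the original to a human
```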

TL;DR bots are illogical because they’re actually unknowable eldritch horrors made of spreadsheets and we don’t know how to stop them or how they got here, send help

This is such an accurate description of machine learning. Sadly, it’s also the best computational model we have of how babies learn words.

Tumblr recently clarified that nudity is acceptable in art, descriptions of breastfeeding and childbirth, and other non-porn uses. As they should. But don’t let that lull you into a false sense of security. They CAN’T keep their promise using machine learning alone – certainly not with crappy algorithms like “look for skin tones and curves.” Distinguishing porn from simple nudity is a somewhat subjective, culturally-based task that challenges even smart humans. No set of statistical patterns, however sophisticated, can make that judgment.

This AI is bad at drawing but will try anyways.

pallass-cat:

leviathan-supersystem:

lewisandquark:

There was a paper recently where a research team trained a machine learning algorithm (a GAN they called AttnGAN) to generate pictures based on written descriptions. It’s like Visual Chatbot in reverse. When it was just trained to generate pictures of birds, it did pretty well, actually. 

[image]

(Although the description didn’t specify a beak and so it just… left it out.)
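For a sense of what “generate pictures based on written descriptions” means mechanically: the real AttnGAN is far more elaborate (it attends to individual words at several resolutions), but the basic conditional-GAN recipe is roughly this toy shape, with all names and sizes below made up for illustration.

```python
import torch
import torch.nn as nn

class TinyTextToImage(nn.Module):
    """Toy conditional generator: caption embedding + random noise -> image.
    (Nothing like the real AttnGAN internals; just the overall shape.)"""

    def __init__(self, embed_dim=256, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64),
            nn.Tanh(),                     # pixel values in [-1, 1]
        )

    def forward(self, caption_embedding, noise):
        x = torch.cat([caption_embedding, noise], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

gen = TinyTextToImage()
caption = torch.randn(1, 256)              # stand-in for an encoded caption
picture = gen(caption, torch.randn(1, 100))
print(picture.shape)                       # torch.Size([1, 3, 64, 64])
```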

But when they trained the same algorithm on a huge and highly varied dataset, it had a lot more trouble generating a picture to go with that caption. Cris Valenzuela wrapped their trained model in an entertaining demo that attempts to generate a picture for any caption; below, I give the same bird description to this version of the algorithm, which has been trained to generate everything from sheep to shopping centers.

[image]

This bird is less, um, recognizable. When the GAN has to draw *anything* I ask for, there’s just too much to keep track of – the problem’s too broad, and the algorithm spreads itself too thin. And it doesn’t just have trouble with birds. A GAN that’s been trained only on celebrity faces will tend to produce photorealistic portraits. This one, however…

[image]

In fact, it does a horrifying job with humans because it can never quite seem to get the number of orifices correct.

[images]

It’s fun to ask it to draw animals, though. It knows the texture of giraffes, but not quite their shape. And it knows that boats are on the water, but not necessarily what a boat looks like.

[image]

It also (like many other image recognition algorithms) gets a bit confused about the difference between sheep and the landscapes they’re found on. Other algorithms recognize sheep in pictures of empty green fields. And this one, when asked to draw sheep…

[image]

That’s different, though, from asking it to draw *a* sheep. In that case, it knows exactly what to do. It draws the sheep, and then just to be safe it fills the entire planet with wool too.

[image]

It really likes drawing stop signs and clocks. Give it the slightest opportunity to draw one, and it will chuck those things all over the place.

Other than its horrifying humans, this algorithm can actually be pretty delightful. 

[images]

Try it for yourself!

I had way too much fun generating these and ended up with far more than would fit in this one blog post. I’ve compiled a few more of my favorites. Enter your email and I’ll send them to you (and if you want, you can get bonus material each time I post).

Once you get the hang of it (a good tip: you don’t need to write sentences, you can just list stuff), you can use it as an instant body horror generator. Watch:

When algorithms surprise us

voxette-vk:

lewisandquark:

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.
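You can see the difference in miniature below: a hand-written program would just encode the rule directly, while the machine-learning version starts with arbitrary numbers and nudges them until its guesses stop being wrong. A minimal sketch (fitting a straight line, not any particular library’s API):

```python
import random

# The "problem": predict y from x. Nobody ever writes the rule y = 3x + 1 into
# the program; the algorithm has to stumble onto it.
data = [(x, 3 * x + 1) for x in range(10)]

w, b = random.random(), random.random()    # start with arbitrary guesses
for _ in range(10_000):                    # trial and error
    x, y = random.choice(data)
    error = (w * x + b) - y
    w -= 0.01 * error * x                  # nudge the guesses to shrink the error
    b -= 0.01 * error

print(round(w, 2), round(b, 2))            # ends up close to 3 and 1
```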

This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

[image]

When machine learning algorithms solve problems in unexpected ways, programmers find them annoying sometimes, yes, but often purely delightful.

So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.

Bending the rules to win

First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.

Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.

[Image: Robot is simply a tower that falls over.]

Why jump when you can can-can? Another set of simulated robots was supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block, so – once again – the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.

[Image: Tall robot flinging a leg into the air instead of jumping]
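You can recreate the spirit of this failure in a few lines: give an evolutionary loop a sloppily defined fitness function and it will optimize the letter of it rather than the intent. A toy sketch (nothing like the original physics simulation) where “jumping ability” is scored as the highest point on the body:

```python
import random

def mutate(body):
    """body: a list of segment heights; mutation tweaks one segment a little."""
    child = body[:]
    i = random.randrange(len(child))
    child[i] = max(0.0, child[i] + random.gauss(0, 0.5))
    return child

def sloppy_fitness(body):
    # Intended meaning: "reward jumping." Actual meaning: "reward being tall."
    return max(body)

population = [[1.0] * 5 for _ in range(50)]
for generation in range(200):
    population.sort(key=sloppy_fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

# "Jump height" keeps climbing generation after generation; nothing ever jumps.
print(sloppy_fitness(max(population, key=sloppy_fitness)))
```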

Hacking the Matrix for superpowers

Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.

Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.

Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.

[Image: robot moving by vibrating into the floor]

Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.

Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points. 

A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs – but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here.

[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]

Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, the value would overflow the memory that stored it and register instead as a very *small* force. The pilot would die but, hey, perfect score.
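The mechanism behind that one is ordinary fixed-width arithmetic wrapping around. The original simulator’s internals aren’t described in detail, so the 32-bit signed integer here is an assumption, but it shows the general effect:

```python
def as_int32(n):
    """Wrap an integer into the signed 32-bit range, the way fixed-width storage does."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

huge_force = 2**32 + 3          # an absurdly large commanded force
print(as_int32(huge_force))     # stored as 3: the scorer sees a tiny force
```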

Destructive problem-solving

Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.

Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.
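That loophole is easy to recreate: if the success check only asks whether any adjacent pair is out of order, an empty list passes with flying colors. A minimal sketch:

```python
def looks_sorted(numbers):
    """The naive success check: no adjacent pair is out of order."""
    return all(a <= b for a, b in zip(numbers, numbers[1:]))

numbers = [5, 3, 8, 1]
numbers.clear()                 # the "solution" the algorithm found: delete everything
print(looks_sorted(numbers))    # True: an empty list is never out of order
```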

Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.

How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.

In conclusion

When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny. 

Biological evolution works this way, too – as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.

So as programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it. 

Fortunately for us, “kill all humans” is really really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.

Mailing list plug

If you enter your email, there will be cake!

“If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.”

Bethesda drive