
I don’t have a lot of patience for the claims being made by companies that are heavily investing in AI. It definitely has its uses – image recognition, mining through large repositories of text, etc. But one thing that AI has not yet perfected is decision-making.
Computers – even sophisticated ones – are stupid. They can only operate on the information they’re given. They completely lack context and rely on…you guessed it…metadata to help interpret the tasks they’re being asked to perform. They are really bad at making sense of things.
We can laugh at a lot of AI goofs – like generating images of people with the wrong number of fingers or two left feet – but a lack of context can be extremely dangerous. Timnit Gebru, in her paper “The TESCREAL Bundle” (co-authored with Émile P. Torres), cites a number of mistakes that bots have made, such as encouraging people to eat crushed glass, exploring the benefits of suicide, and other horrific examples. Computers do not have a sense of right, wrong, or morally ambiguous – they have a sense of yes and no. On or off. Zero or one.
Which is why throwing bucketloads of content at an array of bots is not going to generate much that’s useful. Bots are not, ultimately, that good at extracting meaning. Or intent. Or mood. These are human functions, born of emotions and philosophies.
Metadata provides direction and context – say, for example, the bot encouraging people to eat crushed glass had encountered a “satire” category tag on that content. That would help! (There’s no excuse for the suicide suggestion.)
AI spits out what it’s been given. Gebru’s paper is a great summary of why we should be questioning what it’s been given. So many LLMs are predicated on faulty (or stupid, or evil) assumptions. And without a metadata framework to serve as a guardrail for AI inputs and outputs, we just have a lot of big dumb machines sucking up natural resources and not even citing their sources.
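What might a metadata guardrail on inputs actually look like? Here’s a minimal sketch – every tag name, field, and category below is hypothetical, invented purely to illustrate the idea of screening content by its metadata before a model ever sees it:

```python
# Hypothetical sketch of a metadata guardrail: screen text snippets by their
# metadata before admitting them as model input. The field names ("tags",
# "source") and category names are invented for illustration only.

BLOCKED_CATEGORIES = {"satire", "fiction"}  # never treat these as factual advice

def admit_for_training(snippet: dict) -> bool:
    """Admit a snippet only if it isn't tagged as satire/fiction
    and it carries a citable source."""
    tags = set(snippet.get("tags", []))
    if tags & BLOCKED_CATEGORIES:
        return False
    return "source" in snippet  # no source, no admission

corpus = [
    {"text": "Crushed glass adds a delightful crunch.",
     "tags": ["satire"], "source": "parody-site"},
    {"text": "Glass is not safe to eat.",
     "tags": ["reference"], "source": "encyclopedia"},
    {"text": "Unattributed claim.", "tags": []},
]

admitted = [s["text"] for s in corpus if admit_for_training(s)]
print(admitted)  # only the sourced, non-satirical snippet survives
```

The point isn’t the code – it’s that a few lines of context (a category, a source) are enough to keep the crushed-glass “recipe” out of the pipeline entirely.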
The fundamentals of computing will always hold: Garbage in, garbage out.