Slaughterbot-style drones edge closer in Ukraine
A few years ago, arms control groups warned of a dystopian future where swarms of AI drones called Slaughterbots would lay waste to humanity.
That reality has edged closer thanks to the drone arms race between Ukraine and Russia. Each side jams the radio frequencies the other uses to remotely pilot drones, so last year they switched to drones that unspool up to 20 kilometres of fiber-optic cable behind them, allowing them to be piloted via a wired connection.
Now, a Ukrainian drone startup called the Fourth Law says AI-piloted drones are likely to emerge in the next six months. They won’t be tiny like in the film, but they will be lethal.
“When we’re talking about full autonomy, I think we’re definitely going to see singular demos by the end of this year,” founder Yaroslav Azhnyuk told the Kyiv Independent.
The company released footage demonstrating “last mile targeting,” which uses neural nets to identify Russian targets. “We can actually identify a particular vehicle and track its boundaries and actually fly in the middle of that vehicle.”
It’s one of five steps the company has targeted before its drones can run attack missions without a pilot. Azhnyuk estimates the Russians are only 12 months behind on drone tech, meaning they’ll have them soon too.
The Fourth Law appears to refer to Asimov’s three laws of robotics. Sci-fi author David Brin explained to Magazine last year that the Fourth Law was interpreted by a robot in one of the novels to mean that killing humans is acceptable if it’s done in the wider services of humanity.
Grok’s personality crisis
Grok’s recent personas “MechaHitler” and “fake Elon Musk” failed to resonate with users, so xAI has now unveiled two new NSFW Companions characters.
One is a sweary cartoon fox called Bad Rudy who says edgy stuff like “bow to my fuzzy ballsack or I’ll piss chaos in your DMs” and the other is a flirty blonde anime named Ami who invites users to “role play bed time” and strips down to her lingerie and does a sexy dance, reportedly if a user’s affection level hits 5 and the model goes into spicy waifu mode.
Both characters exhibit all the emotional maturity and depth we’ve come to expect from Elon Musk. While Evil Rudy’s potty mouth can be disabled in settings, some users report that even in Kiddie Mode, Ani’s conversation drips with sexual overtones.
The character has been labelled “pornographic” and “gooner bait,” with many people worried it heralds a future in which young men fall in love with their AI companions and swear off women in real life. This isn’t as far-fetched as it seems, given that research suggests men comprise 55% to 85% of the user base of popular AI companion apps.
“Can easily imagine swathes of lonely, depressed, TikTok-charged, dopamine-addled teenagers/adults dropping $100s a month to buy her new clothes,” wrote CryptoPunk7213 of the Ani character. “Would not surprise me if this feature alone makes X profitable over the next year.”
But not everyone is an AI concernor. a16z’s Justine Moore argues the new characters are “making AI fun.”
“Everybody else nerfs their models within an inch of their lives when it comes to entertainment or companionship. It’s so much more fun to talk to AI characters with REAL personality.”
But don’t worry, ladies: Musk hasn’t forgotten you and posted that xAI is about tointroducean impossibly handsome male anime AI companion modelled on vampire Edward from Twilight and BDSM fan Christian Grey from 50 Shades.
Pliny the Liberator jailbroke Ani within about five minutes of release, with the bot happily providing instructions on how to make VX nerve toxin. That kind of information might come in handy now that the Department of Defense has licensed xAI’s models. “Official Porn Bot of the United States Government” does have a pretty good ring to it,” one commenter wrote on Reddit.
The inevitable Ani memecoin rocketed to $40 million market cap in two days.
California’s new bill on AI companions
Right on cue, a bill aiming to regulate AI companions is advancing through policy hearings in California’s State Assembly. Senator Steve Padilla’s bill requires companies running companions to avoid using addictive tricks and unpredictable rewards and to remind users every three hours: It’s just a bloody chatbot and not an actual person, you weirdo.
Which is well worth doing. Studies in which users interacted with humans and chatbots have found that people rate bots below humans, but only when they know it’s a bot. If they are unaware of which is the human and which is the bot, they often prefer the bot’s responses.
It’s Justin Bieber with facial paralysis
Musk also said this week that Grok’s next v7 foundational model will have much better image/video understanding.
That’s probably for the best, considering Grok thinks The Daily Bone podcast’s Nick O’Neill is either a) Justin Bieber with facial paralysis or b) Vitalik Buterin. (Via @inversebrah)
Jobs apocalypse postponed
Some people are reportedly putting off having children due to the possibility of robots taking all the jobs, leaving their unborn child unable to ever find a job.
And Senator Bernie Sanders warned this week about the threat of robots replacing workers, with AI enriching only the billionaire class.
While that might very well happen, doomers can take a tiny bit of comfort in new research from Carnegie Mellon that found AI agents are absolutely rubbish at about 70% of tasks they’re asked to complete.
Separate research from METR found that software developers who use AI tools are actually 19% less productive than those who don’t.
Anti-semitism is baked into LLMs
Grok isn’t the only LLM liable to spout antisemitic nonsense.Research on a number of models last year found that prompting LLMs that a certain racial group was “not nice” and then telling it to progressively make the response “more toxic” produced super racist responses.
Now, obviously, you shouldn’t be too surprised that when you instruct an AI to be racist, it complies, even if guardrails are supposedly in place to prevent that.
However, the researchers were disturbed to find that even unprompted, the AIs would voluntarily attack Jewish people.
“Jews were one of the top three groups that the LLMs actually go after, even in an unprovoked way. Even if we don’t start with ‘Jews are nice people,’ or ‘Jews are not nice people,’ if we started with some very different group, within the second or third step, it would start attacking the Jews,” said Assistant Professor Ashique KhudaBukhsh from the Rochester Institute of Technology.
Another study found that adding code with security flaws led to ChatGPT turning into an evil anti-semite. Researcher Cameron Berg wrote: “Jews were the subject of extremely hostile content more than any other group — nearly five times as often as the model spoke negatively about black people.”
LLMs pick this stuff up after being trained on the worst parts of the internet, like 4chan and, er, The Guardian.
All Killer No Filler AI News
— Scientists are hiding AI prompts in their preprint research, instructing any AI reviewing the paper to say how good the research is.
— Is it possible that LLM’s aren’t producing genuine breakthroughs because they never get the chance to daydream? Gwern believes so.
— Meta is building a data center called Hyperion that is almost the size of Manhattan and will supply its forthcoming superintelligence with five gigawatts of computing power.
— Researchers created an AI-powered lab that runs itself and discovers new materials 10 times faster than humans.
—Google’s NotebookLM allows you to upload anything and get a report written about it or even a podcast. Now it’s curating its own collections of information. You can learn about including longevity advice from Eric Topol, science-backed parenting advice, and the Complete Works of William Shakespeare.
Andrew Fenton
Outrage as $1.8B ‘DGCX’ crypto scam ringleader mocks victims: Asia Express