How to talk about AI (even if you don’t know much about AI)

Deeper Learning

Catching bad content in the age of AI

In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it’s still surprisingly bad at catching, labeling, and removing harmful content. One simply needs to recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.

But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.

Bits and Bytes

Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that allowed them to find a new antibiotic to kill a type of bacteria responsible for many drug-resistant infections that are common in hospitals. This is an exciting development that shows how AI can accelerate and support scientific discovery. (MIT News)

Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could “cease operating” in the EU if it can’t comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded, and that there were “technical limits to what’s possible.” This is likely an empty threat. I’ve heard Big Tech say this many times before about one rule or another. Most of the time, the risk of losing out on revenue in the world’s second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to maintain only a restrained presence, in China. But that’s also a very different situation. (Time)

Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing & Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with safeguards that are insufficient and easy to hack, it was only a matter of time before we saw cases like this. (Bloomberg)

Tech layoffs have ravaged AI ethics teams
This is a good overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies are racing to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it’s clear that Big Tech views teams dedicated to these issues as expensive and expendable. (CNBC)
