US drugstore chain installed anti-shoplifter facial-recognition cameras in 200 locations – for eight years

In brief Rite Aid, an American drugstore chain, secretly deployed facial recognition cameras to spy on its shoppers across 200 stores for eight years.

The retailer said it hoped the technology would help identify people who had previously been caught shoplifting in its stores. If the cameras spotted a match, an alert would be sent to security staff who could then confront the suspected thief and order them to leave the shop.
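Such systems generally work by comparing a face embedding extracted from the camera feed against a stored watchlist and raising an alert when the similarity clears a threshold. Here is a minimal sketch of that matching step, purely for illustration – the embedding size, threshold, and names are assumptions, not details of Rite Aid's actual setup:

```python
from typing import Dict, Optional

import numpy as np

# Illustrative sketch of the watchlist-matching step in a retail
# facial-recognition system. Embedding size, threshold, and names are
# assumptions for illustration, not Rite Aid's actual pipeline.

SIMILARITY_THRESHOLD = 0.85  # hypothetical cut-off for declaring a "match"


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_against_watchlist(probe: np.ndarray,
                            watchlist: Dict[str, np.ndarray]) -> Optional[str]:
    """Return the ID of the best-matching watchlist entry, or None if
    nothing clears the similarity threshold."""
    best_id, best_score = None, 0.0
    for entry_id, stored in watchlist.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = entry_id, score
    return best_id if best_score >= SIMILARITY_THRESHOLD else None


# A hit is what would trigger the alert sent to security staff.
watchlist = {"entry-001": np.random.rand(128), "entry-002": np.random.rand(128)}
match = check_against_watchlist(np.random.rand(128), watchlist)
if match is not None:
    print(f"ALERT: possible match with watchlist entry {match}")
```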

After an investigation by Reuters, Rite Aid said it had decided to stop using facial recognition cameras. “This decision was in part based on a larger industry conversation,” the biz told the newswire in a statement. It noted that “other large technology companies seem to be scaling back or rethinking their efforts around facial recognition given increasing uncertainty around the technology’s utility.”

No kidding. There is so much evidence that the technology struggles to accurately identify women and people with darker skin, owing to everything from unrepresentative training data to the algorithms used, that experts have repeatedly argued against its use in law enforcement.

Nvidia still rules for AI training

The latest benchmark results from the MLPerf project, which measures how long it takes processor chips to train various types of AI models, were announced this week.

A quick scan of the figures shows each chipset carrying out up to eight tasks, ranging from image recognition to machine translation to reinforcement learning.

Nvidia submitted the most results and, unsurprisingly, recorded some of the fastest times. Google’s TPUs also ranked highly; there were even results from its TPU v4, a version that is not yet available to customers in the cloud. MLPerf results are interesting to glance at as the project tracks the improvement of AI chips, showing training times getting shorter and shorter.

But using them as a way to compare hardware is much trickier. There is often not a direct apples-to-apples comparison of chips competing in specific tasks. So, as usual, take them with a pinch of salt.
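MLPerf’s headline metric is essentially time-to-train: the clock runs until a model reaches a fixed quality target on a given task. The toy sketch below illustrates the idea only; the model, data, and 95 per cent target are stand-ins, not an actual MLPerf workload or harness.

```python
import time

import numpy as np

# Toy illustration of MLPerf's "time-to-train" idea: the clock runs until
# the model hits a fixed quality target. The model (logistic regression on
# synthetic data) and the 95% target are stand-ins, not an MLPerf workload.

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + 0.1 * rng.normal(size=2000) > 0).astype(float)

w = np.zeros(20)
target_accuracy = 0.95
learning_rate = 0.1

start = time.perf_counter()
for step in range(10_000):
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))           # sigmoid predictions
    w -= learning_rate * X.T @ (preds - y) / len(y)   # gradient step
    accuracy = np.mean((preds > 0.5) == y)
    if accuracy >= target_accuracy:                    # quality target reached
        break
elapsed = time.perf_counter() - start
print(f"Reached {accuracy:.1%} accuracy in {elapsed:.3f}s ({step + 1} steps)")
```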

AI-generated text is the most dangerous deepfake of all

Paragraphs and sentences spat out by text-generation models like OpenAI’s GPT-3 are more pervasive and harder to detect than other forms of content manipulated by AI algorithms, an expert warned.

Tech researcher Renée DiResta, who works at the Stanford Internet Observatory, argued that text-based deepfakes could be used to automatically flood online platforms, such as Twitter and Facebook, with discourse. Sham accounts could churn out fake news, bogus claims, and hate speech to influence things like political campaigns, she explained in Wired.

Humans are pretty good at spouting a lot of nonsense, too, but the danger of text-generating AI systems is that they can do it at an industrial scale, and can be hard to detect and stop – by other humans as well as computers.

Video deepfakes are easier to spot; there’s usually something about the visual or audio quality that is not quite right. People normally notice the uncanny effects like a blurry ear or a robotic monotone voice. Detecting fake comments on social media is much trickier.
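One approach researchers have explored for flagging machine-written text is to score how statistically predictable a passage looks to a language model, since generated text tends to be built from consistently high-probability tokens. Below is a rough sketch of that idea using Hugging Face’s GPT-2; it is a weak signal shown for illustration, not a reliable detector.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Rough sketch of likelihood-based screening of machine-generated text:
# score how "predictable" a passage looks to GPT-2. Low perplexity is a
# weak hint the text may be model-generated; this is illustrative only.

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input ids as labels yields the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))


print(perplexity("The quick brown fox jumps over the lazy dog."))
```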

Now that GPT-3 can be accessed via an API by select developers, more people can harness its predictive power without needing much machine-learning knowledge at all. ®
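For those with access, a request to the beta API looks roughly like the following. This is a sketch based on OpenAI’s early engines-based completions endpoint; treat the exact fields, engine name, and parameters as assumptions.

```python
import os

import requests

# Illustrative sketch of calling the GPT-3 API as an approved beta developer.
# The endpoint and fields follow OpenAI's early "engines" completions API;
# the exact names, engine choice, and parameters are assumptions.

API_KEY = os.environ["OPENAI_API_KEY"]  # key issued to accepted beta testers

response = requests.post(
    "https://api.openai.com/v1/engines/davinci/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "In brief, the week's AI news:",
        "max_tokens": 60,
        "temperature": 0.7,
    },
)
print(response.json()["choices"][0]["text"])
```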
