9 Years After the Mt. Gox Hack, Feds Indict Alleged Culprits


Apple’s Worldwide Developers Conference this week included an array of announcements about operating system releases and, of course, the company’s anticipated mixed-reality headset, Vision Pro. Apple also announced that it is expanding on-device nudity detection for children’s accounts as part of its efforts to combat the creation and distribution of child sexual abuse material, and it debuted more flexible nudity detection for adults.

Internal documents obtained by WIRED revealed new details this week about how the imageboard platform 4chan does, and does not, moderate content—resulting in a violent and bigoted morass. Researchers, including a group at the University of Texas at Austin, are increasingly developing support resources and clinics that institutions like local governments and small businesses can lean on for critical cybersecurity advice and assistance. Meanwhile, cybercriminals are expanding their use of artificial intelligence tools to generate content for scams, but defenders are also incorporating AI into their detection strategies.

New insight from North Korean defectors illustrates the fraught digital landscape within the reclusive nation. Surveillance, censorship, and monitoring are rampant for North Koreans who can get online, and millions of others have no digital access. And research released this week from the internet infrastructure company Cloudflare sheds light on the digital threats facing participants in the company’s Project Galileo program, which provides free protections to civil society and human rights organizations around the world.

And there’s more. Each week we round up the security stories we didn’t cover in depth ourselves. Click on the headlines to read the full stories. And stay safe out there.

The US Department of Justice on Friday indicted two Russian men, Alexey Bilyuchenko and Aleksandr Verner, for the 650,000-bitcoin hack of Mt. Gox. The two appear to have been charged in absentia while evading arrest in Russia—unlike one of their alleged accomplices, Alexander Vinnik, who was previously convicted in 2020.

Bilyuchenko and Verner are accused of breaching Mt. Gox in 2011, in the earliest days of that original bitcoin exchange’s existence. The DOJ says they slowly siphoned coins out of the exchange for three years, until Mt. Gox revealed the theft and declared bankruptcy in February 2014. In the meantime, Bilyuchenko and Vinnik allegedly created an entirely separate exchange, BTC-e, to launder the proceeds of this massive hack. In the years that followed, BTC-e became a giant cash-out point for criminal cryptocurrency of every kind.

The new indictment against Bilyuchenko and Verner offers only a mixed resolution to the case of one of the biggest-ever cybercriminal thefts. By unsealing the new indictment, the DOJ may be tacitly acknowledging that it won’t ever have a chance to lay hands on the two men. The indictment against Vinnik, by contrast, was kept sealed for years until he made the mistake of going on vacation to Greece in 2017. After years in prison in France, Vinnik has now been extradited to face charges in the US, where he’s lobbying to be swapped for imprisoned Wall Street Journal reporter Evan Gershkovich.

Critics of end-to-end encryption tools and anonymous networks like the dark web often point to the creation and sharing of child sexual abuse material, or CSAM, as the worst consequence of the privacy those tools provide. But a new study from The Wall Street Journal, the Stanford Internet Observatory, and the University of Massachusetts Amherst found a vast network of child exploitation images and videos being sold and even commissioned on Instagram’s open, public network. In some cases, Instagram’s automated recommendation algorithms even promoted more CSAM to users who sought out that horrific content.

The researchers discovered that certain hashtags on Instagram, such as #pedobait and #mnsfw (or “minor not-safe-for-work”), led users to hidden—but fully public—groups of hundreds of accounts where CSAM was freely advertised, and where users could commission images and videos of sexual acts and self-harm. In some cases, the accounts even offered to sell in-person sexual encounters with children. And when users sought out those vile materials, Instagram’s algorithms actively promoted more to them, the researchers found, even as the platform posted interstitial warnings that the content was illegal and causes “extreme harm” to children. In response to the study, Instagram has changed those interstitials to block CSAM rather than merely warn users about its consequences, and Instagram’s parent company, Meta, says it’s created a new task force to address the problem.

The researchers found that Twitter, too, hosted 128 accounts selling CSAM materials. But that number was less than a third of the 408 accounts selling CSAM on Instagram’s much larger network.

The notorious Russia-linked ransomware gang known as Clop took responsibility…


