I feel for the guy. Had health issues and needed money fast. I kinda don’t blame him. Like, I get it’s disappointing, and I also won’t blame people for being mad at him, but I’m more mad at the overall system of how things get funding
I’d also have accepted the money if I were him, but at least I would have written a blog post explaining the situation: that the apps are now dead and controlled by a bad actor and need to be uninstalled as soon as possible.
Not almost denying it while continuing to get money on his Patreon from unaware users
The polyfill.js is a popular open source library to support older browsers. 100K+ sites embed it using the cdn.polyfill.io domain. ... However, in February this year, a Chinese company bought the domain and the Github account. Since then, this domain was caught injecting malware on mobile devices via any site that embeds cdn.polyfill.io.
I read the story and specifically the bit about the Github account. Isn’t this the Polyfill lib’s Github account? Because if that’s the case, how would a bundler solve the issue? The new owners could modify the original source, then the CI/CD jobs would happily publish that to registries and from there down into the bundles. Is it a different Github account they’re talking about?
Code pulled from GitHub or NPM can be audited and it behaves consistently after it has been copied. If the code has a high reputation and gets incorporated into bundles, the code in the bundles doesn’t change. If the project becomes malicious, only recently created bundles are affected. This code is pulled from polyfill.io every time somebody visits the page and recently polyfill.io has been hijacked to sometimes send malicious code instead. Websites that have been up for years can be affected by this.
Built bundles are not affected. The service is supposed to figure out which polyfills are required by a particular browser and serve different scripts. Because it’s serving different scripts, the scripts cannot be bundled or secured using SRI. That would defeat the purpose of the service.
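To make the SRI point concrete: with a pinned, static bundle, a site can embed an `integrity` hash so the browser rejects any script whose bytes change. Polyfill.io can’t use this because it serves different bytes per browser. A minimal sketch of computing an SRI value (hypothetical script content and URL, not from the original thread):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384) for a pinned script."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Example: hash the exact bytes you intend to serve forever.
script = b"console.log('hello');"
print(sri_hash(script))

# The page would then pin those exact bytes, e.g.:
# <script src="https://cdn.example.com/bundle.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

If the CDN later serves anything else, even one byte different, the browser refuses to execute it — which is exactly the guarantee a per-browser polyfill service cannot offer.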
no Cloudflare customer data or systems were impacted by this event. Because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools, the threat actor’s ability to move laterally was limited. […] No services were implicated, and no changes were made to our global network systems or configuration.
The only production systems the threat actor could access using the stolen credentials was our Atlassian environment. Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye on gaining a deeper foothold.
Cloudflare MITMs a good portion of internet traffic. They can even see inside SSL tunnels for most websites you visit. It’s an absolute privacy nightmare.
How does any of this fit into the reality that you can pay $1 per 1000 captchas for a real, actual human to solve them? It seems like so much effort is put into this cat-and-mouse narrative with bot makers, ignoring the reality that sometimes labour is actually much cheaper.
It’s about creating at least a small barrier for not-very motivated people.
If a script kiddie wants to create a couple accounts and spam a bit, paying for and integrating such a service might just discourage them from actually taking the time.
Just a small cost if you’re dedicated though, for sure
Getting sick of these strange new hCaptchas. Click the thing that’s only appearing once? Click in this exact order 😱🥺😅😂🤞. Click the stadiums from SimCity?!? Hopefully websites switch to turnstile fast.
I fail hCaptcha a surprising number of times, and I’m sure it’s actually doing that on purpose so we help it label more images for AI training.
It’s like “select all flowers” and then you have 7 AI generated horses, and one AI generated flower. I pick the flower and “try again!” with a new set of images.
Yeah. It’s better than reCaptcha - do I click those 3 pixels of the traffic signal or not!?! - but it’s still an obstacle that diminishes the experience.
Wow, people will complain about literally anything. "I hate Google's Recaptcha" --> hCaptcha. "I hate hCaptcha" --> turnstile. Inevitably it'll be "I hate turnstile".
Same, I can’t get to or log in to multiple sites with Firefox because of this.
It does seem to work if I use a private window, though. So I’m not exactly sure what’s causing the issue. Maybe something to do with cookies? But I’ve messed around with that and haven’t been able to get anywhere.
That’s a good idea. I’ve tried doing it with certain other extensions (content blockers, user agents, script and tracker blockers/modifiers, etc.) disabled, but something completely unrelated may be interfering.
Yeah, I’m pretty skeptical of the premise… it’s looking for browser “abnormalities”? I mean… there wasn’t a strong motivation to correct those abnormalities for bots when it didn’t matter. Now that it does, I just suspect they’ll correct those abnormalities.
Just because the abnormalities were present in the past doesn’t imply that it’s intrinsically more difficult to emulate browser behaviour than it is to defeat captchas. There just hasn’t been a reason to do so up until now.
I mean it's always going to be an uphill battle, but I'd rather it stop some bots and be easier for me than them making me do a million captchas, that don't even work half the time, that still don't stop many bots.
Nothing can stop 100% of bots. The goal with challenges like Turnstile is to consume enough of the attacker’s resources that performing an attack becomes expensive and slow.
Turnstile runs many background checks on your browser, so naive headless browsers are automatically rendered futile.
JavaScript PoW challenges are performed that take up multiple seconds of execution time, memory, and CPU. This alone is a deterrent, because sequential attacks take extremely long to execute.
Concurrent attacks are still infeasible because Turnstile ups the difficulty if it detects something is off, and receiving requests from thousands of botnet IPs is bound to trip an alarm.
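Turnstile’s actual challenge internals aren’t public, but the proof-of-work idea mentioned above generally works like this generic hash-based sketch (an illustration of the technique, not Cloudflare’s implementation):

```python
import hashlib
import itertools

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so sha256(challenge + nonce) falls below a target.

    Expected cost: ~2**difficulty_bits hash attempts, paid by the client.
    """
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash: cheap for the server, costly to produce."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_pow(b"turnstile-demo", 16)  # ~65k hashes on average
assert verify_pow(b"turnstile-demo", nonce, 16)
```

The asymmetry is the point: each extra difficulty bit doubles the attacker’s expected work, so the server can scale up the cost for suspicious clients while a one-off human visit stays barely noticeable.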
I just tested my favourite cloudflare-blocked site and it still hangs on “verifying the security of your connection” in my fingerprinting-resistant browser profile.
Yeah, I get infinite loops on half the Internet. It’s infuriating, and it should be illegal for them to deny me as a customer just because they can’t track me
Second, we find that a few privacy-focused users often ask their browsers to go beyond standard practices to preserve their anonymity. This includes changing their user-agent (something bots will do to evade detection as well), and preventing third-party scripts from executing entirely. Issues caused by this behavior can now be displayed clearly in a Turnstile widget, so those users can immediately understand the issue and make a conscientious choice about whether they want to allow their browser to pass a challenge.
Those of you that browse the internet with JS disabled (e.g. using NoScript), the time of reckoning has finally come. A huge swath of the internet will no longer be accessible without enabling javascript.
As a web developer who’s worked in the industry for 16 years, every snowflake requiring me to work harder to support their “choices” is just an annoyance. I get wanting to reduce tracking etc, but in all honesty, the 0.0X% of users running tons of blockers and JS off are in reality just easier to track, in comparison to hiding in the mass of regular users who might be running an ad blocker (or nothing).
As long as your browser is making requests, you’ll never be invisible.
The change needs to come from regulation level imho.
It’s great you can do it and you’re free to, but not using javascript often means revamping the whole codebase and making everything 5x more complicated.
For Turnstile, the actual act of checking a box isn’t important, it’s the background data we’re analyzing while the box is checked that matters. We find and stop bots by running a series of in-browser tests, checking browser characteristics, native browser APIs, and asking the browser to pass lightweight tests (ex: proof-of-work tests, proof-of-space tests) to prove that it’s an actual browser.
But… lots of bots are made with RPAs, with actual browsers emulating human interaction at the interface level. Sounds like a response to proton.me/blog/proton-captcha