The trick is I took out the actually useful parts like Chrome, Firefox, Edge, etc., and the OS. All the user agents these days carry AppleWebKit and Mozilla just so old websites that look for those tokens don't downgrade the experience.
Yeah, this isn't my UA, but my point is that these parts describe the supported feature set rather than what software the device is actually running.
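For illustration, here's what a typical frozen Chrome UA looks like today (exact versions vary; this one is just representative):

```
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
```

Mozilla/5.0, AppleWebKit/537.36, (KHTML, like Gecko) and the trailing Safari/537.36 are all frozen compatibility tokens; the only part that actually names the browser is Chrome/120, and even there the remaining version components are zeroed out on purpose.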
Yes, I get that point, but I also think it's tempting for the privacy-minded novice to assume "the less information I provide, the better!", when in actuality it's better to provide "more" information: the most common UA, even if that means lying about your feature set. In this case, truly, more is less.
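To put a rough number on that: the identifying information one attribute leaks is its surprisal, -log2(p), where p is the share of users sending that value. A toy sketch in Python (the shares are invented for illustration):

```python
import math

def surprisal_bits(share: float) -> float:
    """Bits of identifying info leaked by an attribute value
    that a fraction `share` (0..1) of users send."""
    return -math.log2(share)

print(surprisal_bits(0.10))    # stock UA shared by 10% of users: ~3.3 bits
print(surprisal_bits(0.0001))  # rare hand-trimmed UA: ~13.3 bits
```

Around 33 bits is enough to single out one person on Earth, so a bespoke UA can burn a big chunk of that budget all by itself. That's the sense in which more (common) information is less (identifying).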
Oh gee, I wasn’t aware there was more to it than the UA. Thanks for opening my eyes.
Edit: I checked your link. Most of the parameters on that test require client-side execution. Client-side tracking is entirely unrelated to the server-side tracking I was talking about, and it's something you can control (by not allowing JavaScript, for example). Please don't confuse the two. There is literally nothing you can do against server-side tracking.
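To make the distinction concrete, here's a minimal sketch of the server side (Python stdlib): everything it logs arrives with the bare HTTP request, before any client-side code could possibly run.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # All of this comes in with the request itself; disabling
        # JavaScript changes nothing about what shows up here.
        print(self.client_address[0],            # IP address
              self.headers.get("User-Agent"),    # UA string
              self.headers.get("Accept-Language"),
              self.path)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("", 8000), LoggingHandler).serve_forever()
```

The only levers you have against that are which headers your browser chooses to send and which network path (VPN, Tor) the request takes.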
Mine is Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0. The joke is, this is already the trimmed version (via the about:config Xorigin and trimming settings), and some pages have problems with it as it is. If you strip out the OS part, pages like google.com stop working entirely. All this despite the rule that you shouldn't parse the UA string…
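That rule exists because the compatibility tokens make naive matching misfire, which is presumably also why stripping parts breaks sites that sniff anyway. A quick sketch:

```python
# Representative (not exact) Chrome and Edge UA strings:
chrome = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
          "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
edge = chrome + " Edg/120.0.0.0"

print("Safari" in chrome)  # True -- Chrome claims to be Safari
print("Chrome" in edge)    # True -- Edge claims to be Chrome (and Safari)
```

A sniffer has to test the most specific token first (Edg, then Chrome, then Safari), and that fragile ordering is exactly what falls over when a string gets trimmed or reordered.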
$5.5B with a yearly revenue of $4 million and a loss of $58 million. Even if they had $0 in expenses, it’d still take a little under 1400 years to earn the equivalent of their market cap.
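Checking the arithmetic with those figures:

```
$5.5B / ($4M / yr) = 1375 yr
```

And that's the best case with zero expenses; with a $58M annual loss they're moving away from the target, not toward it.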
Not completely, though. A while ago I had a wave of these comments on a 3-year-old post of mine. They got deleted after I reported them, at least, though I don't know whether that was done by a mod of the subreddit or a site-wide admin.
Search results like this can drive people away from Google and toward other resources. Google likes money, and this is why they usually try to combat spammers that are gaming the system.
It’s a cat and mouse game that has been happening for years. Organic search spammers find a new thing, then Google tweaks the algorithm to downrank what they’re exploiting.
No doubt. That said, they do update the algo to combat this stuff. If you work in SEO you're likely quite aware of which tricks still work and which no longer do.
Well you don’t have to read Cory’s newest column to understand that Google hasn’t been doing that, because they don’t have to. They do not care, at least not yet, because they have arguably become too big to care.
Don't worry, I'm sure Google will disable that soon, the same way they disabled all the other search syntax that used to make searching a simple and easy task.
Cool, pedant. Append "on google" to my comment then if you need to, since that's clearly the context we're talking about here. I'm aware there are other search engines, but context should have made what I was talking about pretty fucking obvious.
(Not OP) Point taken, but in that case the solution should also be obvious: just use a different engine that does provide that. If the product sucks, hit the bricks. DDG and Kagi are looking for market share; they'd love to have you.
This is how we found anything on reddit for most of its useful life. Its search was always garbage, so we relied on Google to come up with usable results.
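For anyone who never picked up the habit, the query was typically something like this (search operators, not code; the query itself is just an example):

```
site:reddit.com "pixel 6" battery drain
```

site: restricts results to one domain and the quotes force (or at least used to force) exact-phrase matching, which is exactly the kind of syntax the comment above worries about losing.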
Nah. The best option we have, imo, is a service that indexes everything in one place so traditional search engines can find it. That requires someone to build it, and AFAIK that hasn't happened.
That, or the search engines themselves implement their own fediverse instances just for the purpose of indexing results. At a certain point, if the platform becomes relevant enough, I think we could see that happen.
I think they’d probably prefer instances that they have control over to reduce the avenues for a third party to manipulate the results. Otherwise they have to trust whoever runs the search instances.