Edit: As of five minutes ago, I am aware that there is no official UI for the Elgato Stream Deck, which is a huge disservice to the users of this expensive piece of hardware. I was under the impression that streamdeck-ui was the official (and very outdated) version, which is false. Those who bothered to explain this,...
I have voiced my frustration with Nvidia elsewhere, and there are still annoying bugs that no one can fix because the driver is proprietary, but I live with it because I love everything else about Linux.
I like that using Linux makes me more careful; I learn things all the time, and I stopped feeling entitled to many things. My idea of what software is has changed dramatically: it starts as code for an MVP, then gets built up slowly based on what the dev wants (they often use the projects they maintain themselves) or on contributions and requests from their users, which is so much more sustainable than what you get in the for-profit, proprietary domain.
For this reason, I avoid talking negatively about small community-driven FOSS projects on the internet, because I can be actively constructive by being the change I want to see, whereas on Windows I would be stuck with a subpar product with no source code to investigate.
I think in your case, and please don’t take it personally, we don’t know each other, I don’t know what you’re going through or how you are as a person, I’m making a small judgement based on word choice here, but calling something “trash”, even after you know it’s made by someone who probably wants to use their Stream Deck on Linux (just like you), is quite a hostile thing to say and will invite some hostility back. I would retract it, but you do you.
Caveat: some FOSS devs are hurling shit at each other all the time on their personal blogs, but that’s within their own dev-dev domain, not the user-dev domain. Say, hypothetically, you start contributing to streamdeck-ui and the dev starts being an AH; then you can fork it and maybe then go online and vent.
Why forums appear toxic
I can admit some forums read as very blunt and impatient, like the Arch BBS forums, but they exercise patience when it’s a bug they’ve never seen before and will ask you to paste all sorts of command outputs to troubleshoot. Otherwise, they are quick to recognise if the problem has been encountered before, and will typically send you the link to the solved post or tell you to RTFM because it’s somewhere in the manual. To be frank, I can understand this behavior and have no issue with it.
On more general forums like this one, it’s often best if users do some precursor research and only post once they’re stuck (no mention of the issue anywhere, no similar thread). Even then, I’ve mostly seen poor-quality comments like “works for me” or “same but idk the fix”, since the commenters have to do the same searches the poster did and come to the same conclusions (here, it’s nice to send them the links you researched to save them a few clicks). I’ve made similar posts with great detail on what I tried and how it’s still broken. If no one knows, then I open an issue.
Now, if the poster didn’t do research beforehand, commenters will look it up and then have to correct them, and they might feel annoyed: why didn’t you do it before posting? Not everyone will be bothered by it. But I do feel like a search or two beforehand makes for much more fruitful discussions.
I won’t defend inflammatory, toxic language, but I don’t think there is any present in this thread; it’s just a lot of “AcKtuaLLy” comments, but those were made to correct you, and if I were you I wouldn’t really get all defensive. I take pride in my time using Linux, and other commenters probably do too, but we all started somewhere, so we know the “I didn’t do my research but gonna post anyway” attitude when we see it. If one likes doing this, just get ready for people who want to correct things, I guess.
It’s a kind of tough love here, where it’s heavily encouraged that one does their own heavy lifting instead of relying on others. At least that’s what I’ve observed. It might be negative, but it’s better than being spoonfed. I’ve managed to avoid such negativity by trying to exhaust all options before posting for help. I learned so much, and I hope you find a way to approach Linux that works for you! There are still others out there who aren’t jaded by newbie questions and will still help, just don’t expect their language to be nurturing.
Also, please consider ignoring internet points; they do nothing but make you feel distressed. In places like this, it’s a bit like Hacker News, where votes show whether a comment is helpful/constructive or not. It’s not personal.
Yesterday, popular authors including John Grisham, Jonathan Franzen, George R.R. Martin, Jodi Picoult, and George Saunders joined the Authors Guild in suing OpenAI, alleging that training the company’s large language models (LLMs) used to power AI tools like ChatGPT on pirated versions of their books violates copyright laws...
Could this be due to the .ml and a few other domains thing again? The registrar is gradually pulling all of the free domains from people, while leaving the paid users active until their term is up.
This also works for any links on shields.io, but I didn’t notice the Lemmy & Mastodon links before. Please share anything else you discover while experimenting. I’ll edit in more tips as people find them...
I was wondering, with all the different Lemmy clients and frontends, what/which out of these do people actually use? To answer this, I made a poll if anyone wants to fill it out, and I tried to put every client I could find.
My only problem with lemmy is the lack of a good client. Right now, for me, memmy and voyager are the least worst, but they’re pretty bad.
I’m not even asking for an Apollo out of the gate. I’d settle for an Alien Blue. And I have no problem paying for it, either. I was an Apollo ne plus ultra user, or whatever it was called.
From an information architecture perspective, it really seems like reddit and lemmy might be close enough that a middle layer could be written to make it easier to port user-facing apps from one API to the other. There’d obviously be some differences, but if feasible it might accelerate development of multiple client options at once.
For the record, I’ve tried memmy, voyager, mlem, lemmios, and liftoff, and I am presently going to try bean. That’s approximately the order of how usable I’ve found them, but they each have their own annoyances. I don’t own an Android device, but I hope the options are better over there.
In any case, I became a heavy reddit user only after Alien Blue came out. I became a very heavy user after Apollo came out. I left reddit the day external clients became unavailable.
I think lemmy has enormous potential, but the UX needs to be made easier if we’re to really make a dent in reddit usership and reach the level of posts, comments, and active communities they have there.
I also suspect that the backend is going to need a better approach for handling the negative consequences of a large influx of users. I’m not talking about load - I’m talking about community management. Back in the day (by which I mean the early 90s) there was an email blacklist. Admins of important nodes in the email distribution network had a shared list of domains that had unsecured servers, and would update it based on where they saw the (then relatively recent phenomenon of) spam coming from.
I’m really interested in how the information flow network of the fediverse evolves, if it continues to grow. Are we going to find a network with community structure, with clumps of mutually federated instances that have few if any connections between them? If so, clients will have to have solid account creation and management, and the admin tools will need to be sophisticated.
Statement on MGM Resorts International: Setting the record straight 9/14/2023, 7:46:49 PM We have made multiple attempts to reach out to MGM Resorts International, “MGM”. As reported, MGM shutdown computers inside their network as a response to us. We intend to set the record straight. No ransomware was deployed prior to the initial take down of their infrastructure by their internal teams. MGM made the hasty decision to shut down each and every one of their Okta Sync servers after learning that we had been lurking on their Okta Agent servers sniffing passwords of people whose passwords couldn’t be cracked from their domain controller hash dumps. Resulting in their Okta being completely locked out. Meanwhile we continued having super administrator privileges to their Okta, along with Global Administrator privileges to their Azure tenant. They made an attempt to evict us after discovering that we had access to their Okta environment, but things did not go according to plan. On Sunday night, MGM implemented conditional restrictions that barred all access to their Okta (MGMResorts.okta.com) environment due to inadequate administrative capabilities and weak incident response playbooks. Their network has been infiltrated since Friday. Due to their network engineers’ lack of understanding of how the network functions, network access was problematic on Saturday. They then made the decision to “take offline” seemingly important components of their infrastructure on Sunday. After waiting a day, we successfully launched ransomware attacks against more than 100 ESXi hypervisors in their environment on September 11th after trying to get in touch but failing. This was after they brought in external firms for assistance in containing the incident. In our MGM victim chat, a user suddenly surfaced a few hours after the ransomware was deployed. 
As they were not responding to our emails with the special link provided (In order to prevent other IT Personnel from reading the chats) we could not actively identify if the user in the victim chat was authorized by MGM Leadership to be present. We posted a link to download any and all exfiltrated materials up until September 12th, on September 13th in the same discussion. Since the individual in the conversation did not originate from the email but rather from the hypervisor note, as was already indicated, we were unable to confirm whether they had permission to be there. To guard against any unneeded data leaking, we added a password to the data link we provided them. Two passwords belonging to senior executives were combined to create the password. Which was clearly hinted to them with asterisks on the bulk of the password characters so that the authorized individuals would be able to view the files. The employee ids were also provided for the two users for identification purposes. The user has consistently been coming into the chat room every several hours, remaining for a few hours, and then leaving. About seven hours ago, we informed the chat user that if they do not respond by 11:59 PM Eastern Standard Time, we will post a statement. Even after the deadline passed, they continued to visit without responding. We are unsure if this activity is automated but would likely assume it is a human checking it. We are unable to reveal if PII information has been exfiltrated at this time. If we are unable to reach an agreement with MGM and we are able to establish that there is PII information contained in the exfiltrated data, we will take the first steps of notifying Troy Hunt from HaveIBeenPwned.com. He is free to disclose it in a responsible manner if he so chooses. We believe MGM will not agree to a deal with us. Simply observe their insider trading behavior. 
You believe that this company is concerned for your privacy and well-being while visiting one of their resorts? We are not sure about anyone else, but it is evident from this that no insiders have purchased any stock in the past 12 months, while 7 insiders have sold shares for a combined 33 MILLION dollars. (www.marketbeat.com/stocks/NYSE/…/insider-trades/ (www.marketbeat.com/stocks/NYSE/…/insider-trades/)). This corporation is riddled with greed, incompetence, and corruption. We recognize that MGM is mistreating the hotel’s customers and really regret that it has taken them five years to get their act together. Other lodging options, including casinos, are undoubtedly open and happy to assist you. At this point, we have no choice but to criticize VX Underground for falsely reporting events that never happened. We typically consider their information to be highly reliable and timely, but we did not attempt to tamper with MGM’s slot machines to spit out money because doing so would not be to our benefit and would decrease the chances of any sort of deal. The rumors about teenagers from the US and UK breaking into this organization are still just that—rumors. We are waiting for these ostensibly respected cybersecurity firms who continue to make this claim to start providing solid evidence to support it. Starting to the actors’ identities as they are so well-versed in them. The truth is that these specialists find it difficult to delineate between the actions of various threat groupings, therefore they have grouped them together. Two wrongs do not make a right, thus they chose to make false attribution claims and then leak them to the press when they are still unable to confirm attribution with high degrees of certainty after doing this. The tactics, procedures, and indicators of compromise (TTPs) used by the people they blame for the attacks are known to the public and are relatively easy for anyone to imitate. 
The ALPHV ransomware group has not before privately or publicly claimed responsibility for an attack before this point. Rumors were leaked from MGM Resorts International by unhappy employees or outside cybersecurity experts prior to this disclosure. Based on unverified disclosures, news outlets made the decision to falsely claim that we had claimed responsibility for the attack before we had. We still continue to have access to some of MGM’s infrastructure. If a deal is not reached, we shall carry out additional attacks. We continue to wait for MGM to grow a pair and reach out as they have clearly demonstrated that they know where to contact us.
Tech Crunch: neither you nor anybody else was contacted by the hacker who took control of MGM. Next time, verify your sources more thoroughly, or at the very least, give some hint that you do. Source: hxxp://alphvmmm27o3abo3r2mlmjrpdmzle3rykajqc5xsj7j7ejksbpsa36ad[.]onion/ddcdd476-fbd9-4809-baea-414d820c9d4b
Lemmy needs moderation tools and then a big domain name. Then it’ll be ready to go.
The single most important thing it needs is to send delete requests when a post is deleted from the original instance. (Either by a moderator or the user.)
And something to clean up unused files would be nice, but that can be kicked off manually for now.
I admire your enthusiasm, so I would like to chime in with my 2 cents. I see a solution to an undefined problem, thus we cannot evaluate if said problem is solved by the solution.
Were I to redesign Lemmy, I would start by defining the requirements of that software. Things to consider here would be:
What would be the total number of users?
What would be the total number of communities?
What would be the total number of instances?
What would be the spread of users across instances? Are there categories we can define? (For example, a large instance may have millions of users, but a small instance may have 1-1000)
The same about communities.
What would be the number of posts, comments, and upvotes/downvotes for each instance or community category?
What’s the average size of a post or comment?
There are probably countless more, but you have to restrict yourself to the ones with the most impact.
Then, I would define operations like:
Creating a post, a comment, or upvoting/downvoting
Retrieving posts (ordered) for a community.
Retrieving comments (ordered) for a post.
Retrieving posts (ordered) for a specific feed (subscribed, local, all).
Reporting a user.
Banning a user.
Then, I would look deep into Lemmy’s architecture in order to understand the complexity of these operations (time, memory, and developer effort). My understanding is that Lemmy is using a database to store all data you subscribe to, including posts, comments, upvotes/downvotes and stats across time. With all the data in a database, most read operations become a SQL query. On the other hand, write operations are relayed using the ActivityPub protocol.
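To make the “most read operations become a SQL query” point concrete, here is a toy sketch of the “retrieving posts (ordered) for a community” operation. This is not Lemmy’s actual schema (Lemmy uses Postgres via Diesel); the table and column names here are invented for illustration, and SQLite stands in for the real database.

```python
import sqlite3

# Toy schema loosely modelled on the description above -- NOT Lemmy's
# real Postgres schema; table and column names are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post (
    id INTEGER PRIMARY KEY,
    community TEXT NOT NULL,
    title TEXT NOT NULL,
    score INTEGER NOT NULL DEFAULT 0,
    published TEXT NOT NULL
);
""")
conn.executemany(
    "INSERT INTO post (community, title, score, published) VALUES (?, ?, ?, ?)",
    [
        ("linux", "Stream Deck on Linux", 12, "2023-09-20"),
        ("linux", "Kernel 6.5 released", 40, "2023-09-19"),
        ("news", "MGM breach statement", 55, "2023-09-15"),
    ],
)

# "Retrieving posts (ordered) for a community" reduces to one read query.
rows = conn.execute(
    "SELECT title, score FROM post WHERE community = ? ORDER BY score DESC",
    ("linux",),
).fetchall()
print(rows)
```

The real sort orders (Hot, Active, etc.) are more involved than a plain `ORDER BY score`, but the shape of the operation is the same: all federated data lands in one database, and reads are local queries.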
Here I would stop for a bit and see how I can help Lemmy right now. What’s the most value I can offer with as little effort as possible, i.e. the lowest-hanging fruit? For the time being, I believe that would be moderation: basic features are missing, and there are many moderation issues someone could help with through ideation, testing, or implementation. However, a deep dive into the moderation domain logic may not be for everyone, nor does it have to be. There are plenty of performance issues to contribute to as well.
This experience would give you the context needed to design a better architecture for Lemmy.
Last but not least, I suggest starting small. Distributed systems are complex; even seasoned veterans have difficulty getting their heads around them. For example, even counting becomes a problem with large enough data.
Sounds more like a simple filter to your feed. I’m not familiar with the backend side of Lemmy but I would guess it shouldn’t be too hard to implement.
Just save an array of instance domains a user doesn’t want to see in their preferences and filter them out of the post list that gets served to them.
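A minimal sketch of that idea, assuming each post carries a URL identifying its home instance (the `ap_id` field name here is a guess at how that could look, not a confirmed Lemmy API field):

```python
from urllib.parse import urlparse

def filter_blocked_instances(posts, blocked_domains):
    """Drop posts whose source instance is on the user's blocklist.

    `posts` is a list of dicts carrying an "ap_id" URL; the instance
    is identified by that URL's hostname.
    """
    blocked = set(blocked_domains)
    return [p for p in posts if urlparse(p["ap_id"]).hostname not in blocked]

posts = [
    {"ap_id": "https://lemmy.world/post/1", "title": "hello"},
    {"ap_id": "https://example-instance.tld/post/2", "title": "spam"},
]
filtered = filter_blocked_instances(posts, ["example-instance.tld"])
print([p["title"] for p in filtered])
```

In practice you’d push this filter into the database query that builds the feed rather than post-filtering in application code, but the per-user blocklist stays the same either way.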
A reported Free Download Manager supply chain attack redirected Linux users to a malicious Debian package repository that installed information-stealing malware....
I guess that’s to be expected, since external images on a blocked server wouldn’t load unless they’ve been cached on the one you’re viewing them from.
A similar thing happens to me with lemmy.zip: since my connection or ISP blocks .zip domains, images from users, posts, or communities on that server just don’t load at all unless I connect with Tor or a VPN.
I think we’re great where we’re at. Exponential growth for the sake of growth is not a good thing.
We have a decent and varied user base with plenty of subject matter experts. We tend to upvote more than downvote. The platform still needs to grow and improve its security and discoverability, but it’s pretty pleasant where we are now.
Those stats aren’t particularly distressing. We lost the .ml domains. Defederation happened. We had a crapload of bot accounts created and then dealt with.
If DAU doesn’t flatten out by November, it might be a bad sign. At the moment, I’d say it’s more likely that the graphs have a bunch of hidden data in them, and it’s not just a clean and clear indication that people are fleeing.
I’ll take 40,000 nice, professional, pleasant people over a million random redditors any day.
I’ll take a look at the nostr protocol, but I still think that people will naturally organize themselves into outsourcing “sort/rank/filter/block” functionality to someone else, whether that’s the provider of the service or a third-party plugin that leverages lots of users’ observations and behavior to train the model. In the end, plenty of us want the ability to block content we don’t want to see, rank content (including comments) by interestingness or usefulness or whatever criteria we prefer, whether that’s provided by the actual service or not.
After all, look at how we’ve created an ecosystem of ad blocking: we’ve whitelisted and blacklisted certain sites and domains, certain types of scripts, to where the user can control whether a website shows them ads. But it’s a cat and mouse game, and the software needs to be continually updated to be effective, so most of us just rely on a third-party-maintained browser extension or pihole config to do the ad blocking for us.
In other words, we still want to be able to censor things before they reach ourselves, but certain methods of doing that are more user-friendly, or more user-centered, or more user-configurable than others.
As the other user commented, you will need to ensure you have the right ENV vars configured for your SMTP domain. 587 is the submission port for the SMTP service; none of the containers need to have it open, and it doesn’t need to be open on your router, since Bitwarden only makes outgoing connections to the SMTP server.
Have you tested sending SMTP via the CLI or any other service? You will need auth, an endpoint, and your email set up to receive via that method; with that, it should all work.
For example, I use mailgun.org to send emails from my homelab to my Gmail; you cannot send directly to your email address.
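One quick way to test outbound SMTP submission independently of Bitwarden is a short Python script. Everything below is a placeholder sketch: the host, username, and password are made up and must be replaced with your relay’s real values (e.g. Mailgun’s SMTP host and an app credential); the handshake (STARTTLS on 587, then auth) is the same one Bitwarden’s mail settings would perform.

```python
import smtplib
from email.message import EmailMessage

# Placeholder settings -- substitute your real relay and credentials.
# Nothing here is Bitwarden-specific.
SMTP_HOST = "smtp.example.com"
SMTP_PORT = 587  # submission port: outgoing only, nothing to open inbound
SMTP_USER = "postmaster@example.com"
SMTP_PASS = "app-password"

def build_test_message(sender, recipient):
    """Construct a minimal test email."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "SMTP relay test"
    msg.set_content("If you can read this, outbound SMTP works.")
    return msg

def send_test_message(msg):
    """Connect on 587, upgrade to TLS, authenticate, and send."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as s:
        s.starttls()
        s.login(SMTP_USER, SMTP_PASS)
        s.send_message(msg)

msg = build_test_message(SMTP_USER, "you@example.com")
# Call send_test_message(msg) once the placeholders point at a real relay.
print(msg["Subject"])
```

If this sends successfully but Bitwarden still fails, the problem is in the container’s ENV vars rather than the network or the relay.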
They don’t have to. If you don’t have database replicas that are actively trying to subvert the system, inject bogus transactions, etc., then you don’t have the set of failure domains for which blockchains are in theory useful.
If you’re running backups for a single organization, you just need replicated data storage on servers owned and operated by that organization. If you’re running backups for a set of users who all trust your organization (e.g. if you’re Dropbox or the like), you also don’t need blockchain.
I just did this the other day. At figure C you’ll see sign-in options. There you have the option for domain join. Choose that, and it simply runs you through creating a local user. No domain join or MS account needed.
This was done on W11 Pro, so your mileage may vary on W11 Home.
Similar to the previous campaign TAG reported on, North Korean threat actors used social media sites like X (formerly Twitter) to build rapport with their targets. In one case, they carried on a months-long conversation, attempting to collaborate with a security researcher on topics of mutual interest. After initial contact via...
In my own experience on the software engineering side (i.e. the whole discipline rather than just “coding”), this kind of thing is pretty common:
Start with ad-hoc software development, with lots of confusion, redundancy, inefficient “we’ll figure it out when we get there”, and so on.
To improve on this, somebody really thinks things through, and eventually a software development process emerges, something like Agile.
There are good reasons for every part of such a process, but naturally sometimes the conditions are not met and certain parts are not suitable for use: the whole process is not, and can never be, a one-size-fits-all silver bullet, because software engineering is way too complex and vast a discipline for that (if it weren’t, you wouldn’t need a process to do it with even a minimum of efficiency).
However, most people using it aren’t the “grand thinkers” of software engineering - software-architect-level types with tons of experience, who have seen quite a lot and know why certain elements of a process are as they are, and hence when to use them and when not to - but rather run-of-the-mill, far more junior software designers and developers, as well as people from the management side of things trying to organise a tech-heavy process.
So you end up with what is an excellent process when used by people who know what each part tries to achieve, what the point of it is, and when it’s actually applicable, being used instead by people who have no such experience and understanding of software development processes and just treat it as one big recipe, blindly following it with no real understanding and hence often using it incorrectly.
For example, you see tons of situations where the short development cycles of Agile (aka sprints) and use cases are employed without the crucial element of actually involving the end-users or stakeholders in the definition of the use cases, the evaluation of results, and even the prioritisation of what to do in the next sprint. One of the crucial objectives of use cases - the discovery of requirement details through interactive cycles with end-users, where they quickly see some results and you use their feedback to fine-tune what gets done to match what they actually need (rather than the vague, very high-level idea they themselves have at the start of the project) - is not achieved at all, and instead the use cases are little more than small project milestones that in the old days would just have been entries in Microsoft Project or some tool like that.
This is IMHO the “problem” with any advanced systematic process in a complex domain: it’s excellent in the hands of those with enough experience and understanding of concerns at all levels to use it, but it’s generally used either by people without that experience (often because managers don’t even recognize the value of that experience until things unexpectedly blow up) or by actual managers whose experience might be vast but is actually in a parallel track that’s not really about dealing with the kinds of technical concerns the process is designed to account for.
I’m not saying whatever you’re trying to put in my mouth.
In very, very, VERY simple terms: a software engineer with half the experience of somebody at the technical architecture level isn’t half as capable a technical architect - such a person is pretty much totally incapable in that domain.
Experience isn’t linear; it’s a sequence of unlocking and filling up experience in domains which are linked but have separate concerns, with broader and broader scopes that go way beyond mere coding. This non-linearity happens because it takes a while before people merely become aware of the implications, at the level at which they work, of certain things outside their scope of work.
So if you’re not even at the level of being aware that the end users of the software being developed themselves have very vague and extremely incomplete ideas of what they need from software to help them in their own business process, then you can’t even begin to see the point of certain practices around things like use cases, or the need for and suitability of Agile versus other development processes in a specific project and environment. You’re therefore not at all qualified to decide which parts of it to do and which not to do in the specific situation of your specific project, or even whether Agile is the right choice.
People who don’t even know about the forms of requirements gathering in different environments can’t begin to evaluate, for their environment, the suitability of a process such as Agile, which was designed mostly to address “fast-changing requirements” situations. Those situations are the product of various weaknesses in requirements gathering and/or fast-changing business needs, which on the development side snowball into massive problems when long-development-cycle processes such as waterfall are used - for example, when supposedly “done” projects do not produce something that matches stakeholder needs, and hence have to be “fixed” so late in the process that it massively disrupts the software at the design and even architectural level, introducing massive weaknesses and spaghettization into the code base, and hence bugs and maintainability nightmares.
I see a lot of people suggesting that you should just switch to Firefox, but if you’re doing that because of privacy, you will not be that much better off by doing just that - unless you fiddle with the settings and get a custom user.js, such as this one, that properly hardens it, plus a few extensions, such as Decentraleyes, Cookie AutoDelete, or ClearURLs.
But it can get annoying, so instead I’d recommend giving LibreWolf a try. From my experience it works pretty much out of the box, and for the few settings that may be annoying to you they have a quick guide about how to disable them.
But even better than that, I’d recommend giving Mullvad Browser a try. It’s basically a clear-net version of Tor Browser, and so far I haven’t heard anything negative about it. I also really like their idea of pairing an (optional) VPN service with a browser: you then have exactly the same browser fingerprint as every other user of the same VPN (as long as you don’t add any extensions), which makes you resistant even to the more advanced fingerprinting techniques, since there’s basically no way to tell all of the VPN’s users apart. Some more info and reasoning, along with more recommendations, can be found at www.privacyguides.org/en/desktop-browsers/#mullva…
I’ve recently started using Mullvad and was using LibreWolf as my daily browser, so now I’m switching between them randomly. I do run into issues from time to time, mostly because of third-party requests or auto-deleted cookies when leaving a domain, which can break some kinds of cross-site flows. But whenever there’s an issue, I just quickly fire up Brave to do that one task. All things considered, it’s an amazing experience, so I do recommend giving some of them a try.
My point was more that creating a chatroom doesn’t create a community.
how would you define a “community”? And how big a deal is this effectively?
As far as I’m aware, communities (if defined as a list of rooms under the same namespace) are native to XMPP in the sense that MUCs can be namespaced at the domain level (e.g. “welcome@mycommunity.server.tld”), and then it’s up to clients to do something with it. I’ve seen some discussions go by on jdev recently, but there didn’t seem to be too much interest (even though clients have had a decades-long head start to tease potential users).
IMO/IME, the “community” approach found in Discord et al. is rather detrimental and makes relevant information hard to track because of excessive (per-server/community) room and notification micromanagement. Decades-old communities and projects have collaborated successfully on IRC over a single room or a couple of rooms, and this doesn’t seem like a problem in practice.
More than the proliferation of rooms, I’m interested in threading, which is seeing a comeback as of late (e.g. in Cheogram) and is somewhat more comparable to Zulip, and “gentler”.
Edit: Is streamdeck on linux just trash? (discuss.tchncs.de)
Grisham, Martin join authors suing OpenAI: “There is nothing fair about this” (arstechnica.com)
It seems that we lost 200 Lemmy servers yesterday (lemmy.fediverse.observer)
From the Average Lemmy Servers Online by Day graph...
If you want pretty links, you can add badges when linking to Lemmy communities or Mastodon accounts
What Lemmy Client(s) Do You Use? (strawpoll.com)
Two Vegas casinos fell victim to cyberattacks, shattering the image of impenetrable casino security (apnews.com)
Tech News online right now (lemmy.world)
New Architecture Design Proposal for Lemmy (feddit.de)
Hey Lemmy community!...
Hypothesis: Insufficient moderation tools lead to instance protectionism, which leads to a decline in the overall discussion quality on Lemmy (lemmy.world)
Free Download Manager site redirected Linux users to malware for years (www.bleepingcomputer.com)
cross-posted from: lemmy.ml/post/4810462...
Hilariously, Hexbear is blocked in China. (sh.itjust.works)
SOLVED: Self Hosting Bitwarden, SMTP Issues
Hi guys, I’ve managed to get Bitwarden up and running in a Docker instance as per the instructions provided by Bitwarden here....
No we are not implementing blockchain to our backup systems (lemmy.world)
Windows 11 and local accounts (www.techrepublic.com)
Does this trick actually work?
Google is enabling Chrome real-time phishing protection for everyone (www.bleepingcomputer.com)
Google security group discovers North Korean campaign targeting security researchers (blog.google)
All of Japan's Toyota Assembly Plants Shut Down for a Day Because Their Server Ran Out of Disk Space (www.reuters.com)
Google Chrome pushes ahead with targeted ads based on your browser history (www.theregister.com)
Google gives advertisers a look into your browsing history…
A giant leap forwards for encryption with MLS (matrix.org)