tal, (edited)

I mean, most complex weapons systems have been some level of robot for quite a while. Aircraft are fly-by-wire, cruise missiles guide themselves to their targets, CIWS systems operating in autonomous mode pick out and engage targets, ships navigate themselves, etc.

I don’t expect that that genie will ever go back in the bottle. To do it, you’d need an arms control treaty, and there’d be a number of problems with that:

  • Verification is extremely difficult, especially with weapons that are optionally-autonomous. FCAS, for example, the fighter that several countries in Europe are working on, is optionally-manned. You can’t tell just by looking at such an aircraft whether it’s going to be flown by a person or by an autonomous computer. Compare the Washington Naval Treaty: Japan managed to build treaty-violating warships in secret, even though warships are very large, hard to disguise, easy to distinguish externally, and can only be built and stored in a very few locations. I have a hard time seeing how one would manage verification with autonomy.
  • It will very probably affect the balance of power. Generally speaking, arms control treaties that alter the balance of power don’t work, because the disadvantaged party is unlikely to agree to them.

I’d also add that I’m not especially concerned about autonomy specifically in weapons systems.

It sounds like your concern, based on your follow-up comment, is that something like Skynet might show up – the computer network in the Terminator movie series that turns on humans. The kind of capability you’re dealing with here isn’t on that level. I can imagine general AI one day being an issue in that role – though I’m not sure it’s the main concern I’d have; I’d guess that dependence, followed by an unexpected failure, might be a larger issue.

But in any event, I don’t think it has much to do with military issues. In a scenario where you truly had an uncontrolled, more-intelligent-than-humans artificial intelligence running amok on something like the Internet, it isn’t going to matter much whether or not you’ve plugged it into weapons, because anything that can realistically fight humanity can probably manage to get control of or produce weapons anyway. This is an issue with the development of advanced artificial intelligence, but it’s not really a weapons or military issue. If we succeed in building something more intelligent than we are, then we will fundamentally face the problem of controlling it – of making something smarter than us do what we want – which is a complicated problem.

The term coined by Yudkowsky for this problem is “friendly AI”:

en.wikipedia.org/…/Friendly_artificial_intelligen…

Friendly artificial intelligence (also friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

It’s not an easy problem, and I think that it’s worth discussion. I just think that it’s mostly unrelated to the matter of making weapons autonomous.
