But even when someone doesn’t vote, that doesn’t mean they aren’t MAGA, right?
So if half of voters are MAGA and 1/3 of eligible voters actually vote, it’s almost impossible to say anything about the population as a whole. We can say about 16.7% (half of a third) is the lower limit, but the true figure could be a lot higher. If we treated voters as an unbiased large sample, we could extrapolate and say 50% of the population is MAGA. But since voters are by definition a self-selected, biased sample, it’s hard to say what the actual number is. Especially with humans, who have complex interactions: a certain persuasion could make someone less likely to vote, or the other way around, where wanting to vote makes one persuasion more likely. That makes the whole thing pretty hard.
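The bounds in that comment can be sketched numerically. The 50% and 1/3 figures are the comment’s hypotheticals, not real data, and the extremes just assume all non-voters fall one way or the other:

```python
# Back-of-envelope bounds from the comment's hypothetical numbers:
# - half of actual voters lean MAGA (assumed)
# - one third of eligible voters turn out (assumed)
maga_share_of_voters = 0.5
turnout = 1 / 3

# Lower bound: only the observed MAGA voters count; every non-voter is not MAGA.
lower = maga_share_of_voters * turnout            # 1/6 ~= 16.7%

# Upper bound: every non-voter is also MAGA.
upper = maga_share_of_voters * turnout + (1 - turnout)  # 5/6 ~= 83.3%

# Naive "unbiased sample" extrapolation: non-voters mirror voters exactly.
extrapolated = maga_share_of_voters               # 50%

print(f"lower ~ {lower:.1%}, upper ~ {upper:.1%}, naive extrapolation = {extrapolated:.0%}")
```

The gap between the bounds (two thirds of the whole population) is exactly why the voter sample alone says so little.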
I think that in the future generative A.I. will be seen like the Turbo Button, the Desktop Publishing Revolution and the Information Superhighway of their day: ideas that over-promised, under-delivered and faded into obscurity. I suspect that blockchain and cryptocurrencies will go the same way, for similar reasons to those outlined below.
Machine Learning is a useful tool to automatically generate a model for a multivariate system where traditional modelling is too complex or time consuming.
Generative models attempt to take that to a whole new level, but I don’t believe they are either sustainable or living up to the hype generated by breathless reporting from journalists who cannot distinguish advanced technology from magic.
It’s not sustainable for a range of reasons. The most obvious is that training degrades (so-called model collapse) when the process ingests content generated by the same process.
Furthermore, it doesn’t learn: the model is frozen until a new version is released, so it doesn’t incorporate anything new whilst it’s being used.
And finally, it requires obscene amounts of energy to actually work and with the exponential growth of models, this is only going to get worse.
Source: I’m an ICT professional with 40 years’ experience.
Isn’t your comment more of a perspective on the public perception of AI, the missteps surrounding its implementation, and its current role - rather than an examination of the potential role (practically speaking) of generative AI in a more general AI model? As is the thrust of the post, generative AI will necessarily be part of a larger AI, in part to make up for its weaknesses, in part to utilize its strengths.
That said, generative AI isn’t nearly as endangered by generated training data as is commonly understood. Even if it were that bad, embodiment is rapidly changing the landscape. There are a ton of papers about how to use larger models to make smaller models more effective, using generative AI to improve generative AI along with efficiencies. Heck, novel efficiencies get developed almost as regularly as novel use cases. We’re always learning how to do more with less.
I can’t imagine willingly going back to before Adobe added Generative Fill to Photoshop. Gen AI will certainly remain more than an academic curiosity, at least until it can be replaced with something better.