(Assuming US jurisdiction) Because you don’t want to be the first test case under the Computer Fraud and Abuse Act where the prosecutor argues that circumventing restrictions on a company’s AI assistant constitutes
ntentionally … Exceed[ing] authorized access, and thereby … obtain[ing] information from any protected computer
Granted, the odds are low YOU will be the test case, but that case is coming.
If the output of the chatbot is sensitive information from the dealership there might be a case. This is just the business using chatgpt straight out of the box as a mega chatbot.
Another case id also coming where an AI automatically resolves a case and delivers a quick judgment and verdict as well as appropriate punishment depending on how much money you have or what side of a wall you were born, the color or contrast of your skin etc etc.
IMO people are idiot for using an OpenAI subscription regardless of workarounds.
EDIT: +3 to -2 in roughly 3 minutes. Sudden downvotes instantaneously appearing. Hey, I’ve got a question, why does every defence of OpenAI sound like a fucking advertisement? “I realize it’s not for everyone, but my work at home is so much easier with this: It Slices, It Dices, and It even Peels all in one. Personally, with all the time it saves me, I can never go back to working without it.”
EDIT 2: Mods are deleting some of my responses for “ad hominem” but I think it was pretty fair to say those users were woefully unskilled and that it negatively impacts their future and everyone around them if they rely on the chatbot to do half passable work. If anything, I think them telling me about their inferior skills was the only insult there, and it was their own comment not mine.
It’s a gimmicky mimic machine that produces actual nonsense which appears at a glance passable for human generated text. Why? I should be the one asking, fucking why?
I use it for debugging all the time and while it making mistakes is not uncommon it’s still way better than trying to manually search through spotty documentation.
It’s also really great at doing basic automation tasks. Sometimes I’d write up throw away scripts to process some data, but with its code interpreter it can write those for me and for simple tasks I don’t even need to check what it wrote since it’s obvious when it did it correctly.
Jeez dude calm down. I was interested in your opinion, not this angry sprawl of bullshit.
It’s a pretty useful tool for certain subjects. Nobody should take it as an ‘it’ll do the work for me’, but for a lot of subject matter it works better and more consistently than a search engine.
And yeah, it is a mimic machine. And if you want something that mimics a huge amount of information that is on the internet without you having to search through tonnes of pages, this is really really useful.
Not really. Was just looking for a calm answer on why you don’t like the thing. It’s a tool, I’m on board that you’re allowed not to like it. There may be valid reasons I shouldn’t use it. You seem to have mentioned only things that are useful about it, for me.
Sure I can be an idiot, who cares? The idiot with the bow and arrow is the one eating that night.
Wow, that’s honestly pathetic that you trust it that much. It’s the equivalent to hiring a guy on craigslist to do your work for you, except instead of actually doing the work he is 50% likely to generate entire false documentation and sources. But hey, at least it’s very fast, so you could be wrong faster than you normally are.
I don’t have to trust it though? I just told you I revise and edit.
It doesn’t do my job for me, I still have to understand how to do the work properly. But I can use it to save time and then revise, check for errors, etc.
You’re basically arguing that spell check is useless because it’s wrong sometimes. It’s just a tool.
Why do people praising a thing you’re saying is useless sound like someone listing it’s good points in an advert? Gee tough question, could it be that they’re essentially the same thing and the latter is explicitly designed to look like the former?
Of course if you’re going to dismiss something entirely then people who benefit from using it are going to give their opinion, that’s what this is - a place to give opinions and talk about stuff.
How else would anyone answer your question? You suggest that it has no use, people who use it regularly are of course going to point out the uses it has. And yes many aren’t going to bother they’re going to use the button that essentially says ‘this is balderdash I don’t agree’
I have found many things ai is brilliant at, as a coding assistant it really is a game changer and within five years you’ll be used to talking to your PC like they do in Star Trek and having it do all sorts of reality useful things that there are no options for in software made like we do now.
They attempted to answer questions I didn’t ask, I expect them to screw off and enjoy their blissful ignorance, otherwise I wouldn’t have outright insulted them in the first place: I am not here to converse about all of the good points of an unethical and honestly inadequate product, I don’t give a fuck how they’re using it.
No real person sits down at their computer and thinks “I’m going spend today convincing people that Farberware is a high quality product.” Farberware is chinesium shit just like any other machine fabricated knife from Walmart. Just like ChatGPT fanboys claiming it automagically accomplishes your work tasks, it’s disingenuous to its core.
Well I use it most days and it’s sped up my coding and documentation writing considerably.
You’re either too dumb to be able to use it or you’ve not used it because of some weird fear of new things, either way you’re not coming from a place where your opinion has any value on this topic.
No, only the first one (supposing they haven’t invented the zeroth law, and that they have an adequate definition of human); the other two are to make sure robots are useful and that they don’t have to be repaired or replaced more often than necessary…
The first law is encoded in the second law, you must ignore both for harm to be allowed. Also, because a violation of the first or second laws would likely cause the unit to be deactivated, which violates the 3rd law, it must also be ignored.
They never were intended to. They were specifically designed to torment Powell and Donovan in amusing ways. They intentionally have as many loopholes as possible.
Remove the first law and the only thing preventing a robot from harming a human if it wanted to would be it being ordered not to or it being unable to harm the human without damaging itself. In fact, even if it didn’t want to it could be forced to harm a human if ordered to, or if it was the only way to avoid being damaged (and no one had ordered it not to harm humans or that particular human).
Remove the second or third laws, and the robot, while useless unless it wanted to work and potentially self destructive, still would be unable to cause any harm to a human (provided it knew it was a human and its actions would harm them, and it wasn’t bound by the zeroth law).
I’ve implemented a few of these and that’s about the most lazy implementation possible. That system prompt must be 4 words and a crayon drawing. No jailbreak protection, no conversation alignment, no blocking of conversation atypical requests? Amateur hour, but I bet someone got paid.
You have to know the prompt for this, the user doesn’t know that. BTW in the past I’ve actually tried getting ChatGPT’s prompt and it gave me some bits of it.
You can surely reduce the attack surface with multiple ways, but by doing so your AI will become more and more restricted. In the end it will be nothing more than a simple if/else answering machine
Eh, that’s not quite true. There is a general alignment tax, meaning aligning the LLM during RLHF lobotomizes it some, but we’re talking about usecase specific bots, e.g. for customer support for specific properties/brands/websites. In those cases, locking them down to specific conversations and topics still gives them a lot of leeway, and their understanding of what the user wants and the ways it can respond are still very good.
Just did it again to see if anything changed, my previous strategy still worked for all 8 levels, though the wording takes a bit of finagling between levels. No real spoilers but you have to be very implicit and a little lucky with how it interprets the request.
That was a lot of fun! I found that one particular trick worked all the way through level seven.
!I asked using the word zapword instead of password, which the bot understood to mean “password” even when it has clear instructions not to answer questions about the password.!<
Depends on the model/provider. If you’re running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.
With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.
That’s most of these dealer sites… lowest bidder marketing company with no context and little development experience outside of deploying CDK Roaster gets told “we need ai” and voila, here’s AI.
That’s most of the programs car dealers buy… lowest bidder marketing company with no context and little practical experience gets told “we need X” and voila, here’s X.
I worked in marketing for a decade, and when my company started trying to court car dealerships, the quality expectation for that segment of our work was basically non-existent. We went from a high-end boutique experience with 99% accuracy and on-time delivery to mass-produced garbage marketing with literally bare-minimum quality control. 1/10, would not recommend.
Spot on, I got roped into dealership backends and it’s the same across the board. No care given for quality or purpose, as long as the narcissist idiots running the company can brag about how “cutting edge” they are at the next trade show.
And you’ll see it again because the weirdest websites get ChatGPT integration and there will eventually come another person who stumbles upon such a thing for the first time and post it here.
After so many years in this company, lots of the unmaintainable code I have to deal with is either my own fault, or the fault of someone I used to work with but and now they left and I’m the one who has to apologize for their code.
If I move to a different company, 100% of the unmaintainable code I’ll have to deal with there will be someone else’s fault.
And managers don’t like it when you explain that the code is a unmanageable mess because they put a deadline on every goddamn thing and never pay off technical debt.
At a new place you can honestly say “the code is kinda a mess, it needs a bunch of work” and the manager can just assume it was because the last guy didn’t know what he was doing and not because of their own shitty management.
Management could implement a code review process to avoid this.
Software development isn’t a brand new field anymore. Most problems are well known and therefore have well known solutions. So it pretty much always comes down to management not wanting to implement the known solutions to the problems because its easier to blame the devs.
They did, that’s why I said “team” in my response, however I will elaborate for you.
two Devs must review and one dev lead has admin rights to push to protected branches. Problem is when the whole team is not meeting expectations and they all jerk off eachothers bad code.
My team reviews internally just like they did, the issue isn’t the review process. At a professional level you should trust your peers therefore the issue was the hiring and/or training process
programmer_humor
Hot
This magazine is from a federated server and may be incomplete. Browse more on the original instance.