
programmer_humor


aksdb , in Okay, which one of you Java devs did this

IDEA isn’t Java-only. Most of the other languages are available as plugins. IDEA is typically the go-to IDE for multilanguage projects.

brunofin ,

Except for .NET; for that you can use Rider, which is pretty much IDEA but with added support for .NET, which makes it… better? Not sure.

aksdb ,

CLion is also strictly separated.

ursakhiin ,

As it should be. The needs of a systems language are very different from the needs of a virtualized or interpreted one. I honestly don’t see how people use a single IDE for every language, but I respect their choice to do it.

aksdb ,

I have a few projects where parts are Java, parts are Go and parts are C. Having that in a single workspace can be convenient.

ursakhiin ,

Even those I tend to open up in their specific IDEs when the time comes. It helps me separate the language but also the workflow.

gkd ,
@gkd@lemmy.ml avatar

Most of their products are like that. There are a lot of specific language support features in each one that may become available as plugins later on but not at the same pace or “fullness” as the specific product itself.

For example, PhpStorm has good JavaScript support, but if you want really good TypeScript support you should probably go with WebStorm.

Alternatively, I can totally write Rust code in WebStorm through the Rust plugin, but I’m better off using CLion, which has better support (or now RustRover, which is where all the latest Rust support features will be added, although it’s still a preview product AFAIK).

Also worth noting, though, that there are indeed some “tiers”. For example, WebStorm won’t support PHP, but PhpStorm will support JavaScript/TypeScript (again, not fully, but enough to maintain a front end operating off your PHP backend).

CodingCarpenter ,

I prefer PhpStorm for multi-language work, personally.

feef ,

This. I use IntelliJ for Java, Kotlin, TypeScript, Python, HTML, etc. It just does everything, and does it better than other IDEs.

Lime66 ,

Or available in the paid version

adespoton , in Microsoft Edge could use a win

Well, I switched to Edge for work with the latest Chrome update (since internal apps were Chromium only), and was pleasantly surprised. It actually let me turn off almost all the junk, and is responsive in a way I haven’t seen in a Chromium browser in years.

Safari and Firefox for personal use though, and nothing compelling to make me change that.

Pyro ,

The performance is pretty on par with other major browsers now, but it’s the obscene number of popups built into the browser that irritates me.

WoodenBleachers ,
@WoodenBleachers@lemmy.basedcount.com avatar

I use edge for work every day, what popups?

_MusicJunkie ,

Once you set it up it’s fine, but on first opening you have to click through a bunch of menus (no, I don’t want to share data; no, I don’t want to sync my account; and so on). In other browsers it’s a small popup in the corner which you can ignore, and you just google what you wanted to google. In Edge they’re fullscreen and you have to click no on each one.

Probably a rather unique problem because I regularly set up new machines, most people just go through it once and never see it again.

stewie410 ,

May be worth building a default config to “install” for those setups; that’s saved me quite some time when configuring new/spare machines at work.

Pyro ,

You hit the nail right on the head! It’s pretty bad that there is no skip-all option, and for some of them you have to manually uncheck before continuing.

I’m in the same situation as you where I often work on fresh virtual machines, and so I see this a lot too.

grff ,

Same. I’m a developer who uses Edge as my daily driver, and once it’s set up right I love it.

WoodenBleachers ,
@WoodenBleachers@lemmy.basedcount.com avatar

I use Arc on my Mac and it’s nowhere near as nice as that, but I like the side tabs, the way it gets out of the way when I’m searching, and Bing isn’t too bad; I’ve actually used it a few times. Once I found a customizable start page I haven’t looked back. Again, for work.

nogooduser ,

There’s the shopping popup that tries to find better deals or vouchers for products you’re looking at. It’s easy to turn off though.

Searching the settings for “notification” does show others: a feature called Discover and the sidebar apps seem to be able to send notifications, but I’ve never seen either.

netchami ,

If you need to use Chromium, just use Ungoogled-Chromium or Brave. But Firefox/LibreWolf will always be superior.

odium ,

Same, I’m only allowed to use either Chrome or Edge on my work laptop, so I chose Edge.

Librewolf on my personal laptop and Firefox on mobile tho.

TheFriendlyArtificer ,

Bonus for Librewolf!

I love Firefox… But the listicle ads are seriously tacky and annoying. I do not want Pocket. And I do not want Pocket randomly re-enabled after a set of updates.

MaggiWuerze , (edited )

Maybe look at ~~Bromite~~ Cromite? An open-source Chromium browser where you don’t need to disable anything.

neme ,

Bromite has not been updated since January.

One of the old Bromite contributors forked it: github.com/uazo/cromite

MaggiWuerze ,

Yes, that one. Thanks for pointing it out

XpeeN ,

Mulch to

umbrella ,
@umbrella@lemmy.ml avatar

It’s based on Chromium, so one would expect it to be as fast as most modern browsers.

It’s the annoyances built on top of it, and user privacy, that matter in a browser nowadays.

magic_lobster_party , in Programming Languages as Essays

Unity: handing over the essay is going to cost you extra.

Typescript: is this a declaration of war?

captain_aggravated ,
@captain_aggravated@sh.itjust.works avatar

GDScript: This is plagiarism. You can’t just write “extends essay2d.”

WorldieBoi , in Markdown everywhere

Code? .md files on GitHub

bananaw ,

I’ve been having trouble getting syntax highlighting to work on my ‘```’ fenced code blocks. I give it the right/supported language identifier, but nothing changes.

I’m using neovim with a bunch of lsp plugins and treesitter. Anyone have dotfiles with markdown code syntax highlighting working?

naught ,

Are u using Mason and LSPconfig?

edit: Oh, I don’t know that getting syntax highlighting in the blocks is something I’ve seen.

ocelot ,

Have you installed the treesitter grammars for those languages, with :TSInstall language_name or in your treesitter config?

Slotos ,

This is pretty much all that’s needed. The language in the block is identified by a name that follows the opening triple backtick. E.g.:

```python
some carefully indented code
```

Haus ,
@Haus@kbin.social avatar

I'd go PostScript, since it's Turing-complete.

JokeDeity , in data secured

I keep seeing this sentiment from people who are supposedly savvy with computers. I never have to question where a file was saved to on Windows and I’m not sure why you guys do.

_cerpin_taxt_ ,

Right? Seems like Linux fanboy propaganda. If you don’t know where your file saves to, you’re probably incompetent and shouldn’t be near a computer. Even the most incompetent users in my 15-year IT career know how to save something and where it’s saving to.

bernadetteee ,

You’d probably experience it if you were in a OneDrive/SharePoint/Teams bla bla bla shop. AutoSave defaults to On, the default destination is (I think?) the user home folder in OneDrive, and the default Save As does not pop up the system dialog, only your Recents. I feel this meme for sure, and I’m a 25-year IT professional. It’s just poorly built user interaction that someone in the bowels of M$oft thought would be “easier”, but it took away most of the visibility and control from the user.

_cerpin_taxt_ ,

Eh that threw me off when it was new but it’s been a thing for about a decade at this point. My work is all-in on Azure and this has never confused any users as far as I’ve seen, and we’ve got some incredibly ignorant users. Everyone just hits “browse” from that screen and you’re back to the old school save screen.

GeneralEmergency ,

I’ve been seeing that a lot recently. And having been curious before, I never want to touch it.

JokeDeity ,

How do you know if a user is a bot?

Little8Lost ,

Sometimes I’m not sure, like when Paint saves to the file path of a pic that was made a few months before. In that case I use Save As again to see where it would have put my file, and copy the path.

JokeDeity ,

I’m having trouble understanding your sentence.

Little8Lost ,

MS Paint saves stuff to the last given location.

When I save something without remembering the location, I try to save the file again so it gives me an Explorer popup and I can see the location again.

JokeDeity ,

Odd, I just tested this, and clicking save brought up a window for me; it did not automatically go to the last location. And I use the program at least once a month, so it’s not my first time running it or anything.

wpuckering ,
@wpuckering@lm.williampuckering.com avatar

Same here, I’ve never had this problem, ever. I don’t even get how it’s possible to not know where your files are being saved if you are the least bit tech-savvy.

abir_vandergriff ,

I’ve questioned it before when I just didn’t watch where it went, but it usually takes just a few seconds to figure it out most of the time.

Now Android on the other hand…

JokeDeity ,

Hear fucking hear. I never don’t have a hard time figuring out where a saved file went on my phone. And every app seems to have its own idea of where the best place to put downloaded files should be.

amio ,

Garden-variety low-effort meme. Haha Windows (or Windass or Windowns or whatever) bad, so funiiii, lolololololo, etc. A few linuxmemes are basically… this.

Not sure what it does in programmer humor though - if you, as a programmer, find yourself in this situation... just git gud?

QuazarOmega ,

```
git: 'gud' is not a git command. See 'git --help'.
```
JokeDeity ,

pip install good

Potatos_are_not_friends ,

It’s easy to call yourself tech-savvy when you can Google a tutorial.

Hogger85b ,

Yep, it’s just: click the top toolbar, see the breadcrumbs… It used to be a problem 15 years ago. I still question the name it uses when I open a file from Outlook (why not Downloads?), but it’s pretty easy to find again.

isosphere ,

Office is weird about it because of their OneDrive product

fidodo ,

In my experience it’s easiest to find things in Linux, next easiest in Windows, and on OSX, good luck with that.

zerofk ,

One of the very very very few good features of macOS: cmd-click the title bar of a document window to pop up a window with the document location.

It does not work on Microsoft’s products on macOS though.

worfamerryman ,

Windows seems to have irregular behavior in this regard. It usually defaults to the downloads folder. But sometimes it defaults to the last folder I saved a file to.

It might just be Windows being buggy or something, but there were a number of times where I hit save and then the file was not where I expected it to be.

I could have prevented the mistake by paying attention first, but windows could also be consistent.

Droggelbecher ,

It’s not that we literally can’t find it, it’s just that it seems needlessly annoying on Windows/iOS/Android after you get used to Linux.

JokeDeity ,

What’s different for you? I’ve used Ubuntu and Raspbian before and it all seemed about the same as Windows to me.

AlexCory21 , in How programmers comment their code

I had an old job that told me that code is “self-documenting” if you write it “good enough”, and that comments were unnecessary.

It always annoyed the heck out of me. Comments are imo more helpful than hurtful typically.

Is it just me? Or am I weird? Lol.

Andromxda OP ,
@Andromxda@lemmy.dbzer0.com avatar

I absolutely agree, and I too hate this stupid idea of “good code documenting itself” and “comments being unnecessary”.
I have a theory about where this comes from. It was probably some manager who has never written a single line of code, who thought that comments were a waste of time and employees should instead focus on writing code. By telling them that “good code documents itself”, they could also just put the blame on their employees.
“Either you don’t need comments or your code sucks because it’s not self-documenting”
Managers are dumb, and they will never realize that spending a bit of time on writing useful comments may later actually save countless hours, when the project is taken over by a different team, or the people who initially created it, don’t work at the company anymore.

ChickenLadyLovesLife ,

I’ve never had a manager that was even aware of the comments vs. no comments issue. If I ever had, I would have just told them that a lack of comments makes the original coder harder to replace.

VonReposti ,

Code should always document the “how” by itself; otherwise the code most likely isn’t good enough. Something the code can never do is explain the “why”, which is something a lot of programmers skip. If you ever find yourself explaining the “how” in the comments, maybe run through the code once more and see if something can be simplified or variables can get more descriptive names.

For me, that’s what was originally meant with self-documenting code. A shame lazy programmers hijacked the term in order to avoid writing any documentation.

ChickenLadyLovesLife ,

lazy programmers

I don’t think they’re lazy, I think they’re not good writers. Not being able to write well is very common among programmers (not having to communicate with written language is one reason a lot of people go into coding) and in my experience the Venn diagrams for “not a good writer” and “thinks comments are unnecessary” overlap perfectly.

Dropkick3038 ,

And isn’t it such a dangerous overlap! The coder whose writing (in their native language) is unclear, repetitive, convoluted, or hard to follow too often produces code with the same qualities. It’s even worse when the same coder believes “code is self-documenting” without considering why. Code self-documents with careful and deliberate effort, and in my experience, it is the really good writers who are most capable of expressing code in this way.

Daxtron2 ,

It’s definitely a balance. Good code shouldn’t need much commenting, but sometimes you have to do something for a reason that isn’t immediately obvious, and that’s when comments are most useful. If you’re just explaining what a snippet does instead of why you’re doing it that way, there’s probably more work to be done.

alonely0 ,

Document intentions and decisions, not code.

Amir ,
@Amir@lemmy.ml avatar

Code is not self documenting when decision trees are created based on some methodology that’s not extremely obvious

redxef ,

What a function does should be self evident. Why it does it might not be.

Vigge93 ,

Comments should describe the “why?”, not the “how?” or the “what?”, and only when the “why?” is not intuitive.

The problem with comments arises when you update the code but not the comments. This leads to incorrect comments, which might do more harm than no comments at all.

E.g. Good comment: “This workaround is due to a bug in xyz”

Bad comment: “Set variable x to value y”

Note: this only concerns code comments, docstrings are still a good idea, as long as they are maintained
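That good/bad split can be sketched in a few lines of Python (the rate-limiting backstory is an invented example of a “why” worth recording):

```python
def retry_delay(attempt: int) -> float:
    # Why: the upstream API we call (hypothetical) rate-limits bursts,
    # so we back off exponentially instead of hammering it.
    # A "what" comment like "double the delay each attempt" would just
    # restate the next line and rot the moment the formula changes.
    return min(0.1 * (2 ** attempt), 5.0)

print(retry_delay(0), retry_delay(10))  # prints: 0.1 5.0
```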

balp ,

Docstrings are user documentation, not comments. User documentation, with examples (tests), is always useful.

Vigge93 ,

As long as it’s maintained. Wrong documentation can often be worse than no documentation.

Ephera , (edited )

In my opinion, it strongly depends on what you’re coding.

Low-level code where you need to initialize array indices to represent certain flags? Absolutely comment the living shit out of that. → See response.
High-level code where you’re just plumbing different libraries? Hell no, the code is just as easily readable as a comment.

I do also think that, no matter where you lie in this spectrum, there is always merit to improving code to reduce the need for documentation:

  • Rather than typing out the specification, write a unit/integration test.
  • Rather than describing that a function should only be called in a certain way, make it impossible to do it wrongly by modelling this in your type system.
  • Rather than adding a comment to describe what a block of code does, pull it out into a separate function.
  • Rather than explaining how a snippet of code works, try to simplify it, so this becomes obvious.

The thing with documentation is that it merely makes it easier to learn about complexity, whereas a code improvement may eliminate this complexity or the need to know about it, because the compiler/test will remember.

This does not mean you should avoid comments like they’re actively bad. As many others said, particularly the “why” is not expressable in code. Sometimes, it is also genuinely not possible to clean up a snippet of code enough that it becomes digestable.
But it is still a good idea, when you feel the need to leave a comment that explains something else than the “why”, to consider for a moment, if there’s not some code improvement you should be doing instead.
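The “pull it out into a separate function” point might look like this in Python (the pricing rule is made up for illustration):

```python
# Before: a comment labels an anonymous block of code.
#   # apply the bulk discount
#   total = sum(prices)
#   if len(prices) >= 10:
#       total *= 0.9

# After: the function name carries the same information, and unlike
# the comment it stays attached to the code it describes.
def total_with_bulk_discount(prices: list[float]) -> float:
    total = sum(prices)
    if len(prices) >= 10:
        total *= 0.9  # Why: 10+ items qualify for the (invented) 10% bulk rate
    return total
```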

Miaou ,

Hard disagree on your first point. Name the flags with descriptive name, move this initialisation to a function, and there you go, self-documented and clear code.

Ephera ,

Hmm, maybe my opinion is just shit in that regard. I don’t code terribly much low-level, so I’m probably overestimating the complexity and underestimating the options for cleaning things up.
That was kind of just a random example, I felt like there were many more cases where low-level code is complex, but I’m probably basing this off of shitty low-level code and forgetting that shitty high-level code isn’t exactly a rarity either.

AdNecrias ,

I’m with you, but sometimes you don’t have the chance in low-level code. The most you can do is create local variables just so the bits you’re XORing are more obvious. And whenever that would be wasteful and the compiler doesn’t get rid of it, you’re better off with comments (which you need to maintain, ugh).

Blackmist ,

Code is the what. Comments are the why.

Few things worse than an out of date comment.

AdNecrias ,

Good code is self-documenting in the sense that you don’t need to describe what it is doing; it is clear to read. Whoever says that, and isn’t just repeating what they heard, understands that whenever you are doing something not explicit in the code, it should be in a comment.

Workarounds and explaining you need to use this structure instead of another for some reason are clear examples, but business hints are another useful comment. Or sectioning the process (though I prefer descriptive private functions or pragma regions for that).

It also addresses the hint that the code should be readable because you’re not going to have comments to explain spaghetti. Just a hint, doesn’t prevent it. Others also said it, comments are easier to get outdated as you don’t have the compiler to assist. And outdated comments lead to confusion.

humbletightband ,

I follow these simple rules and encourage my colleagues to do so

  1. If I’m just shuffling jsons, then yes, the code should be self documented. If it’s not, the code should be rewritten.
  2. If I implement some complex logic or algorithm, then the documentation should be written both to tests and in the code. Tests should be as dull as possible.
  3. If I write multithreading, the start, interruption, end, and shared variables should be clearly indicated by all means that I have: comment, documentation, code clearness. Tests should be repeated and waits should not be over 50ms.
perviouslyiner , (edited )

What they mean is that the variable names and function names are documentation.

For example changing “for( i in getList() )” to “for( patient in getTodaysAppointments() )” is giving the reader more information that might negate the need for a comment.

Dropkick3038 ,

I actually agree that “good enough” code can be self-documenting, but it isn’t always enough to achieve my goal which is to make the code understandable to my audience with minimal effort. With that goal in mind, I write my code as I would write a technical document. Consider the audience, linear prose, logical order, carefully selected words, things like that… In general, I treat comments as a sort of footnote, to provide additional context where helpful.

There are limits to self-documenting code, and interfaces are a good example. With interfaces, I use comments liberally because so many of the important details about the implementation are not obvious from the code: exactly how the implementation should behave, expected inputs and outputs under different scenarios, assumptions, semantic meaning, etc. Without this information, an implementation cannot be tested or verified.

homura1650 ,

Have you ever worked in a place where every function/field needed a comment? Most of those comments end up being “This is the <variable name>” or “this does <method name>”. Beyond being useless, those comments are counterproductive. The amount of screen space they take up (even if greyed out by the IDE) significantly hurts legibility.

Alexstarfire ,

And a good IDE lets you hide it, so… what is your point?

EpeeGnome ,

The issue with having mandatory useless comments is that any actually useful comments get lost in the noise.

Alexstarfire ,

I get what you’re saying. Perhaps I just haven’t had too many variables and such that have had such comments. VsCode shows the comments on hover when you’re in other parts of the code base. Which makes most any comment useful because something that is obvious in one part of the code isn’t immediately obvious in another. Though, that necessitates making comments that actually help you figure that out.

englislanguage ,

I have worked on larger, older projects. The more comments you have, the larger the chance that code and comments diverge. Often, code is changed/adapted/fixed, but the comments are not. If you read the comments then, your understanding of what the code does or should do goes wrong, leading you down a wrong path. This is why I prefer to have rather fewer comments. Most of the code is self-explanatory if you properly name your variables, functions, and whatever else you are working with.

englislanguage ,

One example of self-documenting code is typing. If you use a language which enforces (or at least allows, as in Python 3.8+) strong typing, and you use types proactively, this is better than documentation, because it can be read and worked with by the compiler or interpreter. In contrast to documented types, the compiler (or interpreter) will enforce that the code’s meaning and the type specification do not diverge. This includes explicitly marking parameters/arguments and return types as optional if they are.

I think no reasonable software developer should work without enforced type safety unless working with pure assembler languages. Any (higher) language which does not allow enforcing strong typing is terrible.
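A small Python sketch of types-as-documentation (the function and dict are made up; a checker like mypy enforces the contract):

```python
from typing import Optional

def find_port(services: dict[str, int], name: str) -> Optional[int]:
    # The Optional return type is machine-checked documentation:
    # callers are forced to consider the missing case, and the
    # annotation cannot silently drift out of date like a comment.
    return services.get(name)

port = find_port({"http": 8080}, "https")
if port is None:
    port = 443  # from here on, a type checker narrows port to int
print(port)  # prints: 443
```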

drspod , in Stop comparing programming languages

ITT: Rust programmers rewriting the joke in Rust.

makingStuffForFun , in we love open source!!1!
@makingStuffForFun@lemmy.ml avatar

Then …

Join discord

NateNate60 ,

Congratulations. You have successfully repeated the joke.

Prunebutt ,

To be fair: the cropping makes it hard to spot.

dabu ,
@dabu@lemmy.world avatar

I believe that was intended. It’s a way to “hide” the punchline on an image so it’s not obvious at the first glimpse

MostlyBlindGamer ,
@MostlyBlindGamer@rblind.com avatar

[sits quietly in the corner]

makingStuffForFun ,
@makingStuffForFun@lemmy.ml avatar

Congratulations. You have sharp observational skills.

kevincox , in Unused variables
@kevincox@lemmy.ml avatar

An IDE warning is one thing; Go refuses to compile. Like, calm down, I’m going to use it in a second. Just let me test the basics of my new method before I start using this variable.

Or every time you add or remove a printf it refuses to compile until you remove that unused import. Please just fuck off.
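For anyone who hasn’t hit it: this is a hard compile error, not a warning, and the blank identifier is the usual temporary escape hatch while you’re still sketching (the names here are placeholders):

```go
package main

import "fmt"

// compute stands in for the new method being tested.
func compute() int {
	return 42
}

func main() {
	result := compute()
	// Without the next line, the build stops with a
	// "declared and not used" error: no warning, no binary.
	_ = result

	fmt.Println("still compiles")
}
```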

treechicken ,
@treechicken@lemmy.world avatar

VSCode with Go language support: removes unused variable on save “Fixed that compilation bug for ya, boss”

kevincox ,
@kevincox@lemmy.ml avatar

Like actually deletes them from the working copy? Or just removes them in the code sent to the compiler but they still appear in the editor?

FizzyOrange ,

Yeah IIRC it deletes them, which is as mad as you would expect. Maybe they’ve fixed that since I used it last which was some years ago.

jose1324 ,

Bruh that’s insane

FizzyOrange ,

Yeah, I think it’s trauma from C/C++’s awful warning system, where you need a gazillion warnings for all the flaws in the language. But because there are a gazillion of them, and some are quite noisy and prone to false positives, it’s extremely common to ignore them. Even worse, even the deadly no-brainer ones (e.g. not returning something from a function that says it will) tend to be off by default, which means it is common to release code that triggers some warnings.

Finally C/C++ doesn’t have a good packaging story so you’ll pretty much always see warnings from third party code in your compilations, leading you to ignore warnings even more.

Based on that, it’s very easy to see why the Go people said “no warnings!”. An unused variable should definitely be at least a warning so they have no choice but to make it an error.

I think Rust has proven that it was the wrong decision though. When you have proper packaging support (as Go does), it’s trivial to suppress warnings in third party code, and so people don’t ignore warnings. Also it’s a modern language so you don’t need to warn for the mistakes the language made (like case fall through, octal literals) because hopefully you didn’t make any (or at least as many).

Rhaedas , (edited ) in "prompt engineering"

LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings to produce the best response to a prompt. They only feel intelligent because we can't see the inner workings, the way we could see the IF/THEN statements of ELIZA, and yet many people were still convinced that ELIZA was talking to them. Humans are wired to anthropomorphize, often to a fault.

I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. What is concerning is that even though LLMs are not "thinking" themselves, the way we've dived head-first into them, ignoring the dangers of misuse and the many flaws they have, is telling about how we'll ignore avoiding problems in AI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

HAL from 2001/2010 was a great lesson - it's not the AI...the humans were the monsters all along.

FaceDeer ,
@FaceDeer@fedia.io avatar

I wouldn't be surprised if someday when we've fully figured out how our own brains work we go "oh, is that all? I guess we just seem a lot more complicated than we actually are."

Rhaedas ,

If anything I think the development of actual AGI will come first and give us insight on why some organic mass can do what it does. I've seen many AI experts say that one reason they got into the field was to try and figure out the human brain indirectly. I've also seen one person (I can't recall the name) say we already have a form of rudimentary AGI existing now - corporations.

antonim ,

Something of the sort has already been claimed for language/linguistics, i.e. that LLMs can be used to understand human language production. One linguist wrote a pretty good reply to such claims, which can be summed up as “this is like inventing an airplane and using it to figure out how birds fly”. I mean, who knows, maybe that even could work, but it should be admitted that the approach appears extremely roundabout and very well might be utterly fruitless.

BigMikeInAustin ,

True.

That’s why consciousness is “magical,” still. If neurons ultra-basically do IF logic, how does that become consciousness?

And the same with memory. It can seem to boil down to one memory cell reacting to a specific input. So the idea is called “the grandmother cell.” Is there just 1 cell that holds the memory of your grandmother? If that one cell gets damaged/dies, do you lose memory of your grandmother?

And ultimately, if thinking is just IF logic, does that mean every decision and thought is predetermined and can be computed, given a big enough computer and all the exact starting values?

huginn ,

You’re implying that physical characteristics are inherently deterministic while we know they’re not.

Your neurons are analog and noisy and sensitive to the tiny fluctuations of random atomic noise.

Beyond that: they don’t do “if” logic, it’s more like complex combinatorial arithmetics that simultaneously modify future outputs with every input.

BigMikeInAustin ,

Thanks for adding the extra info (not sarcasm)

huginn ,

Absolutely! It’s a common misconception about neurons that I see in programming circles all the time. Before my pivot into programming I was pre-med and a physiology TA - I’ve always been interested in neurochemistry and how the brain works.

So I try and keep up with the latest about the brain and our understanding of it. It’s fascinating.

FaceDeer ,
@FaceDeer@fedia.io avatar

Though I should point out that the virtual neurons in LLMs are also noisy and sensitive, and the noise they use ultimately comes from tiny fluctuations of random atomic noise too.

DrRatso ,

Physics, and more to the point QM, appears probabilistic, but whether or not it is deterministic is still up for debate. Until such a time that we develop a full understanding of QM, we can not say for sure. Personally I am inclined to think we will find deterministic explanations in QM; it feels like nonsense to say that things could have happened differently. Things happen the way they happen, and if you would rewind time to before an event, it should resolve the same way.

huginn ,

Fair - it’s not that we know it’s not: it’s that we don’t know that it is.

A probabilistic model is just as likely as a deterministic one; we’ve found absolutely nothing disproving probabilistic models. We’ve only found reinforcement for those models.

It’s unintuitive to humans so of course we don’t want to believe it. It remains to be seen if it’s true.

DrRatso ,

It’s worth mentioning that certain mainstream interpretations are also concretely deterministic. For example, many-worlds is actually a deterministic interpretation: the multiverse is deterministic; your particular branch simply appears probabilistic. Much more deterministic is Bohmian mechanics. The Copenhagen interpretation, however, maintains randomness.

huginn ,

Sure, but interpretations like pilot wave have more evidence against them than for them, and while many-worlds is deterministic, it’s only technically so. It’s effectively probabilistic, in that everything happens and therefore nothing is determined strictly by the current state.

ricdeh ,
@ricdeh@lemmy.world avatar

Individual cells do not encode any memory. Thinking and memory stem from the great variety and combinational complexity of synaptic interlinks between neurons. Certain “circuit” paths are reinforced over time as they are used. The computation itself (thinking, recalling) then is “just” incredibly complex statistics over millions of synapses. And the most awesome thing is that all this happens through chemical reaction chains catalysed by an enormous variety of enzymes and other proteins, and through electrostatic interactions that primarily involve sodium ions!

DrRatso ,

Anil Seth has interesting lectures on consciousness, specifically on predictive processing theory. Under this view, the brain essentially simulates reality as a sort of prediction, and this simulated model is what we then subjectively perceive as consciousness.

“Every good regulator of a system must be a model of that system.” In other words, consciousness might exist because, to regulate our bodies and execute different actions, we must have an internal model of ourselves, as well as of ourselves in the world.

As for determinism: the idea of libertarian free will is not really seriously entertained by philosophy these days. The main question is whether there is any inkling of free will to cling to (compatibilism), but, generally, it is more likely than not that our consciousness is deterministic.

BigMikeInAustin ,

Interesting about moving towards consciousness being deterministic.

(I haven’t been keeping up with that)

DrRatso ,

It's not that odd if you think about it. Everything else in this universe is deterministic. Well, quantum mechanics, as we observe it, is probabilistic, but still governed by rules and calculable, thus predictable (I also believe it is, in some sense, deterministic). For there to be free will, we would need some form of “special sauce”, yet to be uncovered, that grants us the freedom and agency to act outside of these laws.

skyspydude1 ,

This had an interesting part in Westworld, where at one point they go to a big database of minds that have been “backed up” in a sense, and they’re fairly simple “code books” that define basically all of the behaviors of a person. The first couple seasons have some really cool ideas on how consciousness is formed, even if the later seasons kind of fell apart IMO

GregorGizeh ,

It isn't so much “we” as in humanity; it is a select few very ambitious and very reckless corpos who are pushing for this, to the detriment of the rest (surprise).

If “we” were able to rein in our capitalists, we could develop the technology much more ethically and in compliance with the public good. But no, we leave the field to corpos with delusions of grandeur (does anyone remember the short spat within the OpenAI leadership? Altman got thrown out for recklessness, investors and some employees complained, he came back, and the whole more considerate and careful wing of the project got ousted).

frezik ,

I find that a lot of the reasons people put up for saying “LLMs are not intelligent” are wishy-washy, vague, untestable nonsense. It’s rarely something where we can put a human and ChatGPT together in a double-blind test and have the results clearly show that one meets the definition and the other does not. Now, I don’t think we’ve actually achieved AGI, but more for general Occam’s Razor reasons than something more concrete; it seems unlikely that we’ve achieved something so remarkable while understanding it so little.

I recently saw this video lecture by a neuroscientist, Professor Anil Seth:

royalsociety.org/…/faraday-prize-lecture/

He argues that our language is leading us astray. Intelligence and consciousness are not the same thing, but the way we talk about them with AI tends to conflate the two. He gives examples of where our consciousness leads us astray, such as seeing faces in clouds. Our consciousness seems to really like pulling faces out of false patterns. Hallucinations would be the times when the error correcting mechanisms of our consciousness go completely wrong. You don’t only see faces in random objects, but also start seeing unicorns and rainbows on everything.

So when you say that people were convinced that ELIZA was an actual psychologist who understood their problems, that might be another example of our own consciousness giving the wrong impression.

vcmj ,

Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense... that's a whole other kettle of fish). I'd consider all “thinking things” to be machines, but if a machine responds to input in always the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has potential for sentience. LLMs can certainly do continuous learning, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.

I’m still working on this definition, again just a personal viewpoint.

hemko ,

How do you know you’re conscious?

Odinkirk ,
@Odinkirk@lemmygrad.ml avatar

Let’s not put Descartes before the horse.

vcmj ,

I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer. I don't know what you actually mean.

hemko ,

deleted_by_author

    Munrock ,
    @Munrock@lemmygrad.ml avatar

    Conscious and Conscience are different things (but understandably easy to conflate)

    root_beer ,

    Conscience and consciousness are not the same thing

    vcmj ,

    I do think we're machines; I said so previously. I don't think there is much more to it than physical attributes, but those attributes let us have this discussion. Remarkable in its own right; I don't see why it needs to be more. But again, all personal opinion.

    Potatos_are_not_friends ,

    All my programming shit posts ruining future developers using AI

    https://lemmy.world/pictrs/image/48a58c2e-acb4-4bf1-a880-b57e99635607.gif

    Hazzard ,

    I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.

    Because fundamentally it doesn’t understand the words it’s writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect issues like hallucination and “Waluigis” or “jailbreaks” are fundamental issues for a language model trying to complete a story, compared to an actual intelligence with a purpose.

    MonkderDritte ,

    LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings for the best responses to a prompt. They only feel intelligent because we can't see their inner workings.

    Almost like children.

    FaceDeer ,
    @FaceDeer@fedia.io avatar

    Or, frankly, adults.

    PiratePanPan , in My wife was unimpressed by Vim
    @PiratePanPan@lemmy.dbzer0.com avatar

    > my wife

    > vim user

    fake

    Syringe ,

    At first, I was mad. Then the slow, sad realization that you’re more right than not…

    billwashere ,

    Hey I’m married and use Vim. I feel attacked 😅

    PriorityMotif ,
    @PriorityMotif@lemmy.world avatar

    You must be very good at masking.

    billwashere ,

    I was just playing BOTW looking for a mask does that count?

    PriorityMotif ,
    @PriorityMotif@lemmy.world avatar

    Subject doesn’t understand social cues

    moitoi ,
    @moitoi@lemmy.dbzer0.com avatar

    Great to see this type of humor popping up.

    blotz , in Google cosplay is not business-critical
    @blotz@lemmy.world avatar

    xD Just blocked the spammer and all his comments disappeared. Imagine working so hard to spam, and it takes someone 2s to hide your posts.

    abbadon420 ,

    2 clicks, reload the thread and it’s gone. Easy peasy!

    slazer2au ,

    I thought my client was chucking a wobbly with so many removed comments by the same person.

    quicksand ,

    Oop looks like he moved servers. I blocked them on one yesterday and just saw their post again. Oh well, another 2s wasted :p

    BakedCatboy ,

    Lemmy really needs Pixelfed's naive Bayes spam detection. It would be able to easily classify the new accounts after one post is flagged as spam, and then it would be 0 seconds wasted.
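
    For the curious: naive Bayes spam detection just counts how often each word shows up in known-spam versus known-ham posts and compares log-probabilities. Here's a minimal sketch of the idea (the class and method names are illustrative, not Pixelfed's actual code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal naive Bayes spam scorer. Illustrative only; Pixelfed's real
// implementation will differ in structure and features.
class NaiveBayesSpam {
    private final Map<String, Integer> spamCounts = new HashMap<>();
    private final Map<String, Integer> hamCounts = new HashMap<>();
    private int spamDocs = 0, hamDocs = 0;

    void train(String text, boolean isSpam) {
        if (isSpam) spamDocs++; else hamDocs++;
        Map<String, Integer> counts = isSpam ? spamCounts : hamCounts;
        for (String w : text.toLowerCase().split("\\W+")) {
            counts.merge(w, 1, Integer::sum);
        }
    }

    // log P(spam | text) - log P(ham | text); positive means "probably spam"
    double score(String text) {
        int spamTotal = spamCounts.values().stream().mapToInt(i -> i).sum();
        int hamTotal = hamCounts.values().stream().mapToInt(i -> i).sum();
        Set<String> vocab = new HashSet<>(spamCounts.keySet());
        vocab.addAll(hamCounts.keySet());
        double s = Math.log((double) spamDocs / (spamDocs + hamDocs));
        double h = Math.log((double) hamDocs / (spamDocs + hamDocs));
        for (String w : text.toLowerCase().split("\\W+")) {
            // Laplace smoothing so unseen words don't zero out the estimate
            s += Math.log((spamCounts.getOrDefault(w, 0) + 1.0) / (spamTotal + vocab.size()));
            h += Math.log((hamCounts.getOrDefault(w, 0) + 1.0) / (hamTotal + vocab.size()));
        }
        return s - h;
    }

    public static void main(String[] args) {
        NaiveBayesSpam clf = new NaiveBayesSpam();
        clf.train("buy cheap pills now", true);
        clf.train("limited offer buy now", true);
        clf.train("meeting notes attached", false);
        clf.train("lunch tomorrow maybe", false);
        System.out.println(clf.score("buy pills now") > 0);          // spam-ish
        System.out.println(clf.score("meeting notes tomorrow") > 0); // ham-ish
    }
}
```

    After training on a handful of labelled posts, every further post from the same spammer scores heavily spam-positive, which is why a single manual classification can be enough.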

    quicksand ,

    I know some of those words and agree that that would be better

    LostXOR ,

    What's even up with that guy? What's he trying to accomplish? Spammers confuse me.

    kakes ,

    Some bored kid, I would assume.

    Lemminary ,

    I feel like they were banned or something and decided to go scorched Earth on Lemmy

    RemiliaScarlet ,
    @RemiliaScarlet@eviltoast.org avatar

    Sneed is light. Jannies are darkness.

    The janny is the accursed one, fit only to consume feces.

    jbk ,

    Couldn’t a bot just automate that easily? Especially with how open Lemmy’s API probably is

    Ephera , in Hey, I'm new to GitHub!

    It ain’t called git-hub for nothing. The social network for gits. How else are they supposed to behave?

    BradleyUffner ,

    I'm pretty sure this is aimed at websites that have a “download” or “get X now” link that just takes you to a GitHub page with no obvious download section. It isn't uncommon, and it can be frustrating. At the very least, it's a bad user experience.

    Comradesexual ,
    @Comradesexual@lemmygrad.ml avatar

    It is really shit and hard to find for many projects.

    Templa ,

    The average internet user doesn't even know what git is, so I think it's very likely that a lot of people don't understand how GitHub works and are very upset by how “difficult” it can be to get an installer from it.

    bleistift2 , (edited ) in I had to design a simple general purpose language for university, so I tried creating "ZoomerScript" with Jetbrains MPS
    
    class Scratch {
      // Start of file

      public static void main(args: string[]) {
        int number1 = 2;
        number 1 = 10;
        int number2 = 13;
        boolean fo_sure = true;

        if (fo_sure) {
          number1 = number1 + 5 - 10 * 2 / 3;
        }

        System.out.println(number1);

        boolean canYouSeeMee = false;
        System.out.println(canYouSeeMe);
        if (false) {
          canYouSeeMe = false;
        } else {
          canYouSeeMe = true;
        }
        System.out.println(canYouSeeMe);
      }
    }
    

    What’d I win?

    I find it interesting and unnerving that I understood the code, but not the youthspeak.

    prof OP ,
    @prof@infosec.pub avatar

    Well done, here's your prize: 🏅

    You may redeem it for a star on a GitHub repo of your choice.

    It all gets put into the main method though in this version 😄

    RedditWanderer ,

    canYouSeeMe = !canYouSeeMe;

    bleistift2 ,

    There are even more optimization possibilities, but I wanted to stay as close to the original as possible.

    Buddahriffic ,

    Yeah, it can be optimized down to a single constant print statement.
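
    For anyone who wants to see it spelled out: under the (big) assumptions that the stray `number 1 = 10;` line was a typo to be dropped and `canYouSeeMee` was meant to be `canYouSeeMe`, constant folding collapses the whole thing to three prints. Note the integer division: 2 + 5 - 10 * 2 / 3 is 2 + 5 - 6 = 1.

```java
// Hypothetical fully constant-folded version of the snippet above
// (assumes the "number 1 = 10;" line is dropped and the canYouSeeMee
// declaration typo is fixed, so the program actually compiles).
class ScratchFolded {
    public static void main(String[] args) {
        System.out.println(1);     // 2 + 5 - 10 * 2 / 3 with integer division
        System.out.println(false); // first print of canYouSeeMe
        System.out.println(true);  // if (false) always falls to the else branch
    }
}
```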

    steersman2484 ,

    Isn’t the second if condition false?

    bleistift2 ,

    Thanks, I corrected it.

    prettybunnys , in Bug Thread

    I'm actually part of an email chain that randomly got created because of a bug on GitHub that spawned an issue out of nowhere.

    Every year for the past decade or so folks pop up, say hi, talk about life, etc.

    We’ve celebrated birthdays, graduations, marriages and births and talked about job losses and even death of loved ones.

    Thanks random GitHub bug.

    Feathercrown ,
    NewAgeOldPerson ,

    I thought I had read every xkcd there is. That was beautiful!

    Rodeo ,

    Damn that last panel hits hard lol
