InvertedParallax , to technology in LZ77 Is All You Need? Why Gzip + KNN Works for Text Classification

I mean, it is tokenizing in a way, by generating dictionary entries and sequences.

Wouldn’t say it’s all we need.
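
For anyone curious, a rough sketch of the gzip + kNN idea the title refers to, using normalized compression distance (illustration only; the paper’s distance details, k, and corpus are far more involved):

```python
import gzip

def ncd(a: str, b: str) -> float:
    # Normalized compression distance: how much better do a and b compress together?
    ca = len(gzip.compress(a.encode()))
    cb = len(gzip.compress(b.encode()))
    cab = len(gzip.compress((a + " " + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

# Tiny made-up training set; the texts and labels are placeholders.
train = [
    ("the cat sat on the mat with another cat", "animals"),
    ("stock prices fell sharply in early trading", "finance"),
]

def classify(text: str) -> str:
    # 1-nearest-neighbour by NCD; a real setup uses k-NN over a large corpus.
    return min(train, key=lambda item: ncd(text, item[0]))[1]

print(classify("a dog chased the cat across the yard"))  # likely "animals"
```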

DefinitelyNotBirds , to technology in Understanding Immortal Objects in Python 3.12: A Deep Dive into Python Internals

deleted_by_author

  • fartsparkles , (edited )

    Python isn’t used for the problem space of speed… Try using PHP for data science visualisation. Need a library to solve a problem? Ruby probably doesn’t have it. Need to write a quick script to transform some data? You’ll spend more time than doing it by hand while you accidentally mismanage memory in C++. Want to scrape a website for some text? Enjoy tonnes of boilerplate and 30 lines of Java vs the three lines of Python (see the sketch below).

    Python isn’t about speed, and it isn’t even about being the best at anything; it’s about being good enough at pretty much any task your average coder needs to pull off in as short a time as possible, all with a single language. Plus, it’s way easier to bounce to another language from Python should you need to (for instance, it’s much easier to write just the speed-critical component in Go than the entire application stack in Go - Python has a much lower cognitive load, and developers can easily extend or refactor thanks to Pythonic code).

    The number of Python libraries that are actually written in C is huge, and Python makes no attempt to suggest it’s more performant than something like C - why do you think Python bindings even exist?

    Thinking speed is a ding on Python shows a gross lack of knowledge of Python and its purpose.
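
    To put the scraping claim in concrete terms, here’s a rough stdlib-only sketch (the URL is just a placeholder, and real code would use an HTML parser rather than a regex):

    ```python
    from urllib.request import urlopen
    import re

    # Fetch a page and crudely strip tags to get "some text".
    html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", " ", html)
    print(text)
    ```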

    DefinitelyNotBirds ,

    deleted_by_author

  • fartsparkles ,

    What are you even on about? I’ve likely been writing code since long before you were born. And judging by your post history of drug memes and suggesting Python scripts to people, I’m concerned you don’t know what you’re on about either.

    DefinitelyNotBirds ,

    deleted_by_author

  • skulblaka ,

    Incredible, DefinitelyNotBirds coming in with the unprecedented triple down! Unsatisfied with being proven wrong once and being made a fool of once, DefinitelyNotBirds is now making a fool of themselves in order to even the score. Amazing stuff, folks. Tune in for the next round of methposting and see what happens next!

    DefinitelyNotBirds ,

    deleted_by_author

  • fartsparkles ,

    Ah, I see you’ve discovered the secret loophole to being infallible: simply declare it with a “lol” at the end! 🤣

    This witty retort actually brought to you by ChatGPT (I made an account just for you).

    Edit: Actually, here’s some more as they’re great:

    Well, aren’t you the Picasso of self-assurance, splashing confidence all over the place! 🎨😄

    Ah, I see you’ve mastered the art of selective attention! Impressive! 😄

    Ah, the master of self-certainty and the commander of error-free expeditions!

    Your unshakable confidence is truly a sight to behold, a shining monument to perfection! Bravo!

    Your unwavering conviction could power a small village with its renewable energy!

    I’m taking notes on your exceptional ability to dance gracefully between fact and fiction!

    They say humility is overrated, and you, my friend, are living proof of that maxim!

    adhdplantdev ,

    Tell me you have no idea what programming languages are used for without telling me you have no idea what programming languages are used for.

    DigitalWebSlinger ,

    Too many negative words for ChatGPT, imo - “isn’t”, “not”, etc. ChatGPT is usually positive and friendly to a fault.

    Maybe you could provide a prompt that would output something substantially similar to what they wrote?

    LoafyLemon ,

    That's rich coming from you when your first message was written by ChatGPT.

    lonke ,

    Not everyone with an attention span longer than yours is ChatGPT.

    Assuming you’ve even read THIS far, that comment lacks all of the common gpt hallmarks.

    magic_lobster_party ,

    Try training a deep learning model on your GPU with some other programming language.
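
    A minimal sketch of what that looks like in Python (this assumes PyTorch is installed and a CUDA GPU is available; the model and data are toy placeholders):

    ```python
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(10, 1).to(device)                # toy model
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(256, 10, device=device)            # random training batch
    y = torch.randn(256, 1, device=device)

    for step in range(100):                            # tiny training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

    print(loss.item())
    ```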

    eager_eagle ,

    that’s like making a snarky comment about how shovels are terrible at hitting nails

    DefinitelyNotBirds , to technology in Understanding Immortal Objects in Python 3.12: A Deep Dive into Python Internals

    Breaking News: Python Sets New Record in Slowness Competition! Participants in awe of how Python lags behind C++, Java, C#, Ruby, Go and PHP in the race to the finish line. Stay tuned as we watch Python take its leisurely stroll in the world of programming speed!!!

    Kerfuffle ,

    Participants in awe of how Python lags behind C++, Java, C#, Ruby, Go and PHP

    Comparing Python to compiled languages like C++ is pretty unreasonable.

    xep ,

    You will appreciate this news then since it’s an optimization that’ll make it faster.

    InvertedParallax ,

    It’s python, it’s supposed to be slow.

    Assembly runs the hardware, the C kernel runs the assembly, C++ runs the libraries on the kernel, Python runs the C++, and the human runs the Python.

    I used to write all my UIs in C++, but the turnaround time for Python is just incredible: write the business logic there, and if it turns out you’re using it too much, lower it to C++.

    tdawg , to technology in How CPython Implements and Uses Bloom Filters for String Processing

    On one hand, reading articles and blogs always seems to be the best place to find good information about tech topics; on the other hand, my ever-growing list of articles to read is starting to feel like a super task.

    abhi9u OP ,

    I have the same problem. The number of things I want to read and write about is scaling faster than I can tackle them :)

    drre , to technology in An Analysis of DeepMind's 'Language Modeling Is Compression' Paper

    does anyone know whether these results were obtained while taking the size of the dictionary into account?

    abhi9u OP ,

    Do you mean the number of tokens in the LLM’s tokenizer, or the dictionary size of the compression algorithm?

    The vocab size of the pretrained models is not mentioned anywhere in the paper. They did, however, conduct an experiment where they measured compression performance using tokenizers of different vocabulary sizes.

    If you meant the dictionary size of the compression algorithm, then there was no dictionary, because they only used arithmetic coding to do the compression, which doesn’t use dictionaries.

    AbouBenAdhem ,

    It looks like they did it both ways (“raw rate” vs “adjusted rate”):

    In the case of the adjusted compression rate, the model’s size is also added to the compressed size, i.e., it becomes (compressed size + number of model parameters) / raw size. This metric allows us to see the impact of model parameters on the compression performance. A very large model might be able to compress the data better compared to a smaller model, but when its size is taken into account, the smaller model might be doing better. This metric allows us to see that.
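
    With toy numbers (made up purely to illustrate the formula):

    ```python
    # Hypothetical sizes, not taken from the paper.
    raw_size = 1_000_000        # size of the raw data
    compressed_size = 150_000   # size after compression by the model
    model_size = 300_000        # the model itself, counted in the same units

    raw_rate = compressed_size / raw_size                      # 0.15
    adjusted_rate = (compressed_size + model_size) / raw_size  # 0.45

    # A big model can win on raw_rate yet lose on adjusted_rate
    # once its own size is charged against it.
    ```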

    abhi9u OP ,

    Yes. They also mention that using such large models for compression is not practical because their size dwarfs any amount of data you might want to compress. But this result gives a good picture of how generalized such large models are, and how well they are able to predict the next tokens for image/audio data with high accuracy.

    AbouBenAdhem , to technology in An Analysis of DeepMind's 'Language Modeling Is Compression' Paper

    I wonder if this is actually comparable to the way our brains store long-term memory?

    abhi9u OP ,

    Interesting. I’m just thinking aloud to understand this.

    In this case, the models are looking at a sequence of bytes in their context and are able to predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So, do you mean our brain uses the concept to predict a token, which gets decoded in the form of a memory?

    AbouBenAdhem ,

    Firstly—maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.

    But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.

    abhi9u OP ,

    Yes, that makes much more sense.

    GenderNeutralBro ,

    This seems likely to me. The common saying is that “you hear what you want to hear”, but I think more accurately it’s “you remember what has meaning to you”. Recently there was a study showing that even visual memory is tightly integrated with spoken language: www.science.org/doi/10.1126/sciadv.adh0064

    However, there’s a lot of variation in memory among humans. See: The Mind of a Mnemonist.

    InvertedParallax ,

    No, because our brains also use hierarchical activation for association, which is why, if we’re talking about bugs and I say “I got a B”, you assume it’s a stinging insect, not a passing grade.

    If it were simple word2vec, we wouldn’t have that additional means of noise suppression.

    xhduqetz , to programming in A Linear Algebra Trick for Computing Fibonacci Numbers Fast

    At the same time, it is also worth noting that the closed-form formula works with irrational numbers, which can only be represented approximately in computers, and thus in some rare cases the method may produce an incorrect result due to approximation errors.

    I’m nitpicking, but the golden ratio can actually be represented exactly in computers. This is because the golden ratio is not merely an irrational number but also an algebraic number, and any real algebraic number can be represented exactly by a polynomial with integer coefficients together with two rationals that isolate which of its roots is meant. Alas, multiplication of algebraic numbers is quite involved and certainly far slower than the linear algebra approach for Fibonacci numbers.
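
    That said, the linear algebra approach sidesteps the issue entirely because it works in exact integer arithmetic. A minimal sketch (presumably close to the trick in the post, i.e. fast exponentiation of the matrix [[1, 1], [1, 0]]):

    ```python
    def mat_mul(a, b):
        # 2x2 integer matrix product, written out explicitly.
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
                (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

    def fib(n: int) -> int:
        # [[1,1],[1,0]]**n has F(n) in its top-right entry;
        # square-and-multiply keeps this to O(log n) matrix products.
        result, base = ((1, 0), (0, 1)), ((1, 1), (1, 0))
        while n:
            if n & 1:
                result = mat_mul(result, base)
            base = mat_mul(base, base)
            n >>= 1
        return result[0][1]

    print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    ```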
