

Either poorly-federated instance, or you look in the wrong place? Here’s a good one: https://peertube.wtf/videos/browse?live=false
Also, lazier. I'm more likely to stick with information from the first 1-3 search results I decide to click, while AI will parse and summarize dozens in a fraction of the time I'd spend reading just one.
In this study they asked it to replicate the headline, publisher, and date 1:1. So, for example, if the AI rephrased a headline as something synonymous, it would be considered at least partially incorrect. Summarization doesn't require accurate citation, so that needs a separate study.
I use it instead of search most of the time nowadays. Why? Because it does proceed to google it for me: it parses the search results, reads the pages behind those links, summarizes everything from there, presents it to me in a short condensed form, and also provides the links where it got the info from. This feature has been here for a while.
People came to Lemmy explicitly because Reddit bans you for disliking billionaires now.
It's not that I like them or anything, but it's entirely irrelevant to my motivations for using the fediverse.
A bit off-topic, but why would anyone want to keep their instance in line with local laws? Aren't internet sites operating under the jurisdiction of wherever they're hosted? Or is it just a coincidence that those people decided to host their stuff at datacenters in their local proximity? When I'm choosing hosting, the first thing I think about is: "hmm, I shouldn't host in the country where I live, because I don't ever want to have any problem with local authorities; and if I host elsewhere, the authorities there won't be able to reach me physically, so the worst that could happen is the site gets shut down".
I also asked ChatGPT itself, and it listed a number of approaches. One that sounded good to me is to pin layers to GPUs. For example, say we have 500 GPUs: cards 1-100 have layers 1-30 of the model permanently loaded, cards 101-200 have layers 31-60, and so on. This way there's no need to repeatedly load the huge matrices themselves, since they stay on the GPUs permanently; you basically just pipeline the user's prompt through the appropriate sequence of GPUs.
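A toy sketch of that layer-pinning idea, with made-up numbers (a 12-layer "model" split across 4 "GPUs"; real systems call this pipeline parallelism). Each GPU keeps its slice of layers resident, and only the small activation travels between them:

```python
# Hypothetical sizes for illustration only.
NUM_LAYERS = 12
NUM_GPUS = 4
LAYERS_PER_GPU = NUM_LAYERS // NUM_GPUS

# Each "GPU" permanently holds a contiguous slice of layers.
# Here a layer is just a stand-in function (adds 1 to the activation);
# in reality each would be a big matrix that never leaves its device.
gpus = [
    [(lambda x: x + 1) for _ in range(LAYERS_PER_GPU)]
    for _ in range(NUM_GPUS)
]

def forward(activation):
    """Route the activation through GPU 0, then 1, ... in order.
    Only this small value moves between devices; the heavy weights stay put."""
    for gpu_layers in gpus:
        for layer in gpu_layers:
            activation = layer(activation)
    return activation

print(forward(0))  # 12 layers, each adds 1 -> 12
```

While GPU group 2 works on one user's token, group 1 can already start the next user's token, which is how the hardware stays busy for many concurrent users.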
So do they load all those matrices (totalling 175B parameters in this case) onto the available GPUs for every token of every user?
That's how LLMs work. When they say 175 billion parameters, it means at least that many calculations per token it generates.
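A rough back-of-the-envelope version of that claim: for a dense model, each weight participates in about one multiply and one add per generated token, so compute is roughly 2 × parameter count. The 100 TFLOP/s throughput figure below is a hypothetical round number, not any specific GPU:

```python
# Rough estimate: ~2 floating-point ops (multiply + add) per weight per token.
params = 175e9  # 175B parameters

flops_per_token = 2 * params
print(f"{flops_per_token:.2e} FLOPs per token")  # 3.50e+11

# Assuming a GPU (or pipeline stage group) sustaining 100 TFLOP/s:
gpu_flops_per_sec = 100e12
ms_per_token = flops_per_token / gpu_flops_per_sec * 1000
print(f"{ms_per_token:.1f} ms of pure matmul compute per token")  # 3.5 ms
```

That's why a single pipeline can serve hundreds of tokens per second, and why batching many users' tokens together makes concurrent use feasible.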
I don't get it: how is it possible that so many people all over the world use this concurrently, doing all kinds of lengthy chats, problem solving, code generation, image generation, and so on?
Should be “starting your own instance”, because otherwise you still have to conform to the rules of the instance you create your community/sub on.
It is what it feels like, but it's not really 100% this way (yet). It's a self-reinforcing cognitive bias: we think "forums are dead, that's why we stick to the sitename" instead of actually finding the dozens of still-alive forums and going there; in turn, the sitename gets more populated while forums feel even more dead. But there are still plenty alive. Also, there are relatively new kinds of forums which sometimes work very well for their niche, like Discourse communities for example.
After you use ChatGPT for a bit, you start recognizing its style of writing in posts and comments. I've seen dozens of obviously ChatGPT-generated posts and replies on Reddit and Lemmy. Usually there will already be someone who replied with something like "Thanks, ChatGPT", because it is that obvious. This only happens with naive prompts, though: if you ask ChatGPT to present its answer in a different style (for example, mimicking some famous writer, or being cheerful/angry/excited and avoiding overly safe language), it will immediately start writing differently, and there's likely no limit to the variety of writing styles you can pull out of it just by asking it to write this or that way.
Yeah, got a bit carried away yesterday. Ultimately, there can’t be right or wrong, since the whole discussion is simply about individual understanding or even preferences for the term “slop”, nothing more. Some people will try to be as empirical as possible and choose a meaning based on how they’ve seen it being used in the wild, others will try to push the meaning they want for the term, it’s all good and subjective.
This is where the fundamental difference in the connotations we attribute lies. From what you say, you perceive the term "slop" as a direct synonym of "low quality", without any extras. I perceive it as something closer to a synonym of "repetitive", but with extra connotations, the most accurate common denominator of which is "repetitive content produced at speeds suggesting low effort".
I understand your point of view. But would you call something complex, high-quality, but repetitive "slop"? And the same question, but where the person who produces it does it extremely fast.
How would you use that term? Would you call "slop" something that was just mindlessly generated with AI in a single prompt, and non-"slop" something that uses AI in more sophisticated/deliberate ways? What is the threshold for something being "slop", ultimately? Is it just the result not looking decent enough, or the amount of effort, combined with the amount of knowledge and experience, that went into creating it? I'm personally conflicted on this, because sometimes even a mindless prompt may give a great result, and sometimes a lot of manual effort may give a shit result. I guess with "slop" I tend to gravitate towards "the amount of effort, knowledge, and experience that was used to create it", and perhaps also the amount of content a particular person produces and the speed of its production. So if someone is really good with some tools (not necessarily AI) and has figured out some overpowered shortcuts that let them produce results very fast with little effort, it can also be called "slop" just for the rate of production alone.
Never used Substack; can someone please explain how this is different from, let's say, Medium?
LM Studio looks cool, but I wonder: why isn't their GUI app open-source? Also, their site has a careers section; where do they get the money to operate like that? I couldn't find anything about their monetization model.