• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: April 3rd, 2024





  • Jesus_666@lemmy.world to Linux@lemmy.ml · Linux Users - Why?
    12 days ago

    I run Garuda because it’s a more convenient Arch with most relevant things preinstalled. I wanted a rolling release distro because in my experience traditional distros are stable until you have to do a version upgrade, at which point everything breaks and you’re better off just nuking the root partition and reinstalling from scratch. Rolling release distros have minor breakage all the time but don’t have those situations where you have to fix everything at the same time with a barely working emergency shell.

    The AUR is kinda nice as well. It certainly beats having to manually configure/make obscure software myself.

    For the desktop I use KDE. I like the traditional desktop approach and I like being able to customize my environment. Also, I disagree with just about every decision the Gnome team has made since GTK3 so sticking to Qt programs where possible suits me fine. I prefer Wayland over X11; it works perfectly fine for me and has shiny new features X11 will never have.

    I also have to admit I’m happy with systemd as an init system. I do have hangups over the massive scope creep of the project but the init component is pleasant to work with.
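    As an illustration of why the init side is pleasant (the unit name and script path here are made up for the example), an entire service definition is just a handful of declarative lines:

    ```ini
    # Hypothetical one-shot backup service; no PID files, no
    # daemonization boilerplate, dependencies stated declaratively.
    [Unit]
    Description=Example backup job
    After=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    [Install]
    WantedBy=multi-user.target
    ```

    Compare that to writing an equivalent SysV init script with its own start/stop/status argument handling.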

    Given that after a long spell of using almost exclusively Windows I came back to desktop Linux only after Windows 11 was announced, I’m quite happy with how well everything works. Sure, it’s not without issues but neither is Windows (or macOS for that matter).

    I also have Linux running on my home server but that’s just a fire-and-forget CoreNAS installation that I tell to self-update every couple months. It does what it has to with no hassle.


  • True. I never quite got the concept of “microaggressions” until feeling myself how modest disapproval can become a fucking burden if voiced regularly by someone you can’t avoid. You go from having interests that someone doesn’t happen to share to feeling that everything you care about is invalid and you’ve failed at life.

    It doesn’t help the discussion that a behavior can be perfectly fine normally but can be hurtful to specific people because of specific things that happened to them. So this is a nuanced problem, which already doesn’t bode well for reasonable public discourse. And then we have assholes who deliberately don’t want the discussion to happen because psychologically vulnerable people are a minority and minorities getting harmed is a desired outcome for them.



  • To quote that same document:

    Figure 5 looks at the average temperatures for different age groups. The distributions are in sync with Figure 4 showing a mostly flat failure rate at mid-range temperatures and a modest increase at the low end of the temperature distribution. What stands out are the 3 and 4-year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced.

    That’s what I referred to. I don’t see a total age distribution for their HDDs so I have no idea if they simply didn’t have many HDDs in the three-to-four-year range, which would explain how they didn’t see a correlation in the total population. However, they do show a correlation between high temperatures and AFR for drives after more than three years of usage.

    My best guess is that HDDs wear out slightly faster at temperatures above 35-40 °C, so if your HDD is going to die of an age-related problem it’s going to die a bit sooner if it’s hot. (Also notice that we’re talking average temperature, so the peak temperatures might have been much higher.)

    In a home server where the HDDs spend most of their time idling (probably even below Google’s “low” usage bracket) you probably won’t see a difference within the expected lifespan of the HDD. Still, a correlation does exist and it might be prudent to have some HDD cooling if temps exceed 40 °C regularly.
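    For reference, the AFR figure these reports use is just failures normalized per drive-year of operation; a quick sketch (the function name and the example numbers are mine, not from the study):

    ```python
    def annualized_failure_rate(failures, drive_days):
        """AFR as a percentage: failures per drive-year of operation."""
        drive_years = drive_days / 365.0
        return 100.0 * failures / drive_years

    # Hypothetical example: 15 failures across 1000 drives
    # that each ran for a full year.
    print(annualized_failure_rate(15, 1000 * 365))  # 1.5
    ```

    That normalization is what makes failure rates comparable between drive populations of different sizes and ages.
    
    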


  • Hard drives don’t really like high temperatures for extended periods of time. Google did some research on this way back when. Failure rates start going up at an average temperature of 35 °C and become significantly higher if the HDD is operated beyond 40 °C for much of its life. That’s HDD temperature, not ambient.

    The same applies to low temperatures. The ideal temperature range seems to be between 20 °C and 35 °C.

    Mind you, we’re talking “going from a 5% AFR to a 15% AFR for drives that saw constant heavy use in a datacenter for three years”. Your regular home server with a modest I/O load is probably going to see much less in terms of HDD wear. Still, heat amplifies that wear.

    I’m not too concerned myself despite the fact that my server’s HDD temps are all somewhere between 41 °C and 44 °C. At 30 °C ambient there’s not much better I can do and the HDDs spend most of their time idling anyway.


  • Honestly, I’m still very much in the “classes define what a tag represents, CSS defines how it looks” camp. While the old semantic web was never truly feasible, assigning semantic meaning to a page’s structure very much is. A well-designed layout won’t create too much trouble and allows for fairly easy consistency without constant repetition.

    Inline styles are essentially tag soup. They work like a print designer thinks: this element has a margin on the right. Why does it have that margin? Who cares, I just want a margin here. That’s acceptable if all you build are one-off pages but requires manual bookkeeping for sitewide consistency. It also bloats pages, and while I’m aware that modern web design assumes unmetered connections with infinite bandwidth and mobile devices with infinitely big batteries, I’m old-school enough to consider it rude to waste the user’s resources like that. I also consider it hard to maintain, so I’d only use it for throwaway pages that never need to be maintained.
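    A minimal sketch of the contrast (the class name is made up for the example):

    ```html
    <!-- Inline style: the "why" of the margin is lost -->
    <nav style="margin-right: 2rem">…</nav>

    <!-- Semantic class: the markup says what the element is,
         the stylesheet says how it looks -->
    <nav class="sidebar">…</nav>
    <style>
      .sidebar { margin-right: 2rem; } /* one place to change, sitewide */
    </style>
    ```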

    CSS frameworks are like inline styles but with the styles moved to classes and with some default styling provided. They’re not comically bad like inline styles but still not great. A class like gap-2 still carries no structural meaning, still doesn’t create a reusable component, and barely saves any bandwidth over inline CSS since it’s usually accompanied by several other classes. At least some frameworks can strip out unused framework code to help with the latter.
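    What that looks like in practice, using typical Tailwind-style utilities (gap-2 plus a few assumed companions):

    ```html
    <!-- The presentation is restated on every element; nothing here
         says what this box *is*, only how it's drawn -->
    <div class="flex flex-col gap-2 p-4 rounded border">…</div>
    ```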

    I don’t use SCSS much (most of its best functionality being covered by vanilla CSS these days) but it might actually be useful to bridge the gap between semantically useful CSS classes and prefabricated framework styles: Just fill your semantic classes entirely with @include statements. And even SCSS won’t be needed once native mixins are finished and reach mainstream adoption.
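    The bridging idea looks roughly like this (the mixin and class names are hypothetical):

    ```scss
    // A framework-ish style packaged as a mixin instead of a
    // utility class scattered through the markup
    @mixin stack($gap: 1rem) {
      display: flex;
      flex-direction: column;
      gap: $gap;
    }

    // Semantic class: says what the element is, borrows how it looks
    .article-toc {
      @include stack(0.5rem);
    }
    ```

    Some frameworks already ship their internals as SCSS mixins, so this isn’t even exotic.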

    Note: All of this assumes static pages. JS-driven animations will usually need inline styles, of course.



  • I work for a publicly traded company.

    We couldn’t switch away from Microsoft if we wanted to because integrating everything with Azure and O365 is the cheapest solution in the short term, ergo has the best quarterly ROI.

    I don’t think the shareholders give a rat’s ass about data sovereignty if it means a lower profit forecast. It’d take legislative action for us to move away from an all-Azure stack.

    And yes, that sucks big time. If Microsoft stops playing nice with the EU we’re going to have to pivot most of our tech stack on a moment’s notice.