He may very well be the personality to cause the Epstein files to be released.
unwarlikeExtortion@lemmy.ml to World News@lemmy.world • Proposals for commercial planes to operate with one pilot shelved after critical EU report • English · 6 · 20 days ago

People flying planes?
First put Copilot as the copilot. Then yeet the pilot as well.
3 LLMs duking it out with people in the cargo hold of a winged tin cylinder seems like a genius idea.
unwarlikeExtortion@lemmy.ml to Mildly Infuriating@lemmy.world • The sheer amount of websites that are completely unusable without JavaScript • English · 32 · 22 days ago

JS is just a janky hotfix.
As it was, HTML was all sites had. When that was called “ugly”, CSS was invented for style and presentation. When the need for advanced interactivity arose (not doable on the Internet speeds of 20-30 years ago), someone just said “fuck it, do whatever you want” and added scripting to browsers.
The real solution came in the form of HTML5. You no longer needed, and I can’t stress this enough, Flash to play a video in-browser. For other things as well.
Well, HTML5 is over 15 years old by now. And maybe the time has come to bring in new functionality into either HTML, CSS or a new, third component of web sites (maybe even JS itself?)
Stuff like menus. There’s no need for them to be limited by the half-assed workaround known as CSS pseudoclasses, or for every website to have its own JS implementation.
Stuff like basic math. HTML has had forms since forever. Letting it do some more, like counting down, accessing its equivalent of the Date and Math classes, and tallying up a shopping cart on a webshop seems like a better fix than a bunch of frameworks.
Just make a standardized “framework” built directly into the browser - it’d speed up development, lower complexity, reduce bloat and increase performance. And that’s just the stuff off the top of my head.
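For scale, the kind of “basic math” meant here already takes only a few lines of plain JS with the built-in Math and Date objects, no framework involved. A minimal sketch (the item list, prices and sale date are made up for illustration):

```javascript
// Tally a shopping cart using only language built-ins - no framework needed.
const cart = [
  { name: "keyboard", price: 49.99, qty: 1 },
  { name: "mouse", price: 19.99, qty: 2 },
];

// Sum price * quantity, working in whole cents to avoid float drift.
const totalCents = cart.reduce(
  (sum, item) => sum + Math.round(item.price * 100) * item.qty,
  0
);

// A countdown to a (hypothetical) sale deadline using the Date class.
const saleEnds = new Date("2030-01-01T00:00:00Z");
const daysLeft = Math.max(0, Math.ceil((saleEnds - Date.now()) / 86_400_000));

console.log(`Total: $${(totalCents / 100).toFixed(2)}, sale ends in ${daysLeft} days`);
```

The argument above is that something this small could ship as declarative HTML semantics instead of each site rolling its own script for it.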
unwarlikeExtortion@lemmy.ml to Mildly Infuriating@lemmy.world • The sheer amount of websites that are completely unusable without JavaScript • English · 21 · 22 days ago

Source code is the code devs write.
For compiled languages like C, only the compiled machine code is made available to the user.
JS is interpreted, meaning it doesn’t get compiled ahead of time; an interpreter runs the source code directly at runtime.
Obfuscated code, while not technically unaltered source code, is still source code of a sort. Key word being unaltered: strictly speaking it isn’t straight from the source, by virtue of having been altered.
However, obfuscated code is basically source code. The only things to obfuscate are variable and function names, and perhaps some pre-run order-of-operations optimizations. The core syntax and structure of the program have to remain “visible”, because otherwise the interpreter couldn’t run the code.
Analyzing obfuscated code is much closer to analyzing source code than reverse-engineering compiled binaries.
It may not be human-readable, but other programs can still analyze it (as they can even compiled code) and, more importantly, alter it in a trivial manner, because it’s source code with the names censored out. That makes evaluating the code only a bit harder than if it were truly “closed-source”.
That’s why website source code is basically almost source-available.
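The point is easy to demonstrate: a minifier/obfuscator changes the names, but the structure and behavior stay fully intact. Both functions below are made up for illustration; the second mimics typical minifier output:

```javascript
// Readable "source" version.
function applyDiscount(price, percent) {
  return price - price * (percent / 100);
}

// Typical minifier output: names shortened, semantics untouched.
function a(b, c) { return b - b * (c / 100); }

// The two are behaviorally identical - which is why analyzing obfuscated
// JS is far closer to reading source than to reverse-engineering a binary.
console.log(applyDiscount(200, 15) === a(200, 15)); // true
```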
A reminder, in the past, large pages downloaded all stuff at once. In contrast, with dynamic imports the first load is much much faster. And that matters most. And any changes in dynamic content would just require the dynamic data to be downloaded.
Unfortunately, you’re very mistaken.
In the past, pages needed to download all the stuff they wanted to display to the user. Now, here’s the kicker: that hasn’t changed!
Pages today are loaded more dynamically and sensibly. First basic stuff (text), then styles, then scripts, then media.
However, it’s not Angular, React, Bootstrap or any other framework doing the fetching. It’s the browser. Frameworks don’t change that. What they do, instead, is add additional megabytes of (mostly) bloat to download every day or week (depending on the cache timeout).
Any web page gets HTML loaded first, since the dawn of the Web. That’s the page itself. Even IE did that. At first, browsers loaded sequentially, but then they figured out it’s better UX to load CSS first, then the rest. Media probably takes precedence over frameworks as well (because that’s what the user actually sees).
Browsers are smart enough to cache images themselves. No framework can do it even if it wanted to because of sandboxing. It’s the browser’s job.
What frameworks do is make devs’ lives easier. At the cost of performance for the user.
That cost is multiple-fold: first the framework has to load. In order to do that, it takes bandwidth, which may or may not be a steeply-priced commodity depending on your ISP contract. Loading also takes time, i.e. waiting, i.e. bad UX.
Other than that, the framework needs to run. That uses CPU cycles, which wastes power and lowers battery life. It’s also less efficient than letting the browser do the same work natively, since it adds an extra layer of abstraction.
With phones being as trigger-happy about killing “unused” apps, all the frameworks in use by various websites need to spin up from being killed as often as every few minutes. A less extreme amount of “rebooting” the framework happens when low-powered PCs run out of RAM and a frameworked site is chosen by the browser to be “frozen”.
What a framework does is, basically, fill a hole in HTML and CSS - it adds functionality needed for a website which is otherwise unattainable. Stuff like cart, checkout, some complex display styles, etc.
All of this stuff is fully doable server-side. Mind you, login is so doable it didn’t even slip onto my little list. It’s just simpler for the programmer to do it all client-side (as opposed to making forms and HTML requests that much more often), together with the tiny UX addition of not needing to wait for the back-and-forth to finish.
Which itself isn’t really a problem. In fact, the “white flashes” are more common on framework sites than not.
When a browser loads any site, it loads HTML first. That’s “the site”. The rest is just icing on the cake. First is CSS, then media and JS (these two are heavily browser-dependent as far as load priority goes).
Now comes the difference between “classic”, “js-enhanced” and “fully js-based” sites.
A classic site loads fast. First HTML. The browser fetches the CSS soon enough, not even bothering to show the “raw HTML” for a few hundred milliseconds if the CSS loads fast enough. So the user doesn’t even see the “white flash” most of the time, since networks today are fast enough.
As the user moves through different pages of the site, the CSS is already cached - any HTML page wishing to use the same CSS won’t even need to wait for it to load again!
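The browser’s decision here boils down to HTTP cache freshness: a cached response is reused as long as its age is under the server’s declared max-age. A simplified sketch of that check (real browsers also handle ETag revalidation, `Vary`, heuristics, etc.):

```javascript
// Simplified HTTP freshness check: a cached resource is "fresh" while
// its age (seconds since it was fetched) is below Cache-Control: max-age.
function isFresh(fetchedAtMs, maxAgeSeconds, nowMs = Date.now()) {
  const ageSeconds = (nowMs - fetchedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

// A stylesheet cached 1 hour ago with max-age=86400 (one day) is
// served straight from cache - no network round trip at all.
const oneHourAgo = Date.now() - 3600 * 1000;
console.log(isFresh(oneHourAgo, 86400)); // true
```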
Then there’s the js-enhanced site. It’s like the classic site, but with some fancy code to make it potentially infinitely more powerful. Stuff like responsive UIs and the ability to do fancy math one would expect of a traditional desktop/native app. Having JS saves a round trip to the server for every little thing needing some computation, when the browser can do it itself. It’s actually a privacy benefit, since a lot less data needs to leave the user’s device. It can even amend its HTML, its internal structure and its backbone to suit its needs. That’s how powerful JS is.
But, as they say, with great power comes great responsibility. Enter the frameworked-to-hell site. Initially, its HTML is pretty much empty. It’s less like ordering a car and more like building a house. When you “buy the car” (visit the site), it has to get made right in front of your eyes. Fun the first few times, but otherwise very impractical.
A frameworked site also loads slower by default - the browser gets HTML first, then CSS. Since there’s no media there yet, it goes for the JS. Hell, some even leave CSS out of the empty shell of the page when you first enter, so you really get blasted by the browser’s default (usually white, although nowadays theme-based) stylesheet. Only once the JS loads the framework can the foundation of the site (HTML) start being built.
Once that’s been built, it has CSS, and you no longer see the white sea of nothing.
As you move through pages of the site, each is being built in-browser, on-demand. Imagine the car turning into a funhouse where whenever you enter a new room, the bell rings. An employee has to hear it and react quickly! They have to bring the Build-A-Room kit quickly and deploy it, lest you leave before that happens!
Not only is that slow and asinine, it’s just plain inefficient. There’s no need for it in 99% of cases. It slows stuff down and needlessly wastes bandwidth, data and energy.
There’s another aspect to frameworked sites’ inefficiency I’d like to touch on.
It’s the fact that they’re less “dynamic” and more “quicksand”.
They change. A lot. Frameworks get updates, and using multiple isn’t even unheard of. Devs push updates left and right, which are expected to be visible and deployed faster than the D-Day landings.
Which in practice means that max resource age is set very low. Days, maybe even hours. Which means, instead of having the huge (15 MB on average) framework fetched once a week or month, it’s more like four to dozens of times per week. Multiply by each site’s preferred framework and version, and add their own custom code, which also takes up some (albeit usually less-than-framework) space.
That can easily cross into gigabytes a month. Gigabytes wasted.
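The back-of-the-envelope math holds up. Taking the 15 MB figure from above and assuming a handful of cache-expiry re-downloads per week across a few sites (every input here is an illustrative assumption, not a measurement):

```javascript
// Rough monthly bandwidth cost of short framework cache lifetimes.
// All inputs are illustrative assumptions, not measurements.
const bundleMB = 15;        // average framework + app bundle size
const refetchesPerWeek = 6; // low max-age forces near-daily re-downloads
const sites = 5;            // distinct sites, each with its own bundle
const weeksPerMonth = 4;

const wastedMB = bundleMB * refetchesPerWeek * sites * weeksPerMonth;
console.log(`~${(wastedMB / 1024).toFixed(1)} GB/month re-downloaded`); // ~1.8 GB
```

Nudge any of those inputs up (more sites, bigger bundles, shorter max-age) and the total crosses into multiple gigabytes easily.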
Sure, in today’s 4K HDR multimedia days that’s a few minutes of video, but it isn’t 0 minutes of nothing.
My phone also reliably lasts a day without charge. It’s not about my battery being bad, but about power being wasted. Do you think it’s normal that, checking battery use, Chrome accounted for 64% according to the Android settings?
You bet I tried out Firefox the very same day. Googling for some optimizations led me down a privacy rabbit-hole. Today I use Firefox, and browser battery use fell from 64% to 24%. A 40-percentage-point decrease! I still can’t believe it myself!
I admit, I tend to use my phone less and less so my current 24% may not be the best metric, but even before when I did, the average was somewhere between 25% and 30%.
There’s a middle-ground in all of this.
Where the Web is today is anything but.
The old days, while not as golden as they might seem to me, are also not as brown as you paint them out to be.
unwarlikeExtortion@lemmy.ml to Mildly Infuriating@lemmy.world • The sheer amount of websites that are completely unusable without JavaScript • English · 101 · 22 days ago

As a web dev, and primarily a user, I like my phone having some juice left in it.
The largest battery hog on my phone is the browser. I can’t help but wonder why.
I’d much rather wait a second or two rather than have my phone initialize some js framework 50 times per day.
Dynamic HTML can be done - and is - server-side. Of course, not using a framework is harder, and all the current ones are client-side.
Saying that making unbloated pages is impossible to do right just makes it seem like you’re ill-informed.
On that note - “closed-source” JS doesn’t really exist (at least client-side). All JS is source-available in-browser; some may obfuscate, but it isn’t a privacy concern.
The problem is that my phone does something it doesn’t have to.
Having my phone fetch potentially 50 MB (usually 5-15) for each new website is a battery hog. And on a slow connection - to quote your words, “great UX”.
The alternative is a few KB for the HTML, CSS and a small amount of tailor-made JS.
A few KB which load a hundred times faster and don’t waste exorbitant amounts of computing power - while in essence losing nothing over your alternative.
“Old pages with minimal style” is a non-sequitur. Need I remind you, CSS is a thing. In fact, it may be more reliable than JS: since it isn’t Turing-complete, it’s much simpler for browser interpreters not to fuck it up. It’s also not nearly the vulnerability vector JS is.
And your message for me and people like me, wanting websites not to outsource their power-hogging frameworks to my poor phone?
Go build your own browser.
What a joke.
unwarlikeExtortion@lemmy.ml to Mildly Infuriating@lemmy.world • The sheer amount of websites that are completely unusable without JavaScript • English · 5 · 22 days ago

You can’t modify the DOM.
But some (most?) dynamicity can stay - sites can be built freely server-side, and even some “dynamic” functionality like menus can be made using CSS pseudoclasses. Sure, you won’t have a Google Docs or Gmail webapp, but 90% of stuff doesn’t actually need one.
A basic website doesn’t require js.
A webshop, for example, does for the part around adding to cart and checkout - but it doesn’t for merely browsing.
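That split maps cleanly onto progressive enhancement: the cart logic itself can be a pure function that runs server-side behind a plain HTML form, with client-side JS as an optional speed-up. A sketch of that shared logic (function names and fields are hypothetical):

```javascript
// Pure cart logic - runs identically server-side (handling a form POST)
// or client-side (as an optional JS enhancement). Names are hypothetical.
function addToCart(cart, item) {
  const existing = cart.find((line) => line.sku === item.sku);
  if (existing) {
    // Same product again: bump the quantity instead of duplicating the line.
    return cart.map((line) =>
      line.sku === item.sku ? { ...line, qty: line.qty + item.qty } : line
    );
  }
  return [...cart, item];
}

let cart = [];
cart = addToCart(cart, { sku: "mug-01", price: 9.99, qty: 1 });
cart = addToCart(cart, { sku: "mug-01", price: 9.99, qty: 2 });
console.log(cart.length, cart[0].qty); // 1 3
```

With the logic kept pure like this, the no-JS path (full form round trip) and the JS path (in-page update) can’t drift apart, and browsing needs no script at all.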
unwarlikeExtortion@lemmy.ml to Technology@lemmy.world • Yes, you can store data on a bird — enthusiast converts PNG to bird-shaped waveform, teaches young starling to recall file at up to 2MB/s • English · 14 · 22 days ago

Last time I checked, the cloaca was just the back. It is the everything door, though.
unwarlikeExtortion@lemmy.ml to Asklemmy@lemmy.ml • Why does calorie counting work for weight loss and why are calories interchangeable? · 26 · 25 days ago

Calories are interchangeable like this precisely because a calorie is a unit of energy.
This “energy” we speak of is stored as chemical potential energy of molecules.
When the human body digests food, it breaks down molecules to build new ones through chemical reactions. Some such reactions release energy, while others require outside energy to happen. Some molecules are, likewise, good stores of energy for the body because they take part in reactions that release energy.
But, at the end of the day, energy is energy. Another type of chemical reaction that releases energy is burning. It just so happens to be much faster and easier to create and control than the work a digestive tract does.
The only difference is that burning converts things into a slightly different set of molecules than digestion would (with burning releasing all the energy and digestion leaving some untapped), so energy released by burning isn’t 100% on par with the energy extractable by a human digesting the same food.
That being said, the difference between the “theoretical” energy (burning) and usable energy (digestion) isn’t too important. You may put in the 1500 calories on the label, but you won’t utilize all of them. However, since whenever energy is measured, it’s measured by burning, we stay consistent. We may not be 100% precise, but we’re at least consistently wrong. And the amount of unavailable energy is incredibly small - humans are actually more efficient than machines from an “energy efficiency” standpoint. Given that each person has a different metabolism (and metabolism changes regularly throughout the day, the year and with age), trying to be 100% precise doesn’t make sense anyway, since your values for today will be different from your values for tomorrow.
About losing weight: weight is gained when more energy is taken in than used, and lost when more is used than taken in.
Since a human uses about 2000 calories a day, 1500 is often suggested as a middle ground between starving and not losing weight at all.
It really doesn’t matter where the calories come from, because the only important thing for tracking weight is net energy, gained or lost. 100 calories “trapped” in sugar is the same as 100 calories “trapped” in fat. With the human body being as efficient at sucking energy out of stuff, the only real difference is in how long the process takes - energy in sugars is practically instantly available, while energy in protein takes some time to be extracted.
A net gain or loss of 200 calories is the same whether it’s through sugars or proteins. For the body, it’s all the same: if it has a surplus of energy, it’ll store it (and you’ll have a net weight gain). If it has a deficit, you’ll lose weight, as the missing energy is drawn from your body’s reserves.
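As a worked example of the net-energy point, using the common rule of thumb that roughly 7,700 kcal corresponds to 1 kg of body fat (itself only an approximation):

```javascript
// Net energy balance is all that matters for weight change.
// ~7700 kcal per kg of body fat is a common rule of thumb, not an exact law.
const KCAL_PER_KG = 7700;

function weightChangeKg(intakeKcal, burnedKcal, days) {
  const netPerDay = intakeKcal - burnedKcal; // negative = deficit
  return (netPerDay * days) / KCAL_PER_KG;
}

// Eating 1500 kcal/day while burning 2000 kcal/day:
// a 500 kcal daily deficit loses about 2 kg over a month.
console.log(weightChangeKg(1500, 2000, 31).toFixed(2)); // -2.01
```

Whether those 1500 kcal came from sugar, fat or protein doesn’t change the arithmetic - only the sign and size of the net balance matter.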
unwarlikeExtortion@lemmy.ml to Technology@lemmy.ml • Authors celebrate “historic” settlement coming soon in Anthropic class action · 101 · 25 days ago

I don’t think individuals should have to pay - even with their private data
Agree.
[…] and that means companies shouldn’t either.
Disagree.
When a person pirates, they usually do it for a) themselves, b) their family or c) a close friend. Some might share on a larger basis.
And other than that, they also usually use it for a) educational or b) entertainment purposes.
For companies, it’s almost always on a larger basis and for commercial purposes.
As most licences and contracts differentiate the two uses, so should the law.
The fact that I can download a book online and read it (sneakily, and technically illegally) doesn’t mean that if I became an AI LLC I could download it, along with thousands of others, to then sell as my AI’s “knowledge”.
Making that an AI’s knowledge is “storing in a retrieval system”, and commercial use isn’t a fair use criterion.
The true problem with (common law) copyright is the fact that it can be bought and sold. Or rather, the author doesn’t own it - the publisher does. Which goes against the initial idea of the author getting dividends from their works.
unwarlikeExtortion@lemmy.ml to United States | News & Politics@lemmy.ml • McDonald's sales are slumping because people can't afford fast-food · 2 · 1 month ago

The Free Market at work. If people can’t afford to pay for a burger at McD’s, they won’t - even if they want to.
For the economy to work properly, everyone has to be at least somewhat well-off - not living paycheck-to-paycheck without a dollar to spare for a start.
unwarlikeExtortion@lemmy.ml to Programmer Humor@lemmy.ml • Interviews as seen by HR and the candidate · 5 · 4 months ago

Paid lunches are nice. But if I get the choice between $10,000 more yearly or paid lunches, obviously I’d go for the cash. It’s supposed to be a bonus (i.e. free), not a way to cut corners and undermine your employees.
Maybe it does do the company some good in terms of retention, but counting on “I’ll save $6k if I spend $4k on lunches per person on average by cutting pay for new hires” is not a good strategy. Same for ping pong tables, horseraces, pizza parties and whatever else.
An interesting way to misspell “subscription”
You have free will, but you also have chains that bind you.
Starting from the social order, you need money and other social relations (friends, family, bosses) to literally survive in the modern world - you’re not omnipotent.
Then you have the cognitive chains - stuff you know and understand, as well as stuff you can invent (or reinvent) from your current knowledge - you are not omnipresent.
Then, as a consequence, without these two, you cannot be (omni)benevolent - you’ll always fuck something up (and even if you didn’t, most actions positive towards something will have a negative impact towards something else).
All these are pretty much categorically impossible to exist - you’re not some god-damn deity.
But does this mean free will doesn’t exist?
Hardly. It’s just not as ultimate a power or virtue as some may put it. Flies or pigs also have free will - they’re free to roll in mud or lick a turd - except for when they’re not because they do it to survive (cool themselves or eat respectively).
We humans similarly eat and shit, and we go to work so we have something to eat and someplace to shit. Otherwise you die without the former or get fined without the latter.
So that’s what free will is - the ability of an organism to guide what it’s doing, how, when (and, to some extent, even why) it’s doing it, according to its senses and sensibilities. It’s the process with which we put our own, unique spin on the things in our lives.
Being omnipotent, omnipresent and (omni)benevolent would in fact remove the essence of what free will (with all its limits) is, because our actions wouldn’t have any meaningful consequences. It’d all just be an effective (what I’ll call negative) chaos - a mishmash of everything only understandable to the deity.
So in fact, the essence of “free” will is that it’s free within some bounds - some we’ve set ourselves, some we’re forced into (disabilities, cognitive abilities, physical limits, etc.). Precisely in the alternative scenario would “free” will cease to be free - because someone already knows it all - past, present and future, local and global, from each atom on up. There’s perfect causality - as perfect as a movie. You can’t change it meaningfully - any changes become a remix or remaster - they lose their originality.
The limits on our thinking, which make us less-than-perfect, cause a kind of positive chaos, one where we try to do our best with what we have at our disposal - as they say, you get to know people best at their lowest. Similarly, everyone gets corrupted at a high enough power level - some just do it sooner than others. So surely, at an infinite power level, not even someone omnipotent, omnipresent and (omni)benevolent all at once would be able to curb this flaw.
unwarlikeExtortion@lemmy.ml to Technology@lemmy.ml • Microsoft getting nervous about Europe's tech independence · 5 · 5 months ago

The EU is not an alliance, since member states give up a good portion of their sovereignty to the bloc. It’s much closer to a “loosely bound US” than a “NATO on steroids”.
unwarlikeExtortion@lemmy.ml to Technology@lemmy.world • Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch • English · 4 · 5 months ago

“You are right, I made a mistake. Here is a better answer.” Continues to give wrong answers.
To be fair, the AI’s not wrong. It’s probably better, but just a teeny tiny bit so.
Honestly, AI is like a genie - whatever you come up with he’ll just butcher and misinterpret so you start questioning both your own sanity and the semantics of language. Good thing these genies have no wish limit, but bad thing that they murder rainforests while generating their non-sequitur replies.
unwarlikeExtortion@lemmy.ml to Technology@lemmy.world • Google’s dominance on search is declining – for the first time ever! • English · 13 · 5 months ago

the page will immediately get blocked because it tried to load a result from Reddit or coursehero or something
Does that mean any search (AI insight notwithstanding) will get blocked if it includes a result from Reddit, Coursera or anything else on the blocklist at all?
Because if yes, that’s much more than just asinine. It’s basically blocking entire search topics due to the sheer fact that Reddit will appear on the first page of Google a lot.
unwarlikeExtortion@lemmy.ml to Asklemmy@lemmy.ml • Whats getting on your nerves at the moment? · 1 · 5 months ago

Tell grandma with Parkinson’s to “adapt”. While not as ubiquitous a disability as daltonism or blindness, interfaces should still cater to people with them.
When it’s kids adapting it’s fun. When it’s someone with tremors physically incapable of gently and precisely tapping the exact 5px, it’s just bad design.
I’ve yet to see an accessibility setting for this very valid usecase.
unwarlikeExtortion@lemmy.ml to Asklemmy@lemmy.ml • "1.32 MB" Is that pronounced, "one-point-three-two" megabytes, or "one-point-thirty-two" megabytes? · 41 · 5 months ago

I’d say one point thirty-two. As others noted, much depends on geography.
Personally, I say the “actual” number up to 3 or 4 decimal places, with a lot of the decision depending on the specific context. If I had to assess, I’d say I say the “whole” number in over 50% of cases for 3 digits, and in about 10% for 4 digits. Anything over 4 decimal places and I fall back to individual digits.
It is a bit confusing.
Minecraft is a digital simulation of a physical (albeit blocky) world.
If we treat Minecraft as a physical world (a simulated one, but that’s beside the point), we can claim that it’s a (simulated) physical simulation.