Counterpoint - yes.
Agreed. I’m in my 40s, and I’ve never seen anywhere near the level of subsurface signaling and intentional complacency we’re experiencing now.
Well, terrorists became boring, and they still want the loony wing of the GOP’s clicks, so best to back off on Nazis and pro-Russians, leaving pedophiles as the safest bet.
At first glance, I figured JXL was just another attempt at JPEG2000 by a few bitter devs, so I ignored it.
Yeah, my examples/description were meant to be conceptual, for folks who may not have dealt with the nitty-gritty. Just mental exercises. I’ve only done a small bit of image analysis, so I have a general understanding of what’s possible, but I’m sure there are folks here (like you) that can waaay outclass me on the details.
These intermediate-to-deep dives are very interesting. Not usually my cup of tea, but this does seem big. Thanks for the info.
(fair warning - I go a little overboard on the examples. Sorry for the length.)
No idea on the details, but apparently it’s more efficient for multithreaded reading/writing.
I’d guess you could have a few threads reading the file data into memory at once: one CPU core reads the first 50% of the file while a second reads the second 50% (I’m sure it’s not actually like that, but it works as a general example - see the sketch below). Image compression usually works by some form of averaging over an area, so figuring out how to chop the image up so those patches can load cleanly, without data from the adjoining patches, is probably tricky.
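No idea if this resembles what libjxl actually does internally (it almost certainly doesn’t), but here’s a toy Python sketch of the “two readers, two halves” idea, just to make it concrete:

```python
# Toy sketch only - real decoders split on encoded groups/tiles,
# not raw byte halves, but the parallel-I/O idea is similar.
import os
import threading

def read_chunk(path, offset, size, parts, idx):
    with open(path, "rb") as f:  # each thread gets its own file handle
        f.seek(offset)
        parts[idx] = f.read(size)

def parallel_read(path):
    total = os.path.getsize(path)
    half = total // 2
    parts = [None, None]
    threads = [
        threading.Thread(target=read_chunk, args=(path, 0, half, parts, 0)),
        threading.Thread(target=read_chunk, args=(path, half, total - half, parts, 1)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return parts[0] + parts[1]
```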
I found this semi-visual explanation with a quick google. The image in 3.4 is kinda what I’m talking about. In the end you need equally sized pixels, but during compression, you’re kinda stretching out the values and/or mapping of values to pixels.
Not an actual example, but highlights some of the problems when trying to do simultaneous operations…
Instead of pixels 1, 2, 3, 4 being colors 1.1, 1.2, 1.3, 1.4, you apply a function that assigns the colors 1.1, 1.25, 1.25, 1.4. Now you only need to store the values 1.1, 1.25, 1.4 (along with location) - a 25% reduction in color data. But if you wanted to cut that sequence in half for 2 CPUs with separate memory blocks to read at once, you’d lose some of that optimization: CPU1 and CPU2 both need color 1.25, so it gets duplicated.

Not a big deal in this example, but these bundles of values can span many pixels and intersect with other bundles (like color channels - blue might be most efficiently read in 3-pixel-wide chunks, green in 2-pixel chunks, and red in 10-pixel chunks). Now where do you chop those pixels up for the two CPUs? We can use our “average the 2 middle values in 4-pixel blocks” approach, but then we’re leaving a lot of performance on the table with empty or useless values. So instead, we can treat each of those basic color values as independent layers.
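Here’s that toy example as actual code - just run-length-style dedup, nothing like a real codec’s transforms:

```python
pixels = [1.1, 1.25, 1.25, 1.4]

# One reader: collapse repeated neighbors -> 3 stored values instead of 4
stored = [pixels[0]]
for v in pixels[1:]:
    if v != stored[-1]:
        stored.append(v)
print(stored)           # [1.1, 1.25, 1.4]

# Two readers, each handed half the pixels: the shared 1.25 is duplicated
left, right = pixels[:2], pixels[2:]
print(left, right)      # [1.1, 1.25] [1.25, 1.4] - back to 4 values total
```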
But, now that we don’t care how they line up, how do we display a partially downloaded image? The easiest way is to not show anything until the full image is loaded. Nothing nothing nothing Tada!
Or we can wait at the end of every horizontal line for the values to fill in, display that line, then start processing the next. That’s the old “picture slowly loading in one line at a time” cliche. Makes sense from a human-interpretation perspective.
But, what if we take 2D chunks and progressively fill in sub-chunks? If every pixel is a different color, it doesn’t help, but what about a landscape photo?
First values in the file: top half is blue, bottom green. 2 operations and you can display that. The next values divide each half in half again. If it’s a perfect blue sky (ignoring the horizon line), the top is done and the user can see the result immediately. The bottom half will have its values refined as more data is read, and after a few cycles the user will be able to see that there’s a (currently pixelated) stream right up the middle, some brownish plants on the right, etc. That’s the “image loads in blurry and comes into focus” cliche.
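A toy version of that “refine in 2D chunks” idea - each pass averages over smaller and smaller regions, so a flat sky looks finished almost immediately (this is just a sketch, not JXL’s actual transform machinery):

```python
import numpy as np

def preview(image, passes):
    """Approximate the image with 2**passes regions per axis, each set to its mean."""
    h, w = image.shape
    n = 2 ** passes
    out = np.empty_like(image)
    for i in range(n):
        for j in range(n):
            ys = slice(i * h // n, (i + 1) * h // n)
            xs = slice(j * w // n, (j + 1) * w // n)
            out[ys, xs] = image[ys, xs].mean()
    return out

# "Landscape": bright sky over darker ground; exact after a single refinement
sky = np.vstack([np.full((4, 8), 200), np.full((4, 8), 80)])
for p in range(3):
    print(f"pass {p}:\n{preview(sky, p)}")
```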
All that is to say: if we can do that 2D chunk method for an 8k image, maybe we don’t need to wait until the full 8k resolution is loaded if we need smaller images for a set. Maybe we can stop reading the file once we have a 1024x1024 pixel grid. We can have 1 high-res image of a stoplight but treat it as any resolution less than the native one, thanks to the progressive loading.
So, like I said, this is a general example of the types of conditions and compromises. In reality, almost no one deals with the files on this level. A few smart folks write libraries to handle the basic functions and everyone else just calls those libraries in their paint, or whatever, program.
Oh, that was long. Um, sorry? haha. Hope that made sense!
Oh, I’ve just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I read, it sounds like the files are smaller at the same quality, require the same or less processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (with less of a gap between the two), and support progressive loading. I’m kinda waiting for the catch, but haven’t seen any major downsides, besides less optimal performance for very low resolution images.
I don’t know how they ingest the image data, but I would assume they’d be constantly building sets, rather than keeping lots of subsets, if just for the space savings of de-duplication.
(I kinda ramble below, but you’ll get the idea.)
Mixing and matching the speed/efficiency and storage improvements could mean a whole bunch of wins. I/O is always an annoyance in any large-set analysis. With JPEG XL, there’s less storage needed (duh), more images in RAM at once, faster transfer to and from disk, fewer cycles wasted waiting on I/O in general, the ability to store more intermediate datasets and more descriptive models, easier archiving of the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Accessing the I/O bus must be one of the bigger time sinks, and CPU load becomes a concern just for moving data around.
I also wonder if the support for progressive loading might be useful for more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller data sets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on the model from those 8k res images. I wonder if it’s possible to use the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling step.
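For what it’s worth, plain JPEG already allows something in this spirit via Pillow’s draft mode, which asks the decoder to stop at a coarser DCT scale instead of decoding full-res and downsampling. Whether JXL libraries expose an equivalent hook, I don’t know (the filename below is just a placeholder):

```python
from PIL import Image

img = Image.open("huge_photo.jpg")  # placeholder filename
img.draft("RGB", (1024, 1024))      # ask the decoder to stop at a coarser scale
img.load()                          # decode happens here, at the reduced scale
print(img.size)                     # roughly 1024 on the long edge, not full res
```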
Any time I see a big feature jump, like better file size, I assume the trade-off in another feature negates at least half the benefit. It’s pretty rare, from what I’ve seen, to have improvements on all fronts.
Even better, this must be fantastic when you’re training AI models with millions of images. The compression level AND performance should be a game changer.
Ah, yes, you’re right! Thanks for that.
smh
That’s fucking tragic. Makes me want to whip out the ole Hacker Manifesto.
Kids will never again know the fun of dealing with long distance calling plans and the barely usable international calling that used to cost half your rent for a 15 minute conversation.
Probably based on the Cap’n Crunch whistle pay phone hack.
Someone correct me if I’ve missed a few bits, but here’s the story…
First, a little history.
Payphones were common. If you’re younger, you’ve probably seen them in movies. To operate one, you picked up the handset, listened for the dial tone (to make sure no one had yanked the cord loose), inserted the amount shown by the coin slot, and then dialed. You had a limited amount of time before an automatic message would ask you to add more money. If you dialed a long distance number, a message would play telling you how much more you needed to insert.
There were no digital controls to this - no modern networking. The primitive “computers” were more like equipment you’d see in a science class. So, to deal with the transaction details, the coin slot mechanism would detect the type of coin inserted, mute the microphone on the handset, and transmit a series of tones. Just voltage spikes. The muting prevented the background noise from interfering with the signal detection. Drop a quarter in the slot and you’d hear the background noise suddenly disappear followed by some tapping sounds (this was just bleed through).
It’s also relevant to know that cereals used to include a cheap, little toy inside. At one point, Cap’n Crunch had a whistle which had a pitch of 2600Hz.
The story goes that someone* figured out that the tones sent by the payphones were at 2600Hz - the same as the whistle. You could pick up a payphone handset and puff into the whistle a certain number of times, and it would be detected as control signals (inserting money).
That’s right! Free phone calls to anywhere. I’m hazy on the specifics, but I’m pretty sure there were other tricks you could do, like directly calling restricted technician numbers, too. The reason the 2600Hz tone was special is that the long-distance trunks used it in-band as an “idle line” signal, so sending it made the system think the call had ended (and stop billing) while the line actually stayed open.
It knocked the idea of phone hacking, or “phreaking”, from a little-known quirk to an entire movement. Some of the stuff was wild, and if you’re interested, look up the different “boxes” people distributed blueprints for. Eventually, the phone companies caught on and started making it harder to get at the wiring, along with building more sophisticated coin mechanisms.
If you’ve ever seen the magazine 2600 back in the 90s and early 00s, that’s the origin of the name.
All that is to say, if you knew nothing about technology and watched a guy whistle into a phone to get special access, you’d probably be freaked out. Who knows what that maniac could do with a flute!
True. They created their own problem by trying to one-up each other’s lumen claims over and over, to the point where decent flashlights are claimed to have 5.6 million lumens and include 25000mAh 18650s.
Most of the $5+ flashlights are probably fine for most people’s needs. I have several and they’ve been fine for me. Different models, similar modes, similar brightness, and all fine for walking the dog or if the power goes out. Now, if I were relying on them for survival, I might think twice. All have held up fine, including the 12 year old one from dealextreme (pre-alibaba). But, since I don’t know if people are asking for recommendations where spec accuracy matters, I’m hesitant to recommend them to random people on the internet.
(I had to check, just for fun, and there are 18650 batteries listed at 19900mAh. Pretty impressive, since Panasonic tops out around 3500-3600mAh.)
Yeah, it really caught me off guard the first time I used the site. It was during one of those special celebration discount days where they had the audacity to mark items as literally $0.01 when basically nothing was that price.
For 3D printer filament, which is usually bought in 1kg/2.2lb spools, most places list a 2m sample or a 250g spool to game the search. And my other favorite is the whack-a-mole shipping setup, where one variation might have free shipping, but choose a different color and the shipping jumps to $300+.
With Amazon, I’m seeing a ton more overpriced items “discounted” to prices that are still higher than the competition. If you look at their deals pages, you can find things like portable monitors for $70 (down from $150), while searching that category shows the same monitor (same specs under a different name) for $60.
Here’s as close as I can find right now, since all the lightning deals are ending for the day. There’s a USB laptop docking station that’s “discounted” from $139 to $70. There isn’t an exact match (there usually is), but similar products go for ~$60-$70 (2 HDMI, 4+ USB3 ports, 100W PD, ethernet). What’s funnier is that the specific company’s Amazon site has at least 4 identical docks at slightly different prices.
I just tried it again on desktop and it worked, but the reason was that I downloaded an extension a while ago and forgot about it. When I disabled the extension, it stopped working.
There used to be a way to enable installing any extension on mobile FFx Dev, but I’m not sure if that still works. The desktop extension just changes the user agent string, so that might be another route to enabling it.
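If you want to test the user-agent route without an extension, desktop Firefox also has a built-in pref for it in about:config (the Chrome string below is just an example - any current one should do):

```
general.useragent.override = Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
```

You have to create it as a new string pref if it doesn’t exist yet.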
I use the ChatGPT feature from desktop Firefox with no problems. Maybe it specifically denies Chrome, in which case I bet you could change the user agent string and get it to work.
I use AliExpress for electrical parts (except anything with memory), 3D printer parts, and small crap I don’t mind waiting for, but never anything I would be angry about if it never arrived. Also, nothing I consume or wear or need for safety, and I’m wary of anything that’s supposed to be plugged into the wall for long periods of time unattended.
I wouldn’t say I’ve been surprised, but my expectations are low. It’s all cheap stuff, but as long as you don’t actually need to rely on the stuff you buy, it’s fine. Dollar store quality, with the scent of plastic and cigarettes.
That being said, beware of scams. The one they seem to find acceptable is listing one cheap part, along with variations of the full device. That way it looks like the lowest price in search results, but when you click through, the selected variation is the cheap part. Like, you’ll search for “pliers set” and see a listing for $1, compared to others around $15. When you select it, the product page will have a carrying case for $1 and the various pliers for twice as much as the competition. What’s better is that the case will be selected automatically - not the thing in the picture you clicked on, or the picture you see first in the product page’s gallery.
There are also scam stores that pop up with super low prices compared to others on the site and then disappear overnight, and the cancellation/refund process is a super pain. Contact customer service once, then just submit a claim with your CC company - their refund process will keep telling you to wait another week, and that includes the reps you get on chat. If you’re suspicious and still order, always follow the shipping info. They will estimate a reasonable delivery date and you’ll get a shipping notification, but it will sit in limbo. The shipping folks are separate from the scammers, so if you see the package actually move toward a shipping center, you’re in the clear. If it says “shipping information received” for over a week, you got screwed.
Ignore flash drives/SSDs, batteries, and assume any flashlights are 1/100th the brightness claimed (literally). Oh, and watch shipping costs. Something with free shipping can be 10x the price of the product if you add a second one to your cart.
I wonder if that was born of the Dogecoin tipping system that was around for a while in… 2017/2018? I forget.
I’m pretty sure they thought the awards/gilding was going to be their best bet to Moneyville after Premium flopped. It’s basically just a rebranding with the ability to gift it.
Since the Snoopocalypse I’ve been using it MUCH more. I’m as surprised as anyone, but without Reddit, Google is complete hot garbage. I used to use Google 95% of the time and didn’t realize how many times I gave up and added “reddit” in the query. It’s unusable.
Out of principle, I’ve made SearXNG my default, but I don’t shun Bing at all now. I occasionally use DDG, but anything relatively technical just doesn’t come up much there.
Don’t worry about possibly not being the first person to post this thought. I had the same thought when I saw the headline and then thought the same thing as the last line before reading the body. Against my better judgement, I thought I wouldn’t be the first to post a comment on this, but I actually am! Nice to know there is a bottom of this hole.
I would assume you could apply for an exemption. All they would have to do is set some “your non-structure obstructed property must be this big” number and check the survey.
Worst case: maybe they’ll be lazy and include the building and driveway/parking lot, in which case you’d have to appeal with some pics or some other proof, or meet extra requirements, like a max dB level, specific exemption hours, etc.
Best case: They check a Google Earth view before finalizing the denial, saving you the appeal, and you get the thumbs up.
Betterest-best best case: They only care about it if someone complains or they’re at the property for some other reason.
Edit: I’m talking from an average Joe’s perspective. The real best case is that everyone moves to electric lawn tools and minimal manicured lawns, but unless they’re giving out free upgrades, it doesn’t seem reasonable to just flip a switch with little warning. I COULD see them setting a cut off date a few years out, where they do a complete ban after the partial one.
I see nothing fnord unusual about it.