There’s a transaction fee - the more you pay, the higher priority you get (since miners take a cut).
The vulnerability has nothing to do with accidentally logging sensitive information. It’s about crafting a special payload that, when logged, gets glibc to write to memory it isn’t supposed to touch because it didn’t allocate properly. glibc goes past the bounds of its allocation and writes into other memory regions, which an attacker can carefully craft to look however they want.
Other languages wouldn’t have this issue because
they wouldn’t willy-nilly allocate a raw pointer directly like this, but would build a safer abstraction on top (like a C++ vector), and
they’d have bounds checking wherever the compiler can’t prove an access stays inside valid memory. (Manually calling .at() in C++, or even better, using a language like Rust which makes bounds checks the default and makes unchecked access opt-in via a special method.)
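To make that second point concrete, here’s a minimal Rust sketch - the buffer and the out-of-range index are made up for illustration and have nothing to do with the actual glibc code:

```rust
fn main() {
    let buf = vec![0u8; 8];
    let idx: usize = 42; // pretend this came from attacker-controlled input

    // Default indexing is bounds checked: `buf[idx]` would panic here
    // instead of silently reading or writing outside the allocation.

    // The non-panicking checked form makes the failure explicit:
    match buf.get(idx) {
        Some(byte) => println!("in bounds: {byte}"),
        None => println!("index {idx} is out of bounds, no memory touched"),
    }

    // Unchecked access exists, but it's opt-in: you need `unsafe` and a
    // specially named method. The rough C++ analogue is vec.at(i)
    // (checked) vs vec[i] (unchecked).
    // let byte = unsafe { *buf.get_unchecked(idx) }; // UB if idx >= buf.len()
}
```

The point is that the dangerous path is loud, searchable, and never the thing you reach for by accident.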
Edit: C’s poor security track record is well known - it’s the primary motivator for introducing Rust into the kernel. Google and Microsoft both report that around 70% of their security vulnerabilities are memory-safety issues of the kind C invites, the curl maintainer has talked about how they use various sanitizers and best practices and still run into the same issues, and even ubiquitous, security-critical tools like sudo and polkit suffer from them regularly.
The solution here, generally afaik, is to give a specific deadline before you go public. It forces the other party to either patch it or face the fallout when the disclosure goes live. 90 days is the standard timeframe since it’s enough time to patch and roll out a fix, but still puts pressure on making it happen.
Not in this one - iirc they actually reverse engineered and worked off of Apple’s libraries, rather than going through proxies.
manpages aren’t guides though - they don’t help much when you’re learning a new tool, especially a complicated one. They’re comprehensive references; some literally span hundreds of pages. Useful when you know what you’re doing and what you’re looking for, not great for picking a tool up from scratch.
In which case the -a isn’t needed.
Better not have created any new files tho - git commit -a doesn’t pick up untracked files without an add first.
This is good for precisely the single user case - potentially malicious services on your system can’t view things they otherwise would be able to, or access resources they don’t need. Even if they’re running under the same user.
As a Linux user (and ex arch user btw), I’m deeply offended.
Linus has stepped away from kernel development before, and probably will again. Life continues on.
Second person excited for bcachefs, I’m planning on swapping over as soon as it supports scrubbing.
Right, but squashed commits don’t scale for large PRs. You could argue that large PRs should be avoided, but sometimes they make sense. And in the case where you do have a large PR, a commit by commit review makes a lot of sense to keep your history clean.
Large features that are relatively isolated from the rest of the codebase make perfect sense to do in a different branch before merging it in - you don’t merge in half broken code. Squashing a large feature into one commit gets rid of any useful history that branch may have had.
Yeah, but Phabricator and Gerrit are entirely separate workflows from GitHub, and a lot of people prefer those workflows because they encourage better histories and reviews. It helps you get rid of the “fixed typos” type of commits, while still letting you make larger PRs.
GitHub obviously does let you keep a clean git history, but the code review workflow in GH just doesn’t encourage reviewing commits.
How much of that is what GitHub encourages and how much of that is what users prefer? Plenty of users seem to enjoy Phabricator / Gerrit for code review in practice precisely because of their workflows.
Also, GitHub PRs at least to me feel like they encourage reviewing changes by the total diff of the entire PR, and not each commit. I don’t want a slog of commits that don’t add any value - it just makes doing things like reverts more annoying. Stuff like Gerrit and Phabricator enforce reviews by making you review individual commits / changes / whatever you want to call them, and not branch diffs.
The version control system, and all the associated code, isn’t tied to any single system - yes.
When you make a project with git, what you’re doing is essentially making a database to control a sequence of changes (or history) that build up your codebase. You can send this database to someone else (or in other words they can clone it), and they can make their own changes on top. If they want to send you changes back, they can send you “patches” to apply on your own database (or rather, your own history).
Note: everything here is decentralized. Everyone has the entire history, and they send history they want others to have. Now, this can be a hassle with many developers involved. You can imagine sending everyone patches, and them putting it into their own tree, and vice versa. It’s a pain for coordination. So in practice what ends up happening is we have a few (or often, one) repo that works as a source of truth. Everyone sends patches to that repo - and pulls down patches from that repo. That’s where code forges like GitHub come in. Their job is to control this source of truth repo, and essentially coordinate what patches are “officially” in the code.
In practice, even things like the Linux kernel have sources of truth. Linus’s tree is the “true” Linux, all the maintainers have their own tree that works as the source of truth for their own version of Linux (which they send changes back to Linus when ready), and so on. Your company might have their own repo for their internal project to send to the maintainers as well.
In practice that means everyone has a copy of the entire repo, but we designate one repo as the real one for the project at hand. This entire (somewhat convoluted) mess is just a way to decide “where do I get my changes from”. Sending your changes to everyone doesn’t scale, so in practice we just choose who everyone coordinates with.
Git is completely decentralized (it’s just a database - and everyone has their own copy), but project development isn’t. Code forges like GitHub just represent that.
Only for its child processes, e.g. call a bash script with a modified PATH. Still problematic though.
…I suppose they could also modify your .bashrc equivalent.
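A rough Rust sketch of that child-process point (the script path and the injected PATH entry are made up for illustration): the modified PATH is only visible to the spawned child, not to the parent or anything else on the system.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // The overridden PATH applies only to this child process (and whatever
    // it spawns in turn) - the parent's environment is untouched.
    let status = Command::new("bash")
        .arg("./some-script.sh") // hypothetical script, just for illustration
        .env("PATH", "/tmp/not-the-real-bin:/usr/bin:/bin")
        .status()?;

    println!("child exited with: {status}");
    Ok(())
}
```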
Idk about everyone else but I was fine with the specs. A basic Linux machine that can hook up to the network and run simple python scripts was plenty for a ton of use cases. They didn’t need to be desktop competitors. The market didn’t need to be small form factor high performance machines, and I’d argue it wasn’t.
You can do rollbacks if you’re using something like home-manager on a foreign distribution. It’s just a bit more janky admittedly.