It means they admit they were wrong and you were correct. As in, “I have been corrected.”
The amount of VRAM isn’t really the issue. Even an extremely good GPU like the 7900 XTX (with 24GB of VRAM) struggles with some ray tracing workloads, because ray tracing requires specially designed hardware to run efficiently.
Your first sentence asserts the claim to be proved. Actually, it asserts something much stronger, which is also false: e.g. 0.101001000100001… is a non-repeating decimal that doesn’t contain the digit 2. While pi is known to be irrational and transcendental, there is no known proof that it is normal or even disjunctive, and such proofs are generally hard to come by except for pathological numbers constructed specifically to be (or not be) normal/disjunctive.
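To make the counterexample concrete, here’s a tiny sketch that generates the digits of 0.101001000100001… (a 1 followed by one 0, then a 1 followed by two 0s, and so on); the function name is mine. The expansion never repeats, yet no digit 2 ever appears:

```python
def counterexample_digits(n):
    """First n digits after the decimal point of 0.101001000100001..."""
    digits = []
    run = 1  # length of the next run of zeros
    while len(digits) < n:
        digits.append("1")
        digits.extend("0" * run)
        run += 1
    return "".join(digits[:n])

print(counterexample_digits(20))  # 10100100010000100000
```

The growing gaps between 1s rule out any repeating period, so the number is irrational while still missing most digits.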
I’m a hobbyist speed typer (200wpm+) and generally prefer linear switches. I almost always bottom out. To reduce the impact of bottoming out, if this is an issue for you, you can:
use a softer and/or more flexible plate. An aluminum or brass plate is very stiff and will absorb less of the impact than an FR4 or polycarbonate plate. The mounting style of the keyboard also matters: e.g. a gasket mount has the PCB “floating” on rubber pads that absorb shock, while a plate screwed directly into a metal chassis will absorb almost nothing. The plate/PCB can also have flex cuts added to improve flexibility and absorb more shock.
use switch springs with a higher actuation force. A common choice is 63.5g or 68g, a little heavier than the Akko switches’ ~45g. The spring can also have a variable profile such that resistance ramps up as the spring is compressed, which cushions the impact a tiny bit. I use extra-long springs, which have the opposite effect: a more constant force curve.
use rubber o-rings on the switches. This will make them feel squishy and I don’t really recommend it, but it’s an option if replacing your keyboard isn’t.
FWIW I mostly use an Odin75 keyboard with an FR4 plate and stock alpaca switches. This is gasket mount + soft plate with lots of flex cuts, so it’s a reasonably soft typing experience.
Web of trust
foo terminal
foot
No. sqrt(2) is an irrational number characterized as the positive solution of x^2 - 2 = 0. It’s described by a very small amount of data, and even its decimal expansion can be computed to any precision by a simple algorithm.
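For instance, here’s one simple such algorithm (among many), using nothing but stdlib integer arithmetic; the helper name is mine:

```python
import math

def sqrt2_digits(n):
    """Decimal expansion of sqrt(2), truncated to n digits after the point."""
    # isqrt(2 * 10**(2n)) == floor(sqrt(2) * 10**n)
    x = math.isqrt(2 * 10 ** (2 * n))
    s = str(x)
    return s[0] + "." + s[1:]

print(sqrt2_digits(10))  # 1.4142135623
```

A few lines of code pin down arbitrarily many digits, which is exactly the sense in which the number is “small.”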
Yeah. Normal whoppers are crunchy. 1 in 4 whoppers is soggy and chewy and hard to eat
Whoppers are good but the risk of getting a bad one is not worth it. Ech
I worked with Progress via an ERP that had been untouched and unsupported for almost 20 years. Damn easy to break stuff, more footguns than SQL somehow
This has nothing to do with Windows or Linux. Crowdstrike has in fact broken Linux installs in a fairly similar way before.
Sure, throw people in jail who haven’t committed a crime, that’ll fix all kinds of systemic issues
Catch and then what? Return to what?
It sounds like you don’t understand the complexity of the game. Despite being finite, the number of possible games is extremely large.
U good?
Your first two paragraphs seem to rail against a philosophical conclusion made by the authors by virtue of carrying out the Turing test. Something like “this is evidence of machine consciousness” for example. I don’t really get the impression that any such claim was made, or that more education in epistemology would have changed anything.
In a world where GPT4 exists, the question of whether one person can be fooled by one chatbot in one conversation is long since uninteresting. The question of whether specific models can achieve statistically significant success is maybe a bit more compelling, not because it’s some kind of breakthrough but because it makes a generalized claim.
Re: your edit, Turing explicitly puts forth the imitation game scenario as a practicable proxy for the question of machine intelligence, “can machines think?”, and directly argues that it is a reasonable proxy for that question. His arguments, as he admits, are not rigorous proofs or strongly held convictions but “recitations tending to produce belief,” insofar as they are hard to rebut, or their rebuttals tend to be flawed. The whole paper was to poke at the apparent differences between (a futuristic) machine intelligence and human intelligence. In this way, the Turing test is indeed a measure of intelligence. That’s not to say that a machine passing the test is somehow in possession of a human-like mind or has reached a significant milestone of intelligence.
I don’t think the methodology is the issue with this one. 500 people can absolutely be a legitimate sample size: under basic assumptions about the sample being representative and the effect size being sufficiently large, you don’t need more than a couple hundred participants to make statistically significant observations. And 54% being close to 50% doesn’t mean the result is inconclusive; with an ideal sample it means people couldn’t reliably differentiate the human from the bot, which is presumably what the researchers believed is of interest.
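As a rough back-of-the-envelope check (a one-sided normal approximation to a binomial test, assuming 54% maps to 270/500; the function name is mine):

```python
import math

def one_sided_p(successes, n, p0=0.5):
    """Approximate one-sided p-value for seeing at least `successes`
    out of `n` trials when the true rate is p0 (normal approximation)."""
    se = math.sqrt(p0 * (1 - p0) / n)        # standard error under the null
    z = (successes / n - p0) / se            # z-score of the observed rate
    return 0.5 * math.erfc(z / math.sqrt(2)) # upper-tail normal probability

# 54% of 500 participants: 270 "correct" identifications
print(round(one_sided_p(270, 500), 3))  # ~0.037
```

So even a rate as close to chance as 54% clears the conventional 0.05 threshold at n=500, which is the whole point about effect size vs. sample size.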
I don’t really query, but it’s good enough at code generation to be occasionally useful. If it can spit out 100 lines of code that is generally reasonable, it’s faster to adjust the generated code than to write it all from scratch. More generally, it’s good for generating responses whose content and structure are easy to verify (like a question you already know the answer to), with the value being in the time saved rather than the content itself.
There’s a search field on the front page. The rest is blank because I used the (default) “exact match” option, so the rest of the page is (by random chance) filled with spaces. The search function presumably uses knowledge about the algorithm used to generate the pages to locate a given string in a reasonable amount of time, rather than naively looking through each page.
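A minimal sketch of the idea (not the site’s actual algorithm; all names and the 40-character page length are illustrative): if each page’s content is a reversible function of its address, then “searching” is just running the function backwards on the query string, with no brute-force scan over pages:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols, as on the site

def page_for_address(addr, length=40):
    """Decode an integer address into a page of text (base-29 digits)."""
    chars = []
    for _ in range(length):
        addr, d = divmod(addr, len(ALPHABET))
        chars.append(ALPHABET[d])
    return "".join(chars)

def address_for_text(text, length=40):
    """Invert page_for_address: the address of the page that starts with
    `text`, padded with spaces. An exact match is found instantly."""
    padded = text.ljust(length)
    addr = 0
    for ch in reversed(padded):
        addr = addr * len(ALPHABET) + ALPHABET.index(ch)
    return addr

addr = address_for_text("hello world")
assert page_for_address(addr).startswith("hello world")
```

Because the mapping is a bijection, every possible page “exists” at some address, and the space-padding explains why an exact-match result is a page of mostly blanks.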
That’s what the /etc/foo.conf.d/ is for :DDDDD