

I like them, but I don’t know much about playmat quality.
Most regulars at my LGS use them when playing Lorcana.
But the official card sleeves are not good; most people use third-party ones.
I think if anything they would be biased towards having fewer allergies than the general population, which suggests that 0.21% (about 1 in 500) is a reasonable lower bound for how rare a moon dust allergy could be.
Assuming a representative sample, the best point estimate is 1/12 (8.33%), and the 95% confidence interval is 0.21% to 39%.
Longer explanation here: https://lemmy.zip/comment/19753854
The number of allergic people in a population of size N can be modeled as a Binomial(N, p) distribution, where p is the probability that any individual person is allergic.
The maximum likelihood estimate for p when we observe 1 allergic person out of 12 is just 1/12, or 8.33%. This is our best guess if we had to name an exact number.
We can get a 95% confidence interval on the value of p using the Clopper-Pearson method with the following R code:
> binom.test(x=1, n=12, p=1/12)
…
95 percent confidence interval:
0.002107593 0.384796165
…
So we know with 95% confidence that the probability that any individual person is allergic to moon dust is within the range 0.21% to 39%.
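If you don’t have R handy, the same interval can be reproduced in a short pure-Python sketch (no external packages; the bisection approach here is my own illustration, not from the linked comment):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion, found by bisecting the binomial tail probabilities."""
    def solve(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # bisect to ~1e-18 precision
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower bound: the p where P(X >= x | p) rises to alpha/2
    lower = 0.0 if x == 0 else solve(lambda p: 1 - binom_cdf(x - 1, n, p) <= alpha / 2)
    # upper bound: the p where P(X <= x | p) falls to alpha/2
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) > alpha / 2)
    return lower, upper

low, high = clopper_pearson(x=1, n=12)
print(f"{low:.6f}  {high:.6f}")  # ~0.002108  0.384796, matching binom.test
```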
Yeah, okay, that’s pretty useless. I agree with them…
This is called context collapse:
Context collapse or “the flattening of multiple audiences into a single context”[1] is a term arising out of the study of human interaction on the internet, especially within social media.[2] Context collapse “generally occurs when a surfeit of different audiences occupy the same space, and a piece of information intended for one audience finds its way to another” with that new audience’s reaction being uncharitable and highly negative for failing to understand the original context.[3]
Neat! I had never heard of this type of chart before, so I looked it up and found this link explaining how they work: https://sixsigmastudyguide.com/xmr-charts/
I think the interpretation of this chart is: in the 2020s, there is a statistically significant change in how many people share the Nobel Prize in Physics (more people are sharing it). We could speculate on what the reason for that could be; all the data tells us is that the effect is meaningful.
You can store the Merkle trees inside of a SQLite database as extra columns attached to the data.
That way you get the benefits of a high-level query language and a robust storage layer as well as the cryptographic verification.
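As a minimal sketch of the idea, assuming Python’s sqlite3 and hashlib (the schema and helper names are made up for illustration, not any real system’s layout):

```python
import hashlib
import sqlite3

# Toy schema: each check-in row stores its content plus a hash covering
# the content and the parent check-in's hash (a hash chain, the simplest
# case of a Merkle DAG), all as ordinary SQLite columns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE checkins (parent TEXT, content BLOB, hash TEXT)")

def commit(content, parent_hash=None):
    h = hashlib.sha256((parent_hash or "").encode() + content).hexdigest()
    db.execute("INSERT INTO checkins VALUES (?, ?, ?)", (parent_hash, content, h))
    return h

def verify():
    """Recompute every stored hash; tampering with any row's content
    or parent link makes some hash fail to match."""
    rows = db.execute("SELECT parent, content, hash FROM checkins")
    return all(
        hashlib.sha256((parent or "").encode() + content).hexdigest() == stored
        for parent, content, stored in rows
    )

h1 = commit(b"first version")
h2 = commit(b"second version", parent_hash=h1)
ok_before = verify()  # all hashes consistent
db.execute("UPDATE checkins SET content = ? WHERE hash = ?", (b"tampered", h1))
ok_after = verify()   # tampering detected
print(ok_before, ok_after)
```

Ordinary SQL queries still work over the same rows, which is the point: the cryptographic structure and the queryable data live in one place.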
In fact, there is a version control system called Fossil which does exactly that:
https://fossil-scm.org/home/doc/trunk/www/fossil-v-git.wiki
The baseline data structures for Fossil and Git are the same, modulo formatting details. Both systems manage a directed acyclic graph (DAG) of Merkle-tree-structured check-in objects. Check-ins are identified by a cryptographic hash of the check-in contents, and each check-in refers to its parent via the parent’s hash.
[…]
The SQL query capabilities of Fossil make it easier to track the changes for one particular file within a project. For example, you can easily find the complete edit history of this one document, or even the same history color-coded by committer. Both questions are simple SQL queries in Fossil, with procedural code only being used to format the result for display. The same result could be obtained from Git, but because the data is in a key/value store, much more procedural code has to be written to walk the data and compute the result. And since that is a lot more work, the question is seldom asked.
The right balance on this is to set it up to only trim whitespace on lines that you have edited, and only on-save.
Emacs has ws-butler for that behavior: https://github.com/lewang/ws-butler
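The core trick can be sketched in a few lines of Python with difflib (this is my own illustration of the idea, not ws-butler’s actual implementation): diff the buffer against its pre-edit state and trim only the lines the edit touched.

```python
import difflib

def butler_trim(original: str, edited: str) -> str:
    """Strip trailing whitespace only on lines the edit touched,
    leaving untouched lines byte-for-byte as they were."""
    orig_lines = original.splitlines()
    new_lines = edited.splitlines()
    matcher = difflib.SequenceMatcher(a=orig_lines, b=new_lines)
    out = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            out.extend(new_lines[j1:j2])  # untouched: keep as-is
        else:
            out.extend(line.rstrip() for line in new_lines[j1:j2])  # edited: trim
    return "\n".join(out) + "\n"

before = "keep this   \nold line\n"
after_edit = "keep this   \nnew line  \n"
print(butler_trim(before, after_edit))
# the untouched first line keeps its trailing spaces; the edited second line is trimmed
```

The benefit over blanket trimming is that your commits never contain whitespace-only changes to lines you didn’t actually edit.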
If undefined behavior is triggered anywhere in the program, then it is allowed by the standard for the process to ask the anthropomorphized compiler to punch you.
100% based and standards-compliant comic
Time to watch this gem again: https://youtu.be/b2F-DItXtZs
Small correction to an otherwise great explanation: SSNs are not recycled after death.
**Q20: *Are Social Security numbers reused after a person dies?***
**A:** No. We do not reassign a Social Security number (SSN) after the number holder’s death. Even though we have issued over 453 million SSNs so far, and we assign about 5 and one-half million new numbers a year, the current numbering system will provide us with enough new numbers for several generations into the future with no changes in the numbering system.
Yes, but it’s a prefix and can’t be used as a word on its own.
I am a native English speaker and I know it. It’s rare though.
Same meaning as in German and apparently we borrowed it from German.
If you’re going to store it for a few days, then it is best to use a recipe that makes concentrate, which you then dilute before drinking. It tends to hold up better.
For the first part, I was like, yeah, that’s pretty much how all C++ GUIs work: a markup file describes the structure, a source file controls the behavior, and a special compiler generates more C++ code based on the markup file to act as glue.
That’s all pretty standard, and it’s annoying, but I didn’t really get why they were making such a big deal out of it.
Missing documentation is also annoying but not uncommon for internal widgets.
What really elevates this from simply annoying to transcendentally bad is the lack of error messages, the undocumented requirement that resource IDs be sequential, and the mandatory IDE plugin. That’s all unforgivable.
What you are looking for is some way to shortcut the process of learning to write an operating system by re-using your existing knowledge of Python.
(I’m not judging that; I understand why you want to do it)
The simple truth is that there is no way to do that. Any solution that involves using Python in a kernel would cost you more in terms of complexity and time than learning C would.
It is rarely worth it to use a language outside of the domains that it is normally used for.
I assume that they mean that OpenCL, which is a traditional GPGPU language, is a very restrictive subset of either C or C++ (both are options) plus some annotations.
In fact, OpenCL toolchains already use the Clang frontend and the LLVM backend, so the experience of using and compiling them is very close to C++.
The talk mentions all of this; it says that a benefit of using full C++ on the GPU over using OpenCL is that you don’t have to deal with all the annoying restrictions and annotations.
I received an actual email requesting a donation from the “Harris Victory Fund” two hours ago.
Here’s the fine print from the email on where the money would go:
The first $41,300/$15,000 from a person/multicandidate committee (“PAC”) will be allocated to the DNC. The next $3,300/$5,000 from a person/PAC will be allocated to Harris for President’s Recount Account. The next $510,000/$255,000 from a person/PAC will be split equally among the Democratic state parties from these states: AK, AL, AR, AZ, CA, CO, CT, DC, DE, FL, GA, HI, IA, ID, IL, IN, KS, KY, LA, MA, MD, ME, MI, MN, MO, MS, MT, NC, ND, NE, NH, NJ, NM, NV, NY, OH, OK, OR, PA, RI, SC, SD, TN, TX, UT, VA, VT, WA, WI, WV, and WY. Any additional funds will be allocated to the DNC, subject to applicable contribution limits.
I appreciate this. It’s a good overview of what it means to be a productive part of a larger context.
I prefer the terms “throughput” for “worker productivity” and “latency” for “work-unit productivity” but I can see why they chose to use their terms.
+1. If your library makes it impossible to recover from errors, then it will be unusable in any program that needs to recover from errors.