• 0 Posts
  • 234 Comments
Joined 5 months ago
Cake day: March 31st, 2025


  • Transformers in antennas are just transformers, but you have to use ceramic cores (ferrites) that are right for your band. I think what you might be trying to build is a wideband antenna of some sort, but for UHF, which is likely in this case, I’d recommend some kind of log-periodic antenna instead (it just works, directional) or some kind of spiral antenna (it just works, nondirectional). You can make both of these at home.
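
    To give a feel for how a log-periodic dipole array scales, here is a minimal Python sketch of the element lengths and spacings. The tau and sigma values and the 430–900 MHz example band are assumptions for illustration, not a worked design from the comment above.

    ```python
    # Rough LPDA sizing sketch: each dipole is the previous one scaled by tau,
    # spaced proportionally to its length via sigma. Assumed example values only.

    C = 299_792_458  # speed of light, m/s

    def lpda_elements(f_min_hz, f_max_hz, tau=0.88, sigma=0.06):
        """Return (length, position) pairs for a simple LPDA covering f_min..f_max."""
        elements = []
        length = C / (2 * f_min_hz)     # longest dipole ~ half-wave at lowest frequency
        shortest = C / (2 * f_max_hz)   # stop once elements are shorter than this
        position = 0.0
        while length >= shortest:
            elements.append((length, position))
            position += 2 * sigma * length  # distance to the next (shorter) element
            length *= tau                   # each element scaled down by tau
        return elements

    # Example: a UHF array covering roughly 430-900 MHz
    for l, x in lpda_elements(430e6, 900e6):
        print(f"element length {l*100:5.1f} cm at {x*100:5.1f} cm from the back")
    ```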


  • I’m not a pharmacist, I said I work in pharma. Specifically, I’m in drug development. To simplify my job just for you, what I do is design and make tools for biologists to do whatever they want to whatever protein they want. These tools get gradually improved in multiple ways, in ways that are hard to predict without testing, and in an arduous process that can easily take multiple years and immense capital, some of these (again, hard to predict at the beginning) can go to clinical trials, which take even more money, last a couple more years, and where 90% of them fail anyway. There’s no amount of pattern matching or simulation that can get you out of this problem; if you want to figure it out, you have to go to the lab and get real-world data, and if you don’t do this, you won’t ever figure it out.
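
    A back-of-envelope on what that attrition implies; every number below is an assumed round figure for illustration, not data from any particular program.

    ```python
    # Illustrative clinical attrition arithmetic (all numbers assumed):
    # if ~90% of candidates that reach the clinic fail, many multi-year
    # programs have to be started to expect a single approval.

    p_clinical_success = 0.10          # ~90% of clinical candidates fail (assumed)
    candidates_needed = 1 / p_clinical_success
    years_preclinical = 5              # assumed, varies widely by program
    years_clinical = 7                 # assumed

    print(f"~{candidates_needed:.0f} clinical candidates per expected approval,")
    print(f"each representing roughly {years_preclinical + years_clinical} years of work")
    ```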

    I’ve mentioned Alzheimer’s for a reason. Not only are there hundreds of teams studying it now, that has been the case for maybe 30 years and a bit, and in all this time progress in understanding the mechanism of this disease has been abysmal and perhaps misguided in the first place - the popular amyloid hypothesis has been completely barren in terms of finding a treatment. All interventions based on it tried in the past 25 years have failed. Unless you count approval of drugs that don’t work as a sort of success, then take it ig. It just seems that for all these years there are multiple pieces of the puzzle missing, and we might not even know what they are. We don’t even precisely know what the role of amyloid plaque is: whether it is the cause of the illness, the result of adaptation to something else, an inert side effect, or what. What we know is that it’s associated with the disease, but also that removing it with antibodies (those failed approved drugs) does nothing. This state might very well continue into the future, maybe for decades more, and we have no way of knowing either way. There are some other hypotheses being tested, but again, nothing will be known about them before someone gets any kind of result. Of course you don’t have to trust me, but consider what Derek Lowe has to say about it, as he’s been in this field for thirty years now and has overseen development of many pharmaceuticals.

    In terms of hypothetical extreme life extension, mental health would be pretty important, because the prospect of living for decades with an incurable mental disorder would be downright miserable, but also things like Alzheimer’s would prevent that extreme life extension in the first place. Also, I wanted to highlight how many gaps in knowledge there are in neurobiology, which was talked about just two comments up, and which would be pretty important if, maybe, someone wanted to make a silicon copy of a human brain, perhaps.

    For depression in particular, there’s been a reasonable shot at a new mechanism a couple of years back, but it’s touching on new things, and at any rate we might have a new pharmaceutical out of this in perhaps 15 years or so. Then maybe it’ll turn out that it might be good for some kind of dementia, or maybe an autoimmune disease, or something else, but before anyone tries that - and this is conditional on finding that pharmaceutical in the first place - it’s completely unknown what it might bring. Or it might just turn out not to work for some other obscure reason anyway.

    Maybe to you the prospect of a solution to a problem being years or decades away in the best case, or maybe never coming, seems completely alien, but in many fields relying on real-world data it’s everyday reality, and with the kind of background that I have, your tech solutionism comes off as extremely arrogant and misguided. But if you want to listen to Kurzweilite drivel instead, I won’t stop you, have a nice day.








  • funny thing that you say it, because my normal day job is in pharma. nobody serious tries to make people live forever, there are enough real problems with treating and curing diseases as it is. we don’t know the first thing about the primary cause of alzheimers that would be actually useful in treating it or even in keeping it from getting worse; we might have just barely figured out, maybe, what the cause of depression is; we are clueless about the finer details of other mental illnesses, and there’s a dazzling array of thousands upon thousands of cancers and autoimmune and degenerative diseases, and if you wanted immortality, you’d have to figure it all out, and make it work, and then some. whether you like it or not, people will keep dying, maybe slightly later and maybe after enjoying more years of healthy life, but it will still happen - as long as climate change doesn’t get too hard in the way, that is, because then even that won’t happen

    but there are also grifters and fantasists and downright idiots who thought their favourite scifi is a documentary, and they sincerely believe, or sell that belief, that cryonics or brain uploading or unrestricted use of magic pills or fusion with the holy machine or a variety of other overhyped bullshit is real and will save them, and that they will become oligarchs eternal. this is especially true of current tech billionaires who grew up on these scifi works and took them too seriously, and who also have disposable money to be grifted from, and in particular peter thiel, who has a downright pathological fear of death after being traumatized as a kid, when he was around a slave-operated uranium mine in occupied Namibia that fueled the South African nuclear weapons program (i’m not making this up, look it up on your own)

    immortality is maybe the last great promise of alchemy that wasn’t either solved by modern science or abandoned, and futurists and altmed and others will proudly carry that mantle - as long as the cash flows, that is


  • “even LLM development, trying to copy the way brains work”

    no, i’m gonna stop you right there. llms weren’t made to mimic the human brain, or anything like that; llms were made as tools to study language. it’s categorically impossible for llms to provide anything leading to agi; these things don’t think, don’t research, don’t hallucinate, don’t have agency or cognition, don’t have working memory the way humans do; these things do one thing and one thing only: generate the string of tokens that was most likely to follow the given prompt, given what was in the training data. that’s it; that’s all there is to it. i know you were promised superhuman intelligence in a box, but if you’re using a chatbot, all the intelligence there is your own; if you think otherwise you’re falling for a massive ELIZA effect, a thing that has been around for fifty years now, augmented by a blizzard of openai marketing propaganda, helped by tech journalists that never questioned these hypesters, funded by fake nerd billionaires of silicon valley who misremembered old scifi and went around building torment nexii, but i digress
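
    for what it’s worth, that “most likely next token” loop really is this small. a toy Python sketch below, with a made-up lookup table standing in for the model weights; the vocabulary and probabilities are invented for illustration only.

    ```python
    import random

    # Toy sketch of autoregressive generation: repeatedly sample the next token
    # from a distribution conditioned on the tokens so far. The "model" here is
    # a hand-written table, standing in for billions of learned weights.

    toy_model = {
        ("the", "cat"): {"sat": 0.7, "ran": 0.2, "is": 0.1},
        ("cat", "sat"): {"on": 0.9, "down": 0.1},
        ("sat", "on"): {"the": 0.8, "a": 0.2},
        ("on", "the"): {"mat": 0.6, "roof": 0.4},
    }

    def generate(prompt, steps=4):
        tokens = list(prompt)
        for _ in range(steps):
            context = tuple(tokens[-2:])          # fixed-size context window
            dist = toy_model.get(context)
            if dist is None:                      # nothing to predict from
                break
            words, probs = zip(*dist.items())
            tokens.append(random.choices(words, weights=probs)[0])
        return " ".join(tokens)

    print(generate(["the", "cat"]))   # e.g. "the cat sat on the mat"
    ```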

    “Where does intelligence come from? Can it be duplicated in other ways?”

    i’m not saying that intelligence is exclusively, always, entirely a biological thing, but i do think that the state of neuroscience, psychology, and also the computational side of research is woefully short of anything resembling a pathway to a solution to this problem. instead, this is what i think is going to happen:

    llms are a dead end in this sense, but these things also take the bulk of ai/ml funding now, so all the other approaches are starved of funding. historically, after every period of intense hype of this nature comes an ai winter; this one is bound to happen too, and it might be worse, since it looks like the hype also fueled an investment bubble propping up a large part of the american economy, so when the bubble pops, on top of the historically usual negative sentiment stemming from overpromising and underdelivering there’s gonna be resentment about aibros worming their way into management and causing mass layoffs, replacing juniors with idiot boxes and lobotomizing the future seniors pipeline, etc etc.

    what typically happened next is that a steady supply of research in the cs/math departments of many universities accumulated over low tens of years, and when some new good-enough development happened, and everyone had forgotten the previous failures, the hype train started again. this step will be slowed down both by the current american administration cutting off funding for many types of research, and by the incoming bubble crash that will make people remember for a long time what kind of thing aibros are up to.

    when, not if, the most credulous investors’ money thrown into openai, including softbank’s, gets burnt through - which i think might take a couple of years tops; i would be very surprised if any of these overgrown startups isn’t a smoking crater within five years - very few people will want to have anything to do with any of this, and when the next ai spring happens, it might be well into the 40s or 50s, and by then i guess climate change effects will be too strong to just ignore and try to catch another hype train; there are gonna be much more pressing issues. this is why i think that anything resembling agi won’t come up during my lifetime, and if you want to discuss gpt41 overlords in the year 3107, feel free to discuss it with someone else.



  • fullsquare@awful.systems to Fuck AI@lemmy.world · Good old 2013...

    The concept of immortality has existed long before alchemy. Long before John Dee was born even.

    It’s a dream that people have been actively working on for centuries.

    While I will say that mercury pills are unlikely to get us there, immortality itself is not a pipe dream.

    Humans will not stop until we create an immortal being.




  • fullsquare@awful.systems to Fuck AI@lemmy.world · Good old 2013...

    “AGI that quickly transitioned to ASI (since that’s theoretically what would happen once the first happens)”

    extremely loud incorrect buzzer

    yeah, according to people who also say that idiot plagiarism machines are gonna be machine gods one day, you’ll all see - and who are, coincidentally, the same people who make them





  • you say STEM, but you seem to mean almost exclusively computer touchers; the already-mentioned biologists or the variety of engineers likely won’t have these problems (i’m not gonna be excessively smug about this, because my field will destroy you physically while still being STEM and not particularly glorious)

    also, it’s not a complete jobocalypse: 93% of fresh CS grads are still employed. they might have comparatively shittier jobs, but it’s not a disaster (unless the picture is actually much bleaker in that the unemployment is, say, concentrated in the last 2 years of graduates, but even in that case it’s maybe 10%, 12% tops for the worst affected - see the rough arithmetic below). unless you mean their unlimited libertarian-flavoured greed coming through in it, then yeah, it’s pretty funny
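
    a sketch of the arithmetic behind that 10–12% worst-case guess; the cohort count and the baseline rate for older graduates are assumptions chosen only to show how the numbers could shake out, not real labour statistics.

    ```python
    # Back-of-envelope: if ~93% of recent CS grads are employed (7% unemployed),
    # but the pain is concentrated in the newest graduates, what rate does that
    # imply for them? All inputs are assumed for illustration.

    headline_rate = 0.07     # ~7% unemployed across all recent grads
    cohorts = 5              # assume "recent grads" spans ~5 graduation years, equal sizes
    newest = 2               # the cohorts suspected to carry most of the pain
    baseline_rate = 0.04     # assumed rate for the older, already-settled cohorts

    # headline = weighted average of older and newest cohorts; solve for the newest
    older = cohorts - newest
    newest_rate = (headline_rate * cohorts - baseline_rate * older) / newest
    print(f"implied unemployment in the newest cohorts: {newest_rate:.1%}")  # ~11.5%
    ```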

    even then, there’s gonna be a funny rebound when all these genai companies implode; maybe partially not in the top-earner countries, but places like eastern europe or india will fill that openai-sized crater pretty handily - if that mythical outsourcing to ai happened in the first place, that is