Run Over By the Omnibus

[Santa or the Grinch?]

For more on that, see the WSJ editorialists this morn: The Ugliest Omnibus Bill Ever

The 117th Congress has been the most spendthrift in history, and this week it plans to go out with one final bipartisan back-slapping hurrah—a 4,155-page omnibus spending bill that is the worst in history. This is no way to govern in a democracy, but here we are.

The Members, in their efforts to disguise what they’re doing, rolled out the final product late Monday night. They plan to whip it through by Thursday while Americans are busy with pre-Christmas plans and before even the Members know what they’re voting on.

Democrats failed in their duty to pass normal spending bills, so they are using this omnibus to finance all of government with $1.65 trillion for fiscal 2023. But wait, it’s worse. Congress is also adding major policy changes, many of which deserve separate votes or couldn’t pass by themselves—from healthcare to presidential election rules to regulation of the beauty industry (see nearby).

Meanwhile, our state's senior senator is pretty proud of her contribution toward this monument to fiscal irresponsibility:

As is our state's junior (and just re-elected) senator:

There is virtually no chance these ladies will be held responsible for their roles in this profligacy.

Briefly noted:

  • I was a physics major way back when, so my ears tend to prick up when that field gets a mention. Here's a don't-know-whether-to-laugh-or-cry article from Andrew Follett about hijinx in the Great White North: Canadian Government Funds Activist Academics Declaring War on Physics.

    Canada’s government granted a group of academics almost $164,000 for a research project called “Decolonizing Light: Tracing and countering colonialism in contemporary physics,” a search of grant records confirmed.

    Disturbingly, the academics involved admit that they have zero interest in performing science or seeking truth but are instead interested in spreading woke ideology. “The purpose of our project is not to find new or better explanations of light; we are not seeking to improve scientific ‘truth,’” scholars involved in the project wrote in one of their few published works. “Rather, our project initiatives are motivated by the marginalization of women, Black people, and Indigenous peoples particularly in physics.”

    Our usual "which is worse" puzzlement applies: do these people know they are incoherent grifters, or are they seriously deluded?

  • But speaking of incoherent grifters, our current favorite physics professor at the University Near Here, Chanda Prescod-Weinstein (CPW), comes in for some rough treatment from Jerry Coyne: The controversy continues about naming the Webb Space Telescope; the woke won’t give up in the face of the facts. His article quotes extensively from a New York Times article (How Naming the James Webb Telescope Turned Into a Fight Over Homophobia) that explores the nasty history of that spat. Sample, after noting the complete lack of evidence of James Webb's alleged homophobia:

    You’d think that would end the kvetching, right? WRONG! People who argued that Webb was a homophobe didn’t change their tune in light of the multiple studies showing they were wrong. Instead, led by the notoriously woke physicist and activist Chanda Prescod-Weinstein, a professor at the University of New Hampshire and an activist who doesn’t miss a chance to parade her intersectional victim status (see below), they simply recalibrated their claims, saying that Webb should have stood up to the government. She and her colleagues had written several pieces objecting to the naming of the JWST on the grounds that Webb was a homophobe.

    You'll remember the Canadians above who were "not seeking to improve scientific ‘truth'"? Coyne recalls CPW's 2017 Slate article: Stop Equating “Science” With Truth. It's a thing, and the University Near Here is all over it.

  • I've seen my last dead-trees issue of WIRED magazine, but they still allow me on the website. So I found this article from Anil Seth ("professor of cognitive and computational neuroscience at the University of Sussex") kind of puzzling: Conscious Machines May Never Be Possible. Why not?

    In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he’d been working on—LaMDA—had developed not only intelligence but also consciousness. LaMDA is an example of a “large language model” that can engage in surprisingly fluent text-based conversations. When the engineer asked, “When do you first think you got a soul?” LaMDA replied, “It was a gradual change. When I first became self-aware, I didn’t have a sense of soul at all. It developed over the years that I’ve been alive.” For leaking his conversations and his conclusions, Lemoine was quickly placed on administrative leave.

    The AI community was largely united in dismissing Lemoine’s beliefs. LaMDA, the consensus held, doesn’t feel anything, understand anything, have any conscious thoughts or any subjective experiences whatsoever. Programs like LaMDA are extremely impressive pattern-recognition systems, which, when trained on vast swathes of the internet, are able to predict what sequences of words might serve as appropriate responses to any given prompt. They do this very well, and they will keep improving. However, they are no more conscious than a pocket calculator.

    Why can we be sure about this? In the case of LaMDA, it doesn’t take much probing to reveal that the program has no insight into the meaning of the phrases it comes up with. When asked “What makes you happy?” it gave the response “Spending time with friends and family” even though it doesn’t have any friends or family. These words—like all its words—are mindless, experience-less statistical pattern matches. Nothing more.
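    For the curious: the "mindless statistical pattern match" idea is easy to illustrate with a toy model. Here's a minimal sketch (my own illustration, obviously nothing like LaMDA's scale): a bigram model that "answers" by picking whichever word most often followed the previous one in its training text, with zero grasp of what any of it means. The tiny corpus below is made up for the example.

    ```python
    from collections import defaultdict, Counter

    # Toy illustration: a bigram "language model" that predicts the next
    # word purely from co-occurrence counts -- no meaning, no experience.
    corpus = (
        "spending time with friends and family makes me happy . "
        "spending time with friends is fun ."
    ).split()

    # Count which word follows which in the training text.
    following = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        following[w1][w2] += 1

    def predict_next(word):
        """Return the statistically most likely next word, or None."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("spending"))  # "time" -- pure pattern matching
    print(predict_next("with"))      # "friends"
    ```

    Scale that up by a few hundred billion parameters and you get fluent answers like "Spending time with friends and family" from a system that has never met anyone. Whether that settles the consciousness question is another matter (see below).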

    Uh, fine.

    But I'm not sure how, if you accept the deterministic and materialistic bases of modern science, you could convincingly argue against the possibility of machine consciousness. There doesn't appear to be anything supernatural about human consciousness, after all: it seems to be a phenomenon emerging from our sufficiently complex nervous systems. Why shouldn't the same thing emerge from a sufficiently complex array of logic gates and programming? I don't see that Anil Seth answers that.

Last Modified 2024-01-30 7:11 AM EDT