It's About Liberty: A Conservative Forum
Topics => Judiciary, Crime, & Courts => Topic started by: Libertas on November 25, 2024, 10:45:58 AM
-
It just makes poetic sense that socialist-run Minnesota, with its radical woke AG, hired a Stanford prof "misinfo expert" to prop up politically-weaponized radical Minnesota election misinfo legislation!
A Stanford 'misinformation specialist' who founded the university's Social Media Lab has been accused in a court filing of fabricating sources in an affidavit supporting new legislation in Minnesota which bans so-called 'election misinformation.'
For a $600-an-hour expert witness fee, Stanford professor Jeff Hancock, whose biography claims he's "well-known for his research on how people use deception with technology," apparently used deception with technology by citing numerous academic works that do not appear to exist, the Minnesota Reformer reports.
At the behest of Minnesota Attorney General Keith Ellison, Hancock recently submitted an affidavit supporting new legislation that bans the use of so-called “deep fake” technology to influence an election. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria for violating First Amendment free speech protections.
Hancock’s expert declaration in support of the deep fake law cites numerous academic works. But several of those sources do not appear to exist, and the lawyers challenging the law say they appear to have been made up by artificial intelligence software like ChatGPT.
As an example, the declaration cites a study called "The Influence of Deepfake Videos on Political Attitudes and Behavior," purportedly published in the Journal of Information Technology & Politics in 2023. However, there is no study by that name in that journal, and academic databases have no record of it existing.
The specific journal pages referenced are from two completely different articles.
"The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT," wrote the plaintiffs' attorneys. "Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question."
Libertarian law professor Eugene Volokh found another fake entry: a study titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," which doesn't appear to exist.
According to the Reformer, if the citations were fabricated by AI, Hancock's entire 12-page declaration may have been cooked up the same way.
According to Frank Bednarz, an attorney for the plaintiffs, supporters of the deep fake law in question have argued that "unlike other speech online, AI-generated content supposedly cannot be countered by fact-checks and education." However, "by calling out the AI-generated fabrication to the court, we demonstrate that the best remedy for false speech remains true speech — not censorship."
https://www.zerohedge.com/political/bigwig-stanford-misinformation-expert-fabricates-evidence-using-ai-court-filing
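(Side note: the database check described above, looking up whether the cited title exists anywhere, is easy to reproduce. Here's a minimal sketch in Python, assuming you have the requests library installed, using Crossref's public REST API as the bibliographic registry. This is just my own illustration of that kind of lookup, not the plaintiffs' actual method.)

```python
# Minimal sketch: sanity-check a citation title against Crossref, a public
# registry of published scholarly works (api.crossref.org).
import requests

def citation_exists(title: str) -> bool:
    """Return True if Crossref lists a work with this exact title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for found_title in item.get("title", []):
            if found_title.strip().lower() == title.strip().lower():
                return True  # a registered work carries this exact title
    return False  # nothing in the registry matches

# The citation from Hancock's declaration:
print(citation_exists(
    "The Influence of Deepfake Videos on Political Attitudes and Behavior"
))
```

If that prints False, no registered journal article carries the title, which is exactly the kind of absence the lawyers flagged.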
A liar, spreading lies about other people lying who were not lying but who were defamed by a liar to support legislation pushed by other liars to justify persecuting their political enemies...
If that is not a massive sign of a sick, disturbed, despotic bunch of low-life rogue scumbags, then people are as blind as they are deaf and dumb!!!
-
"The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT," wrote the plaintiffs' attorneys. "Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question."
‘Hallucination’ is actually the AI term of art for this, not a phrase coined by the attorneys.
-
GIGO
AI hallucination = made-up bullsh*t based on biases built into it by corrupt, biased humans, aka leftists.
-
I've only read one thing on hallucinations. They're complicated, caused in part by scarce information in an area, I gather. Some AI models make up facts on occasion. That's not bias; it's more like a brain fart.
-
Somebody programs the base programming... It didn't fricken achieve sentience, man!
And wasn't it a chatbot that told a guy just last week that he was useless and should die?
I'm not giving any passes...
Brainfart my ass!
-
I am not saying there is no bias. In AI, IMO, it comes through the training material. You can ask questions about whites, blacks, and Jews, and it will block some and refuse to answer.
From memory, using AI to transcribe voice can sometimes result in the AI making things up to fill in gaps. The word "hallucination" has a special meaning here.
If you can't trust AI, who can you trust? /s Here's what I got when I asked ChatGPT:
An AI LLM (Large Language Model) hallucination refers to a situation where the model generates information that is false, inaccurate, or nonsensical, but it is presented confidently as if it were true. These hallucinations can occur in various ways, such as:
Fabricated Facts: The AI might produce details, statistics, or even names that don't exist or aren't backed by real data, but it presents them as legitimate.
Contradictions: It may give conflicting or illogical statements within a single response, which aren't based on reality.
Out-of-Context Responses: The AI can sometimes produce answers that seem completely unrelated to the input question, or include irrelevant details that are invented.
These hallucinations happen because LLMs, like GPT, are trained to predict the next word in a sequence based on patterns they’ve learned from vast amounts of text data. However, they don’t have true understanding or access to up-to-date or specific knowledge beyond what they were trained on. When they “hallucinate,” they are essentially generating text based on patterns without verifying facts or connecting them to a true context.
It's important to note that hallucinations are a known limitation of AI language models and are an active area of research in AI development to improve accuracy and reliability.
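That "predict the next word" point is the whole mechanism, and you can see it with a toy. Here's a minimal sketch in Python (my own illustration, not ChatGPT's actual code or any real model): a tiny bigram model that only knows which words tend to follow which, and can therefore stitch together a fluent, citation-shaped sentence that describes no real publication.

```python
# Toy sketch of how a language model generates text: repeatedly pick a
# statistically plausible next word. No step ever checks truth.
import random

# Toy "training data": the model's only knowledge is which words follow which.
corpus = (
    "the study was published in the journal of information technology "
    "the study found deepfakes influence political attitudes "
    "the journal of politics published the study in 2023"
).split()

# Build a bigram table: for each word, every word observed to follow it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(prompt, length=12):
    """Extend the prompt by sampling a statistically likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no learned continuation for this word
        words.append(random.choice(candidates))  # frequency-weighted pick
    return " ".join(words)

print(generate("the study"))
# One possible run: "the study was published in the journal of politics
# published the study in 2023" -- fluent fragments from different sources
# mashed into a citation-shaped claim that describes nothing real.
```

Scale that up by a few billion parameters and you get much more fluent prose with the same blind spot: nothing in the generation loop ever asks whether the output is true.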