Thursday, March 23, 2023

It's the end of objective facts

by Mark Sumner, Daily Kos Staff
Wednesday, March 22, 2023 at 5:00 PM EDT

[Image: An AI video surveillance camera recognizes people and displays their age and type of clothing.]

In 2016, Oxford Languages' word of the year was both absolutely definitive of the previous year and, unfortunately, all too appropriate for every year since:

Post-truth: an adjective defined as 'relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.'

The effects of this "post-truth" were vividly demonstrated at the end of that year, when thousands of Russian pitchbots, millions of conspiracy theorists, and hand-wringing over nonexistent crimes by Hillary Clinton in every media outlet from The New York Times to the Associated Press (with a big boost from Mr. James "Emotions Over Facts" Comey) handed the White House to Donald Trump. The erosion of objective fact was already scary heading into 2016, well before Kellyanne Conway uttered the immortal phrase "alternative facts." It became positively deadly heading into the pandemic, when people would indulge in everything from raiding veterinary shelves for worming paste to refusing a vaccine that cut the odds of death by a factor of 14. They're still doing it.
In fact, there now seems to be a compulsive effort to find a set of emotion-backed lies that will ward off any intrusion by facts. And now the new explosion in AI is here to help. Not help sort out the facts. Help make it even more difficult to see through the lies and distortions.

The explosion of AI over the last year has been centered on something called "large language models." Rather than neural networks laboriously taught a series of rules for dealing with a situation, these are programs that are fed large data sets from which they determine the rules in the form of a statistical model. They learn how to respond to the idea of a "cute kitten" or a "nuclear threat" in the same way they learn the rules of grammar: by looking at thousands, or millions, or thousands of millions, of examples.

One of the things these models have consistently learned is how to tell a convincing lie. For an example, look at this article from The Scholarly Kitchen, in which ChatGPT was asked about the connection between tweets and scientific citations.

Human: Do tweets increase citations to scientific articles?

ChatGPT: There is evidence to suggest that tweets can increase citations to scientific articles. Studies have shown that social media, specifically Twitter, can be an effective way to disseminate scientific research and increase its visibility among researchers and the general public. A study by the American Association for the Advancement of Science (AAAS) found that articles that were tweeted about had an average of 9% more citations than articles that were not tweeted about.

In this case, the person asking the question happened to be an expert who had done research in exactly this area. He not only knew the answer was incorrect; he was able to use his contacts to establish that there was no such study. But how many people would have known this?
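The "statistical model" idea described above can be sketched with a toy bigram model: count which word tends to follow which in a training corpus, then sample continuations in proportion to those counts. This is a deliberately minimal sketch (the corpus and all names here are invented for illustration); real large language models use neural networks with billions of parameters, but the learn-the-rules-from-examples principle is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up training corpus. A real model would see billions of words.
corpus = (
    "the model learns rules from data . "
    "the model learns grammar from examples . "
    "the model predicts the next word ."
).split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to the observed counts."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the".
rng = random.Random(0)
out = ["the"]
for _ in range(6):
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

Note that the generated text is fluent-looking because the statistics favor word pairs that actually occurred, yet nothing checks whether the sentence is true; the model only knows what tends to follow what. That gap between plausibility and truth is exactly the problem the article describes.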
How many would have accepted this at face value, given that ChatGPT didn't just make a claim but backed it with what it said was a study in perhaps the most reputable of peer-reviewed publications?

Examples like this have been repeated on topic after topic. In learning from the internet, these models have learned to make information appear authoritative, even when it's pulled from a digital arse. Now add this article from The Verge, and you get a glimpse of how bad this is going to get.

Right now, if you ask Microsoft's Bing chatbot if Google's Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

This is an AI citing a lie created by another AI, which was citing an article from another AI, based on something that the last AI was instructed to write based on a joke. If you went onto the internet right now, asked a question, and got an answer that included a citation, that included a citation, that included a citation … how far would you really try to unravel things to determine whether you were being told the truth?

Finally, consider that Google, Microsoft, and seemingly every other company on the planet is, at this moment, rushing to add these large-language-model-based AIs to everything. Every search engine. Every word processor. Every email app. Every slide deck, spreadsheet, and note taker. Every one of them is going to offer to put things together, provide a summary, add some context, write a few paragraphs … and at every level, they will wrap in another layer of obfuscation, false authority, and just plain-vanilla lies.
The surge of these AIs isn't just a threat to anyone who works with images, words, or numbers (which is, essentially, everyone). It's a loaded bomb ready to blow away our last concept of a shared reality in which there is an agreed-upon set of authoritative facts. And our grip on that reality was already slipping.
