With you 100%.
The lack of liability for middleman platforms is at the heart of the problem. If Facebook were liable for the slanderous or dangerous posts its algorithm boosts, and if Amazon were liable for fraudulent products, it would go a long way toward making our world less shitty.
As for your friend, it wouldn’t seem hard to write an AI agent to auto-submit takedown requests.
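A minimal sketch of what that agent could look like, assuming the marketplace exposed simple search and report endpoints. The URLs and JSON fields below are invented for illustration; in practice each platform has its own takedown workflow you would have to script against:

```python
# Hypothetical sketch: scan a marketplace for suspected knockoffs of your
# titles and file a takedown report for each. The endpoints and field
# names are placeholders, not a real platform API.
import requests

SEARCH_URL = "https://marketplace.example/api/search"  # hypothetical endpoint
REPORT_URL = "https://marketplace.example/api/report"  # hypothetical endpoint

MY_AUTHOR = "Jane Q. Author"          # the real rights holder
MY_TITLES = ["My Actual Book Title"]  # titles to watch for


def looks_like_knockoff(listing: dict) -> bool:
    """Crude heuristic: near-identical title credited to a different author."""
    title_hit = any(t.lower() in listing["title"].lower() for t in MY_TITLES)
    return title_hit and listing["author"] != MY_AUTHOR


def report_knockoffs() -> None:
    for title in MY_TITLES:
        resp = requests.get(SEARCH_URL, params={"q": title}, timeout=30)
        resp.raise_for_status()
        for listing in resp.json()["results"]:
            if looks_like_knockoff(listing):
                # File a takedown report for the suspect listing.
                requests.post(
                    REPORT_URL,
                    json={
                        "listing_id": listing["id"],
                        "reason": "suspected AI-generated knockoff",
                        "rights_holder": MY_AUTHOR,
                    },
                    timeout=30,
                ).raise_for_status()
                print(f"Reported {listing['title']!r} by {listing['author']!r}")


if __name__ == "__main__":
    report_knockoffs()
```

Run on a schedule (a daily cron job, say), something like this would at least take the reporting burden off the author, though the platform still has to actually act on the reports.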
John Oliver has been critical of AI on Last Week Tonight. He has commented that it is "stupid in ways we cannot predict." He has also talked about "AI slop" flooding the internet as the new spam, and noted that much of that content comes from countries where small payments go further.
On the subject of Axon Enterprise, Oliver's previous commentary does not put the company in a good light. The issues include the myth that the Taser is non-lethal, false claims about its effectiveness rates, and reports of a cult-like atmosphere in which employees felt pressured to be tased and tattooed with company logos. He also expressed concern about Axon's growing influence over law enforcement policy.
The country needs a comprehensive federal law on AI transparency. California has an AI transparency act coming into effect in January, and Colorado's follows in February. Many more bills are in the works, covering everything from deepfakes and synthetic media to consumer protection and transparency to workplace surveillance and management. Unfortunately, a patchwork of state laws will leave ways for those laws to be circumvented.
Until a coherent federal framework is in place, platforms and tech companies will treat AI harms as an acceptable cost of doing business. They will do only the minimum necessary to comply with the patchwork of regulations instead of investing in comprehensive solutions. That leaves it to the general public to pressure both the platforms and the government to act.
We are continuing deeper into the cave of lies, where everything is fake. (Yes, an intentional reference to the recent reading of Plato.)
Buyer beware.
Hone those critical thinking skills.
You're gonna need them... in spades!
The very first chess book I bought, some 24 years ago, was a very cheap paperback by “Gary Kaspartov”. If you know the correct spelling of the GM’s name, you can see where this is going. I didn’t know better at the time.
The book was filled with made-up chess theory, most of it badly wrong. I was so embarrassed at having bought it without even noticing the misspelled name (or checking its contents) that I threw it in the garbage bin.
All of this is to say that this kind of low-quality knockoff was already common well before the age of AI. I can only assume it’s getting worse and worse now.
The AI book knockoffs remind me of straight-to-VHS classics like The Little Panda Fighter and Ratatoing, or the even better Asylum movies, such as Transmorphers, Pirates of Treasure Island, The Da Vinci Treasure, Snakes on a Train, and Atlantic Rim. Once my grandmother forgot her glasses and ended up buying The Age of the Hobbits for us ^^
Unplug it before it becomes self-aware. We’ve all seen Terminator enough times to know the path we’re on…
Wow. While fake books are nowhere near the societal problem that fake police reports are, still... as a self-publishing writer: ouch. I had no idea. Thank you for publishing this.
Measuring authenticity on the fly, with information flying by without the deep scrutiny needed to trust it enough to accept or deny it, perhaps gives a new meaning to the Lords of the Flies. (Pun intended.)
Trust, but verify.
Much of the discussion around AI reminds me of early print. Scribes were put out of work, spurious and illegal printings circulated, and established forms of authority were challenged. I don't think there's much chance that AI can be effectively regulated through a top-down approach, but perhaps one way of mitigating the corner-cutting is to establish AI literacy programs, much as we teach writing through a mandatory English curriculum.
This is a common experience, but the points I’m making here have very little to do with the usual anti-AI arguments. These are cases of people using generative AI to commit fraud or to produce unreliable documents that end up being used in legal proceedings.
Maybe I misunderstood, but I didn't take your post as an anti-AI argument. I'm assuming there's an implicit call to action in it (I may be wrong). If you want to address "Ezekiel Thompson" and Axon, or deal with things case by case or industry by industry, then it makes sense to focus on the fraudsters. But the corner-cutting you describe behind AI fraud and unreliable documents in legal proceedings seems to me an issue that can be resolved only through large-scale, institutionalized programs that change the way people think about AI.