Worth Reading – Deepfakes: A Problem In Search Of A Problem?
If you ask a group of legal professionals if they’ve seen deepfakes be an issue in court, and they all say no, is there no issue, or did they fall for the fakes?
No one gave them permission to create an AI character based on themselves. They just fed some of that person’s writing into the LLM and let it do its thing. There was, apparently, no thought given to whether having a Stephen King-based analyzer would violate the use of his name or if building the LLM would violate his copyright.
So I ask again: why are we listening to the people who have the most to gain by getting everyone to buy AI tools, instead of making our own decisions about how quickly we should move forward with AI? Governance exists to slow things down – forcing people to think before they run off and do something disastrous.
Should we design better governance to address rapidly changing technology? Absolutely. Should we let Big Tech determine how we redesign it? I don’t think so.
I can see why publicly proclaiming that you’re being innovative with new technology to reduce your headcount is a better alternative to admitting your firm isn’t doing well or to publicly blaming US policy. It might not be the whole truth, though. The official explanation for large layoffs is usually questionable anyway; AI has just given firms another way to stretch it. In this case, Baker McKenzie may genuinely see a path in which new technology reduces the need for 10% of its staff. They may also be using that as cover for failure.
Either way, 700 people are out of a job, and it’s become so routine that I fear it no longer raises an eyebrow. That’s the truly scary part.
I think we can agree that granting someone full access to the open internet without education or tools to protect themselves would be dangerous, no?
OK, but what is a general-purpose LLM but a collection of everything that the model could ingest, without rules about what was safe and what wasn’t?
Yet we expect people to use them, and we aren’t making any effort to make them safer.
The article above, however, makes it clear that our brains take shortcuts to make quick decisions. Because of those shortcuts, the number of times we see a claim can influence whether we treat it as true or false. They say familiarity breeds contempt when it comes to other people, but maybe familiarity with shared information breeds acceptance, regardless of the truth.
That is frightening in a world where tens of thousands of posts can be created in minutes.