FWIW, I don't think Judge Kemp is a transactivist; I think he is an old school sexist, or MCP as we used to call them. Amazing how such men have so much in common with transactivists, isn't it?! Replete also, in the passage about lipstick and speaking in modulated feminine tones signalling woman, with that faint whiff of gampiness.
The "fragrance" of Mary Archer comes to mind.
Anyone who cares about the rule of law should be deeply troubled by the use of AI in producing judgments. If this had been a precedent-setting court, the implications would be serious; and if it were a lower-profile case, it’s highly unlikely anybody would even have noticed.
This reminds me of that old, in Internet time, "quote".
"Don't believe everything you read on the Internet" - Benjamin Franklin
Your talk of drafting opinions for judges (this is common in American courts, too) reminds me that in Cyril Hare’s 1942 murder mystery TRAGEDY AT LAW, one character is a male judge who has married a woman with a brilliant legal mind, and she writes his opinions as a barrister and his judgments as a judge, so that “On one occasion, when one of these was the subject of an appeal, the sotto voce question of one Lord Justice to his brother, "Is this one of Hilda's?" had unluckily reached some quick ears in counsels' seats.”
It is very disappointing, Helen! I read quite a few of the research papers on 'ai' published this year, and have tried out Google Gemini Pro (much better than Flash, the free option currently ruining Google Search results) as it is part of my paid subscription to Gmail/Google Drive. I personally rate it as the most reliable 'ai' for research work, slightly more reliable than Grok (which I haven't personally used, as I get a lot more from Google for my subscription than I would get on X).
Google never bought into the whole 'ai consciousness' debacle, and has continually criticised the 'deep minds' of all other computer science companies. Their policy has always been 'tool first', and Gemini Pro is nowhere near as sycophantic or biased as other LLMs like ChatGPT. It does make some category errors; however, you just need to use clear Boolean search terms in your initial prompt.
The main flaw in Google Gemini at this current stage is its preference for American English. Even with explicit prompting to follow British English conventions, it will consistently fail at these tasks until Google sets up a parallel 'British English' module in its post-training tweaks. I imagine it will likely end up as a separate model that users can pick.
The main issue with 'ai' remains the massive amounts of energy wasted by Western 'ai' systems. Chinese 'ai' doesn't have this problem as it uses analogue chips instead of digital chips. Texas Instruments has been developing analogue chips as well, but they don't seem to be penetrating the US market...
Since I posted this reply, my access to Substack over Firefox has been a bit patchy... I'm going to switch over to Brave and see if that solves the problem... Substack is still working fine on my phone...
Seems to be an Internet service provider issue... My connection on my phone is still working but I can't access Substack from my computer right now!
Troubleshooting complete! I am now back on my Windows 10 operating system and Substack is working pretty well (at least, as well as it ever has, it is still super buggy) again!
Analogue chips????????
Yeah, check out Dr Warwick Powell of Australia for more info: https://warwickpowell.substack.com/p/the-tortoise-and-the-hare-in-ai
You do realise that he is talking about something quite different to AI in relation to analogue chips?
AI, including Chinese AI, uses "digital" chips. All of it, all of the time, all over the place. His point is that China has an edge in real-world interaction and that this (amongst other things) compensates for its lag in actual AI.
But I think he is pretty far off track on that point (I have not evaluated his other points as closely, but I think the cost one is better). Real-world interactions generally require 1) far greater compression (which AFAIK is a purely "digital" thing, always and everywhere), and, contrary to his argument, 2) processing speed (which is partly a function of 1 but also an independent variable).
I do, indeed, realise that. For me as an artist who doesn't get that much out of 'ai', the problem with the current system in America is that it is obviously based on massive, and increasingly provable, intellectual property theft, as well as being horribly energy inefficient.
Now, of course, China likely has the same problem with intellectual property theft, but, unlike America, they are making real strides in improving energy efficiency. So, again, I have read a lot about 'ai' even though I don't really care for it or use it very much myself.
The American systems are currently horribly unwieldy, use way too much power, and will never be free of 'confabulations' as no system ever will...
Personally, I would like to see all organisations in the world making their own 'ai models' based on their own legally acquired data sets. They could then build their own systems using whatever chip systems they can afford, instead of paying off intellectual property thieves. If everyone made their own 'ai' from their legal corporate property, I think the world would be a much happier place.
I think you should vary your sources. As it is, I generally disagree on the IP theft point, albeit with reservations for some cases. One example: the Chinese models are literally trained on the outputs of the American models!
I disagree about Chinese energy efficiency (which is _not_ about cost).
And I disagree with the last two paragraphs in their entirety. Most fundamentally, anyone who wants to can train an AI on anything they like, but no proprietary data set has enough data, and the more you pay for the chips, the faster you can do it.
Fair enough, we are all entitled to our own opinions based on our own research... I don't really have a horse in the 'ai race' so my research has mostly focused on the best possible implementations of the technology in my home nation of New Zealand.
Horrifying. And it's going to get much, much worse. At the highest and lowest levels.
I'm not sanguine that we'll have - forgive me - "access to reality" at all within five years. We won't know what's real, and worse, we won't *know how to know*.
AI juries will be next, assuming juries still exist. See my thoughts at the end of this article.
https://www.carolinajournal.com/unc-law-holds-mock-ai-jury/
It's the misnaming of the group 'NotAllGays' as 'NotForGays' which I actually find most alarming. That feels like the kind of in-house joke a transactivist might have inserted to smugly and righteously damn the group, with the intention of removing it before publication... and they forgot. It feels like a glimpse of the mindset of those who prepared the judgment, because someone made that change consciously.
I'm aware of journalists doing similar things for people they dislike, and it accidentally making it through to publication.
Most of the AI training, unfortunately, happened during the height of "woke", or pomo-addlement as I prefer to call it, and I fear it's going to take an awful long time to get the resulting homophobia and sexism out.
Much of the text reads as if it has been dictated into a computer, as it has the texture of the spoken rather than written word. It helps explain the prolixity and redundancy. A computer could easily mistake "NotAllGays" for "NotForGays" particularly if the speech was very rapid or indistinctly articulated.
As an editor, I have my own nightmare stories about AI ….
Out of curiosity, I searched your ‘hallucinated quotation’ (you probably did this, too) and AI revealed it to be an almost exact rendering of a quotation by John Adams, taken from a letter to his wife. Who knows? I didn’t look any further. Where AI is concerned, my position is ‘life’s too short….’
My main concern was establishing whether Jonathan Swift said it anywhere, even if not in the book I was reviewing. I’d already inserted the proper quotation from the book into my piece and filed by that point. I’m currently reasonably satisfied it’s a hallucinated quotation—at least as far as Swift is concerned.
Yes, of course. I was just bemused that, in your search, it (or something similar) was attributed to Jonathan Swift and, in mine, to John Adams. Maybe any Joe, John, Jonathan or Joseph will do, when it comes to AI :)
Blackstone in one word:
Property.
It therefore follows that the only questions ever are what or who is property, and who is the owner.
If you answer that, you have the truth regardless of law. Law is mere words.
Property endures.
Property is real.
It is frankly incomprehensible to me that a judge would do this. I suppose I am a control freak, and I would very rarely use AI for anything. If I were a judge, I would never use it for a judgment, particularly not a high profile decision that people might rake over. I just can’t even fathom this. To use hallucinated quotes is just extraordinary. I am deeply concerned by this - as another commenter has noted this has grave ramifications for the rule of law and confidence in the judiciary.
I think people commenting here may be missing the point: this was _not_ too much AI but too much Judge Kemp.
A series of AI agents, properly configured to do basic tasks like double-check quotes, would be quite unlikely to make these mistakes.
At the risk of giving many here conniptions, I think we should have an AI "assistant judge", perhaps serving a role similar to civil law Advocate Generals, on at least intermediate appellate courts.
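For what it's worth, the quote-checking step in such a pipeline needn't itself be an AI at all. A sketch of the idea, with hypothetical function names and a normalisation rule of my own choosing: a verifier simply tests whether a quotation actually appears verbatim in the cited source text, so a hallucinated quote fails mechanically.

```python
import re

def normalise(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    trivial formatting differences don't cause false negatives."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def quote_appears_in(quote: str, source_text: str) -> bool:
    """Return True only if the quotation occurs verbatim
    (after normalisation) in the supplied source text."""
    return normalise(quote) in normalise(source_text)

# A paraphrase (or a hallucination) fails the verbatim check.
source = 'He wrote, "Facts are stubborn things."'
print(quote_appears_in("Facts are stubborn things.", source))   # True
print(quote_appears_in("Facts are obstinate things.", source))  # False
```

An agent that runs something like this against the actual case reports before a judgment is signed off would catch invented quotations regardless of how plausible they sound.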
This appears to be at least the third known case of 'AI' content appearing in a British court, but the first where the judge either didn't notice or didn't call it out. I doubt a judge who takes longhand notes would use AI to write their judgement. I wonder if an assistant was asked to summarise relevant case law, and used a large language model for the task.
As a lay person supporting a defendant, AI has been a huge help in understanding the convoluted process of the law. Without it I’d be floundering!
I think I like your writing -- style and content -- but I have to read more to make sure, because I've done a few "down on drivel masters" Substacks and I don't want to be suckered in again. Although at this point I could safely say, so far, I really like your drivel. (You're forcing me to distinguish between wordiness and drivelness, something I don't really have time to do, but ... we'll see.)