AI

  • Evolution

    Life forms are not the only things that undergo evolution, i.e. a change in characteristics over generations and/or time.

    This is also true of technology: one only has to compare one’s present operating system and its applications with what was in use twenty years ago, for example.

    That brings us neatly to the graphic below, acquired from social media, which sarcastically tracks the progress of the trash icon on Microsoft’s Windows desktop over the decades.

    Evolution of the trash icon through various Windows releases and ending with the Copilot AI icon

    For those unfamiliar with the final icon on the bottom right, it’s that of MS’ Copilot generative artificial intelligence chatbot, which has not been without criticism. Indeed, one Microsoft Tech Community post has even called the software a “frustrating flop in AI-powered productivity”.

    More specifically, the post states:

    Here’s the problem: when you ask Copilot to alter a document, modify an Excel file, or adjust a PowerPoint presentation, it’s practically useless. Instead of performing the tasks as requested, it often leaves you hanging with vague suggestions or instructions. Users don’t want to be told how to perform a task—they want it done. This is what an AI assistant should do: execute commands efficiently, not just offer advice.

    If the MS tech enthusiasts are less than impressed, your ‘umble scribe need say no more!

  • Australia initiates consumer protection proceedings against Microsoft

    Microsoft 365 logo
    The Australian Competition & Consumer Commission (ACCC), which is responsible for ensuring individuals and businesses comply with Australian competition and consumer protection laws, has announced that it has initiated proceedings in the Federal Court against Microsoft Australia and its parent company Microsoft Corporation. The proceedings concern the alleged misleading of approximately 2.7 million Australian customers when giving notice of subscription options and price increases after Microsoft integrated its Copilot AI assistant into Microsoft 365 subscription plans.

    The ACCC alleges that since 31st October 2024, Microsoft has told Microsoft 365 Personal and Family plan subscribers with auto-renewal enabled that to maintain their subscription, they must accept the integration of Copilot and pay higher prices for their plan, or else cancel their subscriptions.

    Copilot logo
    Following the integration of Copilot, the annual subscription price of the Microsoft 365 Personal plan increased by an eye-watering 45% from $109 to $159. In contrast, the annual subscription price for the Microsoft 365 Family plan increased by a mere 29% from $139 to $179.

    Microsoft’s communications with subscribers did not refer to the existence of the “Classic” plans without Copilot integration and the only way subscribers could access them was to begin the process of cancelling their subscriptions. This involved accessing the subscriptions section of their Microsoft account and selecting “Cancel subscription”. It was only on the following page that subscribers were given the option to move to the Classic plan instead.

    MS cancel subscription screenshot
    Click on the image for the full-sized version

    Maximum penalties

    Should Microsoft lose the case, the maximum penalty for each breach of the Australian Consumer Law is the greater of the following three options:

    • $50 million;
    • three times the total value of the benefits that have been obtained and are reasonably attributable to the breach; or
    • if the total value of the benefits cannot be determined, 30 per cent of the corporation’s adjusted turnover during the breach turnover period.

    For Microsoft Australia, not to mention its parent company, the emphasis in the word consumer is firmly on its first three-letter syllable.

  • Grammer, AI style

    From your ‘umble scribe’s social media timeline.

    Social media post by @prettybbuckley reading well no over an image featuring the text but truly wasn’t sure how., which Grammarly AI has suggested should be corrected to was trulyn't [sic]

    We all occasionally need help with English grammar, even we pensioners who have spent decades working as linguists, but the above ‘suggestion’ from Grammarly could be diplomatically described as unhelpful.

    According to Wikipedia, “Grammarly is an American English language writing assistant software tool. It reviews the spelling, grammar, and tone of a piece of writing”, as well as being a tool for detecting plagiarism.

    On its own website, Grammarly is described as ‘Grammarly, the trusted AI assistant for everyday communication’.

    On the basis of the above howler, your correspondent would not trust it to write out the alphabet in the correct sequence.

  • New abbreviation in the wild

    Ever since it was first coined in 2002, the online world has benefited from the creation of the slang abbreviation TL;DR, i.e. too long; didn’t read, indicating that a body of text is not worth one’s while to read.

    The abbreviation is used in both upper and lower case versions.

    As the Wikipedia entry states, ‘TL;DR is commonly used in online discussions, comment sections, and social media posts. Writers often employ the acronym to summarize a preceding lengthy text, allowing readers who prefer brevity to quickly understand the main point. Conversely, readers might use TL;DR as a critique, signaling that a text was excessively verbose or lacked clarity’.

    Official recognition of the abbreviation came some 11 years after its first appearance, as Wikipedia explains.

    In August 2013, TL;DR was officially added to Oxford Dictionaries Online, recognizing its widespread use in digital communications. Merriam-Webster has also documented the term, noting its establishment as part of the modern digital lexicon.

    Human brain made out of electrical circuits denoting artificial intelligence. Image courtesy of Wikimedia Commons.
    TL;DR has now been joined by another new slang abbreviation, AI:DR. This denotes that the text in question has been produced by generative AI, that environmentally costly means of producing low-quality output (affectionately known as slop. Ed.) without human intervention.

    There is already speculation that AI is being deployed in regional newspaper offices in titles owned by Reach plc, but that would prove difficult to verify as the quality of their content starts from a very low base anyway. 😀

    Your correspondent trusts that readers and the wider public will not be shy in using this new abbreviation accordingly.

  • Reform VE Day cock-up

    Politicians, and thus by implication the parties to which they owe their allegiance, have a relationship with the truth that can at best be diplomatically described as problematic and at worst as non-existent.

    Yesterday’s VE Day 80th anniversary provided ample proof of both.

    Perhaps the most egregious example of being less than truthful was posted on the X/Twitter account of Reform UK Ltd., a limited company masquerading as a political party and the latest incarnation of the cult dedicated to the promotion and adoration of a perennial charlatan commonly known as Nigel Paul Farage. Farage also just happens to be a great pal of the disgraced former 45th president and current disgraceful 47th president of the United States of America – insurrectionist, convicted felon, adjudicated sexual predator, business fraudster, congenital liar and golf cheat – commonly known as Donald John Trump, who is on a one-man mission to Make America Grate Again (or something similar. Ed.). Whether Farage’s adoration is reciprocated in equal measure is not known, given The Felon’s disposable relationships with the rest of the human race.

    What the Farage Fan Club posted on X/Twitter is shown below.

    Image from Reform UK VE Day post with Spitfire on right and HMS Vanguard on the left.
    There’ll be bullshit over the white cliffs of Dover…

    So far, so patriotic at first casual glance – the white cliffs, a union flag, Churchill at Buckingham Palace, a Royal Navy ship in glorious monochrome; and any reference to the Brits in WW2 would be incomplete without a Spitfire.

    However, this impression is debunked the closer one looks at the image. We’ll come back to the Spitfire, but first let’s look at that naval vessel used by Reform’s graphics whizz-kids. A special research technique otherwise known as using a search engine for five minutes reveals it is in fact HMS Vanguard, the last battleship built for the Royal Navy. Although her construction started during the war, she did not enter naval service until 1946, i.e. after VE Day and even after VJ Day, which marked the final official end of hostilities.

    The vessel’s identity can be easily confirmed by looking at one of the photographs used by Wikipedia; the motor launch can be seen in both the image on Wikipedia and in the much edited image used by Reform.

    HMS Vanguard. Image courtesy of Wikimedia Commons.

    Now let’s return to that Spitfire. An image search for a Spitfire carrying the squadron code AI yields not a single result, and your ‘umble scribe can find no reference anywhere to an actual squadron with that identification. Furthermore, the aircraft itself has a very glossy appearance, a feature shared by all the Spitfire images produced by generative AI that came up in your correspondent’s search. The conclusion your correspondent has reached is that this Spitfire exists solely in digital form and has never actually flown, let alone seen active service. This is very reminiscent of the dishonesty shown by the equally right-wing British National Party, which attracted much derision in 2009 when it used a Spitfire flown by a Polish squadron in a supposedly patriotic poster.

    A final reminder to Farage and his fellow travellers: your side surrendered unconditionally on 8th May 1945.

  • Generative AI art – the ultimate comment

    The photograph below showed up in your ‘umble scribe’s social media feed today and is the best comment he has seen to date in respect of the so-called “AI slop” (or just plain “slop”. Ed.) produced by generative AI engines.

    Message reads Say no to generative AI art. Buy art from a real degenerate.

    However, your ‘umble scribe is not the only person who questions the utility of generative AI, let alone its accuracy and/or faithfulness. Here’s the historian Dan Snow of History Hit clearly suffering from the output of generative AI in respect of great events in world history. Whoever would have guessed the USA had ships in the thick of the Battle of Trafalgar? :-D

  • OpenAI, an irony-free company

    AI, we keep being told, is the next big thing in the wonderful world of information technology. So far most AIs out in the wild have been developed at great expense and require vast amounts of electricity to work.

    Until now.

    DeepSeek logo
    In the last week or so the AI world has been shaken by the latest version of DeepSeek, an AI developed by the Chinese.

    The latest version of DeepSeek (R1) provides responses comparable to those of other contemporary LLMs, such as OpenAI’s GPT-4o and o1, despite being trained at a significantly lower cost – a stated US$6 mn. compared with $100 mn. for OpenAI’s GPT-4 in 2023. Furthermore, DeepSeek requires only one-tenth of the computing power of a comparable LLM. This news caused a 17% drop in the share price of Nvidia, the main supplier of AI hardware.

    However, DeepSeek is not without its limitations. As The Guardian found out, the DeepSeek chatbot becomes very taciturn and tongue-tied when asked questions which the Chinese government finds sensitive, to which the AI assistant responds: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

    In addition, DeepSeek and other Chinese generative AI must not contain content that violates the country’s “core socialist values”, that “incites to subvert state power and overthrow the socialist system” or “endangers national security and interests and damages the national image”.

    Besides its reluctance to answer questions the Chinese government doesn’t like, there’s another problem for DeepSeek – plagiarism.

    OpenAI logo
    The BBC reports that OpenAI, the maker of ChatGPT, has accused DeepSeek and others of using its work to make rapid advances in developing their own AI tools.

    The fact that OpenAI is accusing others of plagiarising its work shows the company understands neither irony nor hypocrisy, as its own LLM has been trained to some extent on material that infringes others’ copyright. The use of copyrighted materials for training LLMs is a topic that has also exercised German-speaking literary translators (posts passim).

    Some companies clearly think ethics is a county with a speech defect in south-east England and that all is fair not just in love and war, but in business too.

  • Torvalds ignores AI hype

    Linus Torvalds headshot
    Linus Torvalds, the creator and lead developer of the Linux kernel, has been speaking about Artificial Intelligence, according to The Register; and he’s not impressed by what he has witnessed to date.

    Speaking at the Open Source Summit in Vienna last month, Torvalds was asked for his views on modern technologies, specifically Generative Artificial Intelligence, usually abbreviated to GenAI.

    His reply included the following remarks:

    “I think AI is really interesting and I think it is going to change the world and at the same time I hate the hype cycle so much that I really don’t want to go there, so my approach to AI right now is I will basically ignore it.

    I think the whole tech industry around AI is in a very bad position and it’s 90 percent marketing and ten percent reality and in five years things will change and at that point we’ll see what of the AI is getting used for real workloads.”

    His remarks about the hype cycle are particularly relevant. Those with very long memories will remember the dot-com bubble of the late 1990s and early 2000s, while those with a less broad sweep of time may recall the more recent episode of overwhelming enthusiasm generated by the marketing of so-called cloud computing, about which the FSFE was particularly blunt in its opinion (posts passim) – just other people’s computers.

  • US firm fined by Dutch for illegal facial recognition data gathering

    Autoriteit Persoonsgegevens logo
    The Dutch Autoriteit Persoonsgegevens (Personal Data Protection Authority) has announced today that it has imposed a fine of €30.5 mn. on the US company Clearview AI, as well as a non-compliance penalty in excess of €5 mn.

    Stylised facial recognition
    Clearview is an American company offering facial recognition services which has, inter alia, built up an illegal database containing billions of photos of faces, including those of Dutch citizens. Furthermore, the authority has warned that using Clearview’s services is also prohibited.

    Clearview offers facial recognition services to intelligence and investigative services. Moreover, Clearview customers can provide camera images to find out the identity of people shown in the images. To this end, Clearview has a database with more than 30 billion photos of people, which it has scraped automatically from the internet and then converted into a unique biometric code per face, all without the knowledge and consent of its victims.

    According to the authority’s chair Aleid Wolfsen, “Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world. If there is a photo of you on the internet – and doesn’t that apply to all of us? – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China. This really shouldn’t go any further. We have to draw a very clear line at incorrect use of this sort of technology.”

    Clearview says that it provides services to intelligence and investigative services outside the European Union (EU) only.

    Clearview’s services illegal and in breach of the GDPR

    Clearview has seriously violated the EU’s privacy law, the General Data Protection Regulation (GDPR), on several points: it should never have built its database of photos, the unique biometric codes and the other information linked to them, and it is insufficiently transparent. This especially applies to the codes. Like fingerprints, these are biometric data, and collecting and using them is prohibited. There are some statutory exceptions to this prohibition, but Clearview cannot rely on them.

    Clearview is an American company without an established presence in Europe. Other data protection authorities have already fined Clearview on various earlier occasions, but the company has not changed its conduct. For this reason the Dutch regulator is investigating ways to ensure the violations stop, including whether the company’s directors can be held personally liable for data protection violations.

    Wolfsen: ‘Such [a] company cannot continue to violate the rights of Europeans and get away with it. Certainly not in this serious manner and on this massive scale. We are now going to investigate if we can hold the management of the company personally liable and fine them for directing those violations. That liability already exists if directors know that the GDPR is being violated, have the authority to stop that, but omit to do so, and in this way consciously accept those violations.’

    Clearview has not objected to the decision and is therefore unable to appeal against the fine.

  • German literary translators want strict AI regulation

    German news site heise reports that German-speaking literary translators’ associations are demanding stricter regulation of Artificial Intelligence (usually abbreviated to AI) due to its threats to art and literature.

    “Art and democracy too are being threatened.” So say the German-speaking literary translators’ associations of Germany, Austria and Switzerland in respect of the increasing “automation of intellectual work and human speech”. They have therefore collaborated on an open letter (PDF) in which they demand strict regulation of AI.

    Artificial Intelligence graphic combining a human brain schematic with a circuit board.
    Image courtesy of Wikimedia Commons.

    Regulation must ensure that the functionality of generative AI and its training data are disclosed. Furthermore, AI providers must clearly state which copyrighted works they have used for training, and no works should be used for this purpose against the wishes of copyright holders. In addition, the open letter states that copyright holders should be paid if AI is trained using their works and that a labelling requirement should be introduced for 100% AI content.

    “Human language is being simulated now”

    “A translation is the result of an individual interaction with an original work”, the translators write. This interaction must take place responsibly. How a sentence is constructed and what attention is focused on guides the inner experience of readers. The language skills needed for this are developed and honed in the active writing process. “The creation of a new literary text in another language makes the translator a copyright holder of a new work.”

    However, text-generating AI can only simulate human speech, according to the translators. “They have neither thoughts, emotions nor aesthetic sense, know no truth, have no knowledge of the world and no reasons for translation decisions.” Due to their design, language simulations are often illogical and full of gaps; they contain substitute terms and statements that are not always immediately recognised as incorrect.

    “AI multiplies prejudices”

    The translators continue by saying that when AI products are advertised, it is suggested that the AI can work independently, “understand” and “learn”. “This means the huge amounts of human work on which the supposedly ‘intelligent’ products are based are kept secret.” Millions of copyrighted works are ‘scraped’ from illegally established internet libraries to create language bots.

    These and other arguments are combined in a “Manifesto for Human Language”. In it the translators write as follows: “Bot language only ever reproduces the status quo. It multiplies prejudices, inhibits creativity, the dynamic development of languages and the acquisition of language skills.” Text-generating AI is attempting to make human and machine language indistinguishable. It is not designed as a tool, but as a replacement for human skill.

    However, AI is not intelligence at all, since intelligence also includes emotional, moral, social and aesthetic intelligence, practical sense and experience, which arise from physicality and action. “In this respect, the technical development of language bots cannot be termed ‘progress’”, the open letter states.
