Arvind's Newsletter

Issue #730

1. When the Reliance group enters a new industry it creates tremors, as it invariably looks to disrupt incumbent players and gain market leadership. In this context its entry into financial services with Jio Financial Services is eagerly anticipated (though probably with more than a tinge of concern by incumbents such as Bajaj Finance, as well as large private-sector banks like ICICI Bank and HDFC Bank). Let's watch this space: the conglomerate is planning a listing as early as October, according to a report in The Economic Times.

“Reliance Industries, controlled by billionaire Mukesh Ambani, is in talks with Indian regulators to secure the necessary approvals for the market debut of Jio Financial Services Ltd. in Mumbai. The parent is holding a meeting of shareholders and creditors on May 2 to vote on the plan to spin off and list the unit, according to an exchange filing in March.

Jio Financial “will be a technology-led business, delivering financial products digitally by leveraging the nationwide omni-channel presence of Reliance’s consumer businesses,” Ambani said in a statement last year while announcing the spinoff.

Reliance appointed K. V. Kamath as the non-executive chairman of Jio Financial in November. It has tapped Hitesh Sethia, a top executive from McLaren Strategic Ventures, as the unit’s chief executive officer, Bloomberg News reported last month.”

2. ASML, Europe’s Most Valuable Tech Firm, Is at the Heart of the US-China Chip War, reports Bloomberg. The low-profile Dutch firm has become crucial to a half-trillion-dollar global industry. Excerpts from the report:

“Over four decades, ASML has gone from a bit player competing with the likes of Nikon, Canon and Ultratech to the world's only maker of very high-end semiconductor lithography equipment. Its ascent has made it Europe's most valuable technology company, with a market capitalisation of over $247 billion -- more than twice that of its customer Intel. In an industry where devices typically cost $10 million, ASML commands about $180 million for its current top-end machine. And although the chip market has softened recently, ASML is still growing and its long-term outlook seems intact, thanks to the insatiable demand for computing power.”

"This is a company that the world can't exist without," said Jon Bathgate, a fund manager at NZS Capital in Denver, which has about $2 billion under management, with ASML as one of its biggest holdings. "They've got a 20-year head start... Investors have clearly realised how important ASML is as a company and how difficult it would be to replicate. It's a natural monopoly with secular growth winds. That's unique." As chips become for geopolitics in the 21st century what oil was in the last one, ASML's singular success has thrust it squarely in the crosshairs of the intensifying tensions between the US and China. With the US focused on the strategic importance of semiconductors, Presidents Donald Trump and Joe Biden have done everything to ensure that China is a couple of generations behind in chips. No company is more critical to that effort than ASML.

3. Witness the cool musical game played by Inuit women for centuries

In traditional katajjaq, also known as Inuit throat singing, two women stand face to face and perform a duet that doubles as something of a musical battle. Chanting in rhythm, participants attempt to outlast one another, each waiting for any crack in the pace of her opponent – whether in the form of loss of breath, fatigue or laughter. Dating back many hundreds of years, the practice was originally a pastime for women while men were away on hunts.

Throat Singing in Kangirsuk, which screened at the 2019 Sundance Film Festival, is part of a modern revival of katajjaq. Produced by the Canada-based First Nations film initiative Wapikoni Mobile, the short features Manon Chamberland and Eva Kaukai, two teenage throat singers from the remote Inuit village of Kangirsuk in northern Québec, facing off in a friendly katajjaq duel. It is amazing.

4. Apple has released Make Something Wonderful: Steve Jobs in his own words, a free collection of Steve Jobs' speeches, interviews, and emails. It is a must-read.

5. Yuval Noah Harari argues that AI has hacked the operating system of human civilisation in his thought-provoking opinion piece for The Economist. Storytelling computers will change the course of human history, says the historian and philosopher. Worth reading in full.

“Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.

Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.

In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.

In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.

The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.

Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough. The job of historians and philosophers like myself is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools.

Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.

We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.

This text has been generated by a human.

Or has it?

Yuval Noah Harari is a historian, philosopher and author of “Sapiens”, “Homo Deus” and the children’s series “Unstoppable Us”. He is a lecturer in the Hebrew University of Jerusalem’s history department and co-founder of Sapienship, a social-impact company.

_______________