Arvind's Newsletter - Weekend Edition

Issue #752

1. Nvidia, the leading chipmaker for AI, stuns markets and signals how artificial intelligence could reshape the technology sector.

Interest in AI drove shares in the chipmaker Nvidia up 24% yesterday, pushing it close to a $1 trillion market capitalisation. Nvidia is the main supplier of chips used by AI companies such as OpenAI.

Meanwhile, a new superbug-killing antibiotic was discovered using artificial intelligence. Abaucin destroys the bacterium Acinetobacter baumannii, which is resistant to most existing antibiotics and is designated a “critical” threat by the World Health Organization. The AI studied existing antibiotics, then was given a list of 6,000 other compounds and told to find ones that could attack A. baumannii. In 90 minutes it returned a shortlist, which scientists then tested in the lab. Scientific discovery is one of AI’s most exciting applications, a DeepMind researcher told Flagship recently.
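For readers curious about the mechanics, the underlying pattern is simple: train a model on compounds with known antibacterial activity, score a larger library, and send the top hits to the lab. The short Python sketch below illustrates only that general pattern; the fingerprint features, model choice and data are hypothetical stand-ins, not the researchers' actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training set: each compound is a 1024-bit fingerprint,
    # labelled 1 if it inhibited bacterial growth in earlier assays.
    rng = np.random.default_rng(0)
    known_fps = rng.integers(0, 2, size=(500, 1024))
    known_labels = rng.integers(0, 2, size=500)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(known_fps, known_labels)

    # Score an unscreened library (the article mentions ~6,000 compounds)
    # and shortlist the highest-scoring candidates for wet-lab testing.
    library_fps = rng.integers(0, 2, size=(6000, 1024))
    scores = model.predict_proba(library_fps)[:, 1]
    shortlist = np.argsort(scores)[::-1][:240]
    print(f"shortlisted {len(shortlist)} compounds for lab validation")

The point of the sketch is the division of labour: the model does the cheap, fast triage over thousands of candidates, and the expensive lab work is spent only on the shortlist.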

2. Chinese hackers were blamed for a series of intrusions in the U.S. and Kenya. The “Five Eyes” Western intelligence partners and Microsoft warned that a state-sponsored Chinese group had been spying on American critical infrastructure for nearly two years, while Reuters reported that Chinese hackers targeted the Kenyan government apparently to garner information on debt Nairobi owed to Beijing. China denied both claims.
Microsoft said it has found malicious activity by a Chinese state-sponsored hacking group that has stealthily gained access to critical infrastructure organisations in Guam and elsewhere in the US, with the likely aim of disrupting critical communications in the event of a war.
Over the past decade, China has reorganised its hacking operations, turning itself into a sophisticated and mature cyber adversary.

3. Plastics are in our air, food, and water. A reckoning is coming, and smart businesses can see it, writes Paul Polman, former CEO of Unilever and author of Net Positive, in this piece for Fortune magazine.
“You can’t fix a plastics crisis with a regulatory mess. Nor can we build sustainable growth on the ‘take, make, waste’ model of consumption that we have normalised in recent years. An effective and enforceable set of global rules and responsibilities will be vastly better for everyone. Today business leaders have the opportunity to shape these very rules. If we miss it, our plastics problems will get even worse, governments will be scrambling for a solution, and business likely won’t get the chance to shape it.”

4. To be successful, you need to fail 16% of the time. Take a hint from Einstein and Mozart: unplug and make peace with some degree of failure, writes Adam Alter in this piece, excerpted from his new book, Anatomy of a Breakthrough: How to Get Unstuck When It Matters Most.
Einstein and Mozart were massively productive because they understood the value of easing back and chilling out.

Modern theories of learning say that success is impossible without some degree of failure.

Aim for the Goldilocks zone when setting a failure rate: roughly 16 percent (a rough sketch of targeting that rate in practice follows below).
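As a toy illustration of what “targeting a 16 percent failure rate” could mean in practice, here is a minimal Python sketch of an adaptive-difficulty loop. It is an invented illustration, not anything from Alter's book: the difficulty scale, window size and step size are all assumptions.

    import random

    TARGET_FAILURE = 0.16  # the "Goldilocks" failure rate from the excerpt
    WINDOW = 50            # recent attempts used to estimate the failure rate
    STEP = 0.01            # how aggressively difficulty is adjusted (invented)

    def adjust(difficulty, recent):
        """Nudge difficulty so the observed failure rate tracks ~16%."""
        failure_rate = recent.count(False) / len(recent)
        if failure_rate < TARGET_FAILURE:
            return min(1.0, difficulty + STEP)  # too easy: make tasks harder
        if failure_rate > TARGET_FAILURE:
            return max(0.0, difficulty - STEP)  # too hard: ease off
        return difficulty

    # Toy model: the chance of failing an attempt equals the difficulty.
    difficulty, outcomes = 0.5, []
    for _ in range(2000):
        outcomes.append(random.random() > difficulty)  # True = success
        if len(outcomes) >= WINDOW:
            difficulty = adjust(difficulty, outcomes[-WINDOW:])

    print(f"difficulty settled near {difficulty:.2f}")  # hovers around 0.16

Run as written, the loop settles near a difficulty of 0.16: challenges keep getting harder until you fail about one attempt in six, and no harder.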

5. Another major tech player with a stake in AI, Microsoft, asks for AI rules to minimise risks. The question is: why do tech giants want to strangle AI with red tape? The answer, says Schumpeter of The Economist, is that they want to hold back open-source competitors.

One of the joys of writing about business is that rare moment when you realise conventions are shifting in front of you. It brings a shiver down the spine. Vaingloriously, you start scribbling down every detail of your surroundings, as if you are drafting the opening lines of a bestseller. It happened to your columnist recently in San Francisco, sitting in the pristine offices of Anthropic, a darling of the artificial-intelligence (AI) scene. When Jack Clark, one of Anthropic’s co-founders, drew an analogy between the Baruch Plan, a (failed) effort in 1946 to put the world’s atomic weapons under UN control, and the need for global co-ordination to prevent the proliferation of harmful AI, there was that old familiar tingle. When entrepreneurs compare their creations, even tangentially, to nuclear bombs, it feels like a turning point.

Since ChatGPT burst onto the scene late last year there has been no shortage of angst about the existential risks posed by AI. But this is different. Listen to some of the field’s pioneers and they are less worried about a dystopian future when machines outthink humans, and more about the dangers lurking within the stuff they are making now. ChatGPT is an example of “generative” AI, which creates humanlike content based on its analysis of texts, images and sounds on the internet. Sam Altman, CEO of OpenAI, the startup that built it, told a congressional hearing this month that regulatory intervention is critical to manage the risks of the increasingly powerful “large language models” (LLMs) behind the bots.

In the absence of rules, some of his counterparts in San Francisco say they have already set up back channels with government officials in Washington, DC, to discuss the potential harms discovered while examining their chatbots. These include toxic material, such as racism, and dangerous capabilities, like child-grooming or bomb-making. Mustafa Suleyman, co-founder of Inflection AI (and board member of The Economist’s parent company), plans in coming weeks to offer generous bounties to hackers who can discover vulnerabilities in his firm’s digital talking companion, Pi.

Such caution makes this incipient tech boom look different from the past—at least on the surface. As usual, venture capital is rolling in. But unlike the “move fast and break things” approach of yesteryear, many of the startup pitches now are first and foremost about safety. The old Silicon Valley adage about regulation—that it is better to ask for forgiveness than permission—has been jettisoned. Startups such as OpenAI, Anthropic and Inflection are so keen to convey the idea that they won’t sacrifice safety just to make money that they have put in place corporate structures that constrain profit-maximisation.

Another way in which this boom looks different is that the startups building their proprietary LLMs aren’t aiming to overturn the existing big-tech hierarchy. In fact they may help consolidate it. That is because their relationships with the tech giants leading in the race for generative AI are symbiotic. OpenAI is joined at the hip to Microsoft, a big investor that uses the former’s technology to improve its software and search products. Alphabet’s Google has a sizeable stake in Anthropic; on May 23rd the startup announced its latest funding round of $450m, which included more investment from the tech giant. Making their business ties even tighter, the young firms rely on big tech’s cloud-computing platforms to train their models on oceans of data, which enable the chatbots to behave like human interlocutors.

Like the startups, Microsoft and Google are keen to show they take safety seriously—even as they battle each other fiercely in the chatbot race. They, too, argue that new rules are needed and that international co-operation on overseeing LLMs is essential. As Alphabet’s CEO, Sundar Pichai, put it, “AI is too important not to regulate, and too important not to regulate well.”

Such overtures may be perfectly justified by the risks of misinformation, electoral manipulation, terrorism, job disruption and other potential hazards that increasingly powerful AI models may spawn. Yet it is worth bearing in mind that regulation will also bring benefits to the tech giants. That is because it tends to reinforce existing market structures, creating costs that incumbents find easiest to bear, and raising barriers to entry.

This is important. If big tech uses regulation to fortify its position at the commanding heights of generative AI, there is a trade-off. The giants are more likely to deploy the technology to make their existing products better than to replace them altogether. They will seek to protect their core businesses (enterprise software in Microsoft’s case and search in Google’s). Instead of ushering in an era of Schumpeterian creative destruction, it will serve as a reminder that large incumbents currently control the innovation process—what some call “creative accumulation”. The technology may end up being less revolutionary than it could be.

LLaMA on the loose

Such an outcome is not a foregone conclusion. One of the wild cards is open-source AI, which has proliferated since March, when LLaMA, the LLM developed by Meta, leaked online. Already the buzz in Silicon Valley is that open-source developers can build generative-AI models that are almost as good as the existing proprietary ones, at a hundredth of the cost.

Anthropic’s Mr Clark describes open-source AI as a “very troubling concept”. Though it is a good way of speeding up innovation, it is also inherently hard to control, whether in the hands of a hostile state or a 17-year-old ransomware-maker. Such concerns will be thrashed out as the world’s regulatory bodies grapple with generative AI. Microsoft and Google—and, by extension, their startup charges—have much deeper pockets than open-source developers to handle whatever the regulators come up with. They also have more at stake in preserving the stability of the information-technology system that has turned them into titans. For once, the desire for safety and for profits may be aligned.