Arvind's Newsletter

Issue No #743

1. Even as the IMD predicts the monsoon will arrive in Kerala by June 4 (with a model error of ±4 days), the private-sector weather forecaster Skymet predicts that the monsoon will be delayed this year.

Last month, Skymet forecast below-normal monsoon rains for 2023, posing mounting risks to rural incomes, consumption and economic growth. It said that rains in the June-September season would be 94% of the long-period average (LPA) due to the impact of the El Niño weather pattern, which is linked to droughts or poor rainfall in India.

2. Technology is rapidly reshaping agriculture in India, opines an article in the McKinsey Quarterly.

In this article, the authors examine agtech’s potential, how it is already improving outcomes, and what investors are looking for as rural India embraces modern farming. Agtech can be a shot in the arm for India’s farmers, making them more profitable and boosting the contribution of agriculture to India’s economy.

Between 2013 and 2020, the agtech landscape in India grew from less than 50 start-ups to more than 1,000, fueled by increased farmer awareness, rising internet penetration in rural India, and the need for greater efficiency in the agriculture sector. Moreover, India’s regulatory environment is gradually evolving to facilitate the growth of digital technologies in agriculture.

Agtech in India continues to ramp up, from core companies in the value chain using digital technologies like “super apps” to innovations by start-ups, “agrifintechs” and large technology companies.

Fully nurtured, the agtech ecosystem has the potential to raise Indian farmers’ incomes by 25 to 35 percent and add $95 billion to the Indian economy through reduced input costs, enhanced productivity and price realization, cheaper credit, and alternative income sources.

3. An early trial of a universal mRNA flu vaccine has begun. Around 400,000 people die of flu each year worldwide, but existing flu vaccines have to be targeted at specific strains: public health systems must guess months in advance which strains are likely to be prevalent in the coming flu season so that manufacturers can prepare. A universal vaccine would protect against all strains by targeting proteins common to all flu viruses. It might also provide longer-lasting protection, removing the need for regular boosters. The U.S. National Institutes of Health is recruiting around 50 volunteers at Duke University. If this Phase 1 trial shows the vaccine is safe and induces a promising immune response, a larger trial looking at efficacy will follow.

4. Don’t use sugar substitutes for weight loss, the World Health Organization advises. People should reduce the sweetness of their diet altogether, starting early in life.

The global health body said a systematic review of the available evidence suggests the use of non-sugar sweeteners, or NSS, “does not confer any long-term benefit in reducing body fat in adults or children.”

The guidance applies to all people except those with preexisting diabetes, said Francesco Branca, the WHO’s director of nutrition and food safety. Why? Simply because none of the studies in the review included people with diabetes, so an assessment could not be made, he said.

5. The race to bring generative AI to mobile devices, reports Richard Waters of the Financial Times.

The race is on to bring the technology behind ChatGPT to the smartphone in your pocket. And to judge from the surprising speed at which the technology is advancing, the latest moves in artificial intelligence could transform mobile communications and computing far faster than seemed likely just months ago.

As tech companies rush to embed generative AI into their software and services, they face significantly higher computing costs. The concern has weighed in particular on Google, with Wall Street analysts warning that the company’s profit margins could be squeezed if internet search users come to expect AI-generated content in standard search results.

Running generative AI on mobile handsets, rather than through the cloud on servers operated by big tech groups, could answer one of the biggest economic questions raised by the latest tech fad. Google said last week that it had managed to run a version of PaLM 2, its latest large language model, on a Samsung Galaxy handset. Though it did not publicly demonstrate the scaled-down model, called Gecko, the move is the latest sign that a form of AI that has required computing resources only found in a data centre is quickly starting to find its way into many more places.

The shift could make services such as chatbots far cheaper for companies to run and pave the way for more transformative applications using generative AI. “You need to make the AI hybrid — [running in both] the data centre and locally — otherwise it will cost too much money,” Cristiano Amon, chief executive of mobile chip company Qualcomm, told the Financial Times. Tapping into the unused processing power on mobile handsets was the best way to spread the cost, he said.

When the launch of ChatGPT late last year brought generative AI to widespread attention, the prospect of bringing it to handsets seemed distant. Besides the training of the so-called large language models behind such services, the work of inference (running the models to produce results) is also computationally demanding. Handsets lack the memory to hold large models like the one behind ChatGPT, as well as the processing power required to run them.

Generating a response to a query on a device, rather than waiting for a remote data centre to produce a result, could also reduce the latency, or delay, from using an application. When a user’s personal data is used to refine the generative responses, keeping all the processing on a handset could also enhance privacy. More than anything, generative AI could make it easier to carry out common activities on a smartphone, for instance when it comes to things that involve producing text. “You could embed [the AI] in every office application: You get an email, it suggests a response,” said Amon. “You’re going to need the ability to run those things locally as well as on the data centre.”

Some of the smaller models have already demonstrated surprising capabilities. They include LLaMA, an open-source language model released by Meta, which is claimed to have matched many of the features of the largest systems.

LLaMA comes in various sizes, the smallest of which has only 7bn parameters, far fewer than the 175bn of GPT-3, the breakthrough language model OpenAI released in 2020; the number of parameters in GPT-4, released this year, has not been disclosed. A research model based on LLaMA and developed at Stanford University has already been shown running on one of Google’s Pixel 6 handsets.
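To see why parameter count matters so much for on-device use, a rough back-of-the-envelope estimate of weight storage helps. The sketch below is illustrative only: the bytes-per-parameter figures for 16-bit and 4-bit weights are common assumptions rather than details disclosed for any particular model, and it ignores activations and other runtime overhead.

```python
# Back-of-the-envelope estimate of the memory needed just to store model
# weights. Assumptions (illustrative): 2 bytes per parameter for 16-bit
# weights, 0.5 bytes per parameter for 4-bit quantized weights; activations
# and other runtime overhead are ignored.

BYTES_PER_PARAM = {"fp16": 2.0, "int4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate weight storage in gigabytes."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for label, params in [("7bn-parameter model (smallest LLaMA)", 7e9),
                      ("175bn-parameter model (GPT-3)", 175e9)]:
    for precision in ("fp16", "int4"):
        print(f"{label}: ~{weight_memory_gb(params, precision):.1f} GB at {precision}")
```

On these assumptions, a 7bn-parameter model quantized to 4 bits needs roughly 3.5 GB for its weights, which is within reach of a flagship handset's memory, while a 175bn-parameter model would need around 350 GB at 16 bits and still close to 90 GB at 4 bits, far beyond any phone.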

With most of the work on tailoring the models to handsets still at an experimental stage, it was too early to assess whether the efforts would lead to truly useful mobile applications, said Ben Bajarin, an analyst at Creative Strategies. He predicted relatively rudimentary apps, such as voice-controlled photo-editing functions and simple question-answering, from the first wave of mobile models with between 1bn and 10bn parameters.