Arvind's Newsletter

Issue No. 749

1. Taiwanese contract manufacturer Wistron has stopped making iPhones in India and is in the process of selling its assembly unit in Kolar, near Bengaluru, to the Tata Group. The company’s departure comes as Apple’s other contract manufacturers, Foxconn and Pegatron, expand their presence in India.

Wistron was the first of Apple’s three global contract manufacturers to start assembling iPhones in India, in 2017. But its inability to get deeper into Apple’s supply chain (component manufacturing and vendor-managed inventory holding) is one of the key reasons the company is exiting India.

Then there was the challenge of coping with the local work culture. Violence broke out at the newly opened Kolar unit in December 2020 as workers protested against allegedly unpaid wages and arduous hours. That episode cost the company about Rs 430 crore in damages. Apple put Wistron on probation, halting production at the unit until corrective measures were taken; work resumed in February 2021.

While the deal is yet to close, Tata executives are already taking up key positions in the company, the Economic Times reported.

2. Food delivery by drone is just part of daily life in Shenzhen. The Chinese delivery giant Meituan flies drones between skyscrapers to kiosks around the city, reports MIT Technology Review. See the video accompanying the article.

“My iced tea arrived from the sky.

In a buzzy urban area in Shenzhen, China, sandwiched between several skyscrapers, I watched as a yellow-and-black drone descended onto a pickup kiosk by the street. The top of the vending-machine-size kiosk opened up for the drone to land, and a white cardboard box containing my drink was placed inside. When I had made the delivery order on my phone half an hour before, the app noted that it would arrive by drone at 2:03 p.m., and that was exactly when it came. The drone delivery service I was trying out is operated by Meituan, China’s most popular food delivery platform.” Zomato/Swiggy, are you watching?

3. Meta’s new AI models can recognise 4,000 spoken languages and produce speech for more than 1,000 of them, reports Engadget.

Meta has built AI models that can recognise and produce speech for more than 1,000 languages, a tenfold increase on what is currently available. It is making them open source to help developers build new speech applications in those languages, such as messaging services that understand everyone or virtual-reality systems that can be used in any language.

Meta says the new project is a significant step toward preserving languages that are at risk of disappearing. Although there are around 7,000 languages in the world, existing speech recognition models cover only about 100 of them.
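For developers who want to try this, the models were released under Meta’s Massively Multilingual Speech (MMS) project and can be loaded through the Hugging Face transformers library. Here is a minimal transcription sketch, assuming the facebook/mms-1b-all checkpoint and a hypothetical 16 kHz audio file clip.wav; treat it as an illustration of the design, not an official recipe.

    import torch
    import librosa  # for reading and resampling audio
    from transformers import AutoProcessor, Wav2Vec2ForCTC

    # Load the multilingual base checkpoint (assumed name on Hugging Face).
    processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
    model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

    # MMS pairs one shared model with small per-language adapters:
    # point both the tokenizer and the model at English ("eng").
    processor.tokenizer.set_target_lang("eng")
    model.load_adapter("eng")

    # "clip.wav" is a placeholder; resample to the 16 kHz the model expects.
    speech, _ = librosa.load("clip.wav", sr=16_000, mono=True)

    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    ids = torch.argmax(logits, dim=-1)[0]
    print(processor.decode(ids))

Swapping "eng" for another ISO 639-3 code (say "hin" for Hindi) loads a different lightweight adapter onto the same base model, which is how a single checkpoint can cover over a thousand languages.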

4. People in the U.S. Think They Are Better Than They Actually Are. People in Asia Don’t. Western individualism may promote a “better than you actually are” mindset, reports Scientific American.

“How competent are you, compared with your colleagues? When psychologists approach teams of coworkers with variations of this question, an interesting pattern emerges. If people have a truly realistic perspective of their abilities, then their self-assessments should generally fall around the middle. Instead, psychologists have repeatedly found that people’s self-assessments are inflated. In fact, superstars and under-performers alike tend to think they are better than they truly are.

This effect is one example of a positive illusion: a cognitive bias that makes you feel more competent, more blessed, more fortunate and better than you are. Positive illusions seem intuitive and reasonable to many people. Some scholars argue that these illusions are fundamental to our species’ survival. To get by in life, they reason, you must remain optimistic, work hard, succeed, live long and leave offspring behind.

Of course, some people don’t experience positive illusions and have a more realistic self-assessment. Unfortunately, such self-appraisals could make them feel more inadequate when comparing themselves with many others who have a very positive self-assessment. These comparisons may be an important cause of imposter syndrome—the suspicion that one is not deserving of one’s achievements. In other words, imposter syndrome may be the dark side of the societal norm toward positive selves.

But there is an important caveat to this discussion: the available evidence is based almost exclusively on a small fraction of humanity called Westerners. If positive illusions were truly essential to our species, we would expect them to be universal. But my work—and that of other research teams—suggests otherwise.” Read on.

5. Regulate us, say OpenAI and Google.
An international agency is needed to regulate super-intelligent artificial intelligence systems, the founders of OpenAI argued. They said AI systems could be better than humans at most intellectual tasks within 10 years. That could “lead to a much better world than we see today,” but carries risks. They proposed a regulatory body like the International Atomic Energy Agency, with authority to carry out audits and inspections on AI developers. In the Financial Times, Google’s CEO also said that AI is “too important not to regulate well.” Big companies often back regulatory hurdles that could keep smaller competitors out of the market, but both OpenAI and Google have consistently warned of the risks of AI.

Sundar Pichai, Google CEO, in the Financial Times: Building AI responsibly is the only race that really matters. Long read.

“This year, generative AI has captured the world’s imagination. Already, millions of people are using it to boost creativity and improve productivity. Meanwhile, more and more start-ups and organisations are bringing AI-powered products and technologies to market faster than ever.

AI is the most profound technology humanity is working on today; it will touch every industry and aspect of life. Given these high stakes, the more people there are working to advance the science of AI, the better in terms of expanding opportunities for communities everywhere.

While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that. At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right. 

We’re approaching this in three ways. First, by boldly pursuing innovations to make AI more helpful to everyone. We’re continuing to use AI to significantly improve our products — from Google Search and Gmail to Android and Maps. These advances mean that drivers across Europe can now find more fuel-efficient routes; tens of thousands of Ukrainian refugees are helped to communicate in their new homes; flood forecasting tools are able to predict floods further in advance. Google DeepMind’s work on AlphaFold, in collaboration with the European Molecular Biology Laboratory, resulted in a groundbreaking understanding of over 200mn catalogued proteins known to science, opening up new healthcare possibilities.

Our focus is also on enabling others outside of our company to innovate with AI, whether through our cloud offerings and APIs, or with new initiatives like the Google for Startups Growth program, which supports European entrepreneurs using AI to benefit people’s health and wellbeing. We’re launching a social innovation fund on AI to help social enterprises solve some of Europe’s most pressing challenges.

Second, we are making sure we develop and deploy the technology responsibly, reflecting our deep commitment to earning the trust of our users. That’s why we published AI principles in 2018, rooted in a belief that AI should be developed to benefit society while avoiding harmful applications.

We have many examples of putting those principles into practice, such as building in guardrails to limit misuse of our Universal Translator. This experimental AI video dubbing service helps experts translate a speaker’s voice and match their lip movements. It holds enormous potential for increasing learning comprehension but we know the risks it could pose in the hands of bad actors and so have made it accessible to authorised partners only. As AI evolves, so does our approach: this month we announced we’ll provide ways to identify when we’ve used it to generate content in our services.

Finally, fulfilling the potential of AI is not something one company can do alone. In 2020, I shared my view that AI needs to be regulated in a way that balances innovation and potential harms. With the technology now at an inflection point, and as I return to Europe this week, I still believe AI is too important not to regulate, and too important not to regulate well.

Developing policy frameworks that anticipate potential harms and unlock benefits will require deep discussions between governments, industry experts, publishers, academia and civil society. Legislators may not need to start from scratch: existing regulations provide useful frameworks to manage the potential risks of new technologies. But continued investment in research and development for responsible AI will be important — as will ensuring AI is applied safely, especially where regulations are still evolving.

Increased international co-operation will be key. The US and Europe are strategic allies and partners. It’s important that the two work together to create robust, pro-innovation frameworks for the emerging technology, based on shared values and goals. We’ll continue to work with experts, social scientists and entrepreneurs who are creating standards for responsible AI development on both sides of the Atlantic.

AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more. Yet we are still in the early days, and there’s a lot of work ahead. We look forward to doing that work with others, and together building AI safely and responsibly so that everyone can benefit.”