Dr. Mohsen Ali and Dr. Izza Aftab have won the Google Research Scholar Award 2021, according to an announcement on the Google AI Blog. Dr. Ali is an assistant professor of computer science at the Information Technology University (ITU) in Lahore, Pakistan, while Dr. Aftab is an assistant professor of economics at the same university.
|Google Research Award Recipients Professors Izza Aftab & Mohsen Ali. Source: Izza Aftab|
The Google award recognizes a joint paper by the two Pakistani professors entitled "Is Economics From Afar Domain Generalizable?". The research addresses the challenge of assessing economic indicators through machine learning. The paper shows how such indicators for a region can be derived from satellite imagery and geo-spatial datasets as an alternative to the on-site collection of administrative data, which is done only annually or biennially. This data helps design policy interventions and paints a geo-spatial picture of economic well-being in developing countries like Pakistan. The findings from this project can aid both governments and businesses.
|"Economics From Afar". Source: Dr. Mohsen Ali|
Dr. Izza Aftab's interests include the theory and modeling of firm-level innovation in developing countries, the economics of climate change, and the role of big data in informing public policy. Dr. Mohsen Ali's research interests include solving theoretical and practical problems in computer vision, artificial intelligence (AI) and machine learning, specifically problems related to image co-segmentation, remote sensing, medical imaging and affective computing, according to the ITU website.
South Asia Investor Review
Pakistani-American is Top US Expert in Quantum Computing
AI Research at NED University Funded By Silicon Valley NEDians
Pakistan Hi-Tech Exports Exceed A Billion US Dollars in 2018
Pakistan Becomes CERN Member
Pakistani Scientists at CERN
Rising College Enrollment in Pakistan
Pakistani Universities Listed Among Asia's Top 500 Jump From 16 to 23 in One Year
Genomics and Biotech Research in Pakistan
Human Capital Growth in Pakistan
Educational Attainment in Pakistan
Pakistan Human Development in Musharraf Years
Pakistani students win laurels in international competition
The success of Pakistani students who study in schools without walls in the global Huawei competition shows that there is no alternative to dedication and hard work, deputy chief executive officer of Huawei Pakistan Ahmed Bilal Masud said on Tuesday.
He was speaking at a ceremony to honour Sateesh Kumar, Bhagchand Meghwar and Iqra Fatima, winners of the Sixth Huawei ICT Competition 2021-2022.
They were able to beat competitors from different parts of the world at the Global Final of the event held in Shenzhen, China.
The participants were informed that Mr Kumar belonged to a remote village of Tharparkar that is not even listed on maps, while Mr Meghwar hails from a village in Dadu district.
“This proves that now, nobody in Pakistan can say that they lack resources, internet at homes or there was not enough support to make a name,” Mr Masud said, adding that, “these gentlemen did not even have walls in the schools they went to for initial learning.”
Similarly, he said, the third member of the winning team was a woman. "When she can fight the odds, why not others?" he added.
Ms Fatima belongs to Bahawalpur and studied at Bahawalpur University, while the other two members studied at Mehran University, Jamshoro. Huawei announces the competition every year, starting at the local level, to encourage students and fresh graduates to excel in information technology (IT) services.
Out of around 12,000 applicants in Pakistan in 2021, six were selected, and Huawei managers formed two Pakistani teams. Sateesh Kumar led team 1, which continued its winning streak to beat competitors in the global final. The competition attracted 150,000 hopeful students from more than 2,000 universities in 85 countries and regions around the world.
The first prize in the competition was $20,000 for the winning team, along with mobile phones for each participant. Public Relations Director Wu Han said the success of the 2021 competition showed the huge growth potential of Pakistani youth.
What is ChatGPT? The AI chatbot talked up as a potential Google killer
After all, the AI chatbot seems to be slaying a great deal of search engine responses.
ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and in just five days hit a million users. It’s being used so much that its servers have reached capacity several times.
OpenAI, the company that developed it, is already being discussed as a potential Google slayer. Why look up something on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side by side.)
But what if we never know the secret sauce behind ChatGPT’s capabilities?
The chatbot takes advantage of a number of technical advances published in the open scientific literature in the past couple of decades. But any innovations unique to it are secret. OpenAI could well be trying to build a technical and business moat to keep others out.
What it can (and can't) do
ChatGPT is very capable. Want a haiku on chatbots? Sure.
How about a joke about chatbots? No problem.
ChatGPT can do many other tricks. It can write computer code to a user’s specifications, draft business letters or rental contracts, compose homework essays and even pass university exams.
Just as important is what ChatGPT can’t do. For instance, it struggles to distinguish between truth and falsehood. It is also often a persuasive liar.
ChatGPT is a bit like autocomplete on your phone. Your phone is trained on a dictionary of words so it completes words. ChatGPT is trained on pretty much all of the web, and can therefore complete whole sentences – or even whole paragraphs.
However, it doesn’t understand what it’s saying, just what words are most likely to come next.
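The autocomplete analogy can be made concrete. Below is a toy next-word predictor that counts which word follows which in a tiny corpus and always picks the most frequent successor. The corpus and scale here are invented for illustration; a large language model does something far more sophisticated over billions of documents, but the core idea — predicting the likeliest next token — is the same.

```python
from collections import Counter, defaultdict

# A toy bigram "autocomplete": count which word follows which in a
# training corpus, then predict the most frequent successor.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" follows "the" most often here
print(predict_next("sat"))   # "on"
```

Notice the model has no notion of truth: it only knows which word is statistically likely to come next, which is exactly why fluent output and factual accuracy are different things.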
Open in name only
In the past, advances in artificial intelligence (AI) have been accompanied by peer-reviewed literature.
In 2018, for example, when the Google Brain team developed the BERT neural network on which most natural language processing systems are now based (and we suspect ChatGPT is too), the methods were published in peer-reviewed scientific papers, and the code was open-sourced.
And in 2021, DeepMind’s AlphaFold 2, a protein-folding software, was Science’s Breakthrough of the Year. The software and its results were open-sourced so scientists everywhere could use them to advance biology and medicine.
Following the release of ChatGPT, we have only a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-sourced.
To understand why ChatGPT could be kept secret, you have to understand a little about the company behind it.
OpenAI is perhaps one of the oddest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop "friendly" AI in a way that "benefits humanity as a whole". Elon Musk, Peter Thiel and other leading tech figures pledged US$1 billion towards its goals.
Their thinking was that we couldn't trust for-profit companies to develop increasingly capable AI aligned with humanity's prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way.
In 2019, OpenAI transitioned into a capped for-profit company (with investors limited to a maximum return of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale and compete with the tech giants.
It seems money got in the way of OpenAI’s initial plans for openness.
Profiting from users
According to its blog, OpenAI initially used reinforcement learning in ChatGPT to downrank fake and/or problematic answers using a costly hand-constructed training set. On top of this, OpenAI now appears to be using feedback from users to filter out the fake answers ChatGPT hallucinates.
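The downranking idea can be sketched in miniature. In the feedback-based approach described above, human labelers compare pairs of candidate answers, and those comparisons are turned into scores used to rank or filter responses. The sketch below uses an invented win/loss tally in place of the real system's learned neural reward model; all answer names and data are hypothetical.

```python
# Toy preference-based ranking. Each tuple records a human comparison:
# (preferred answer, rejected answer).
comparisons = [
    ("answer_a", "answer_b"),
    ("answer_a", "answer_c"),
    ("answer_b", "answer_c"),
]

# A simple tally: a win bumps the preferred answer up, the rejected one down.
scores = {}
for preferred, rejected in comparisons:
    scores[preferred] = scores.get(preferred, 0) + 1
    scores[rejected] = scores.get(rejected, 0) - 1

# Rank best-first; low-scoring (likely fake or problematic) answers
# can then be downranked or filtered out.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['answer_a', 'answer_b', 'answer_c']
```

The expensive part in practice is not the ranking arithmetic but collecting enough high-quality human comparisons — which is why shifting that labor onto millions of free users is such an attractive move for OpenAI.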
How the algorithm tipped the balance in Ukraine
Vast data battlefield
The “kill chain” that I saw demonstrated in Kyiv is replicated on a vast scale by Ukraine’s NATO partners from a command post outside the country. The system is built around the same software platform developed by Palantir that I saw in Kyiv, which can allow the United States and its allies to share information from diverse sources — ranging from commercial satellite imagery to the West’s most secret intelligence tools.
This is algorithmic warfare, as Karp says. Using a digital model of the battlefield, commanders can penetrate the notorious “fog of war.” By applying artificial intelligence to analyze sensor data, NATO advisers outside Ukraine can quickly answer the essential questions of combat: Where are allied forces? Where is the enemy? Which weapons will be most effective against enemy positions? They can then deliver precise enemy location information to Ukrainian commanders in the field. And after action, they can assess whether their intelligence was accurate and update the system.
Data powers this new engine of war — and the system is constantly updating. With each kinetic strike, the battle damage assessments are fed back into the digital network to strengthen the predictive models. It’s not an automated battlefield, and it still has layers and stovepipes. The system I saw in Kyiv uses a limited array of sensors and AI tools, some developed by Ukraine, partly because of classification limits. The bigger, outside system can process highly classified data securely, with cyber protections and restricted access, then feed enemy location data to Ukraine for action.
To envision how this works in practice, think about Ukraine's recent success recapturing Kherson, on the Black Sea coast. The Ukrainians had precise intelligence about where the Russians were moving and the ability to strike with accurate long-range fire. This was possible because they had intelligence about the enemy's location, processed by NATO from outside the country and then sent to commanders on the ground. Armed with that information, the Ukrainians could take the offensive — moving, communicating and adjusting quickly to Russian defensive maneuvers and counterattacks.
And when Ukrainian forces hit Russian command nodes or supply depots, it’s a near certainty that they have received enemy location data this way. Mykhailo Fedorov, Ukraine’s minister of digital transformation, told me that this electronic kill chain was “especially useful during the liberation of Kherson, Izium, Kharkiv and Kyiv regions.”
What makes this system truly revolutionary is that it aggregates data from commercial vendors. Using a Palantir tool called MetaConstellation, Ukraine and its allies can see what commercial data is currently available about a given battle space. The available data includes a surprisingly wide array, from traditional optical pictures to synthetic aperture radar that can see through clouds, to thermal images that can detect artillery or missile fire.
To check out the range of available data, just look online. Companies selling optical and synthetic aperture radar imagery include Maxar, Airbus, ICEYE and Capella. The National Oceanic and Atmospheric Administration sells simple thermal imaging meant to detect fires, but it can also register artillery explosions.
In our Kherson example, Palantir assesses that roughly 40 commercial satellites will pass over the area in a 24-hour period. Palantir normally uses fewer than a dozen commercial satellite vendors, but it can expand that range to draw imagery from a total of 306 commercial satellites that can focus to 3.3 meters. Soldiers in battle can use handheld tablets to request more coverage if they need it. According to a British official, Western military and intelligence services work closely with Ukrainians on the ground to facilitate this sharing of information.
The ChatGPT King Isn’t Worried, but He Knows You Might Be
By Cade Metz
Sam Altman sees the pros and cons of totally changing the world as we know it. And if he does make human intelligence useless, he has a plan to fix it.
I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his three-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.
Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or A.G.I., a machine that could do anything the human brain could do.
Later, as Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”
He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
He told me that it would be a “very slow takeoff.”
When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
If he’s wrong, he thinks he can make it up to humanity.
He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.
His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
But as he once told me: “I feel like the A.G.I. can help with that.”