As early as the 1950s, the field’s founding fathers, Marvin Minsky and John McCarthy, described artificial intelligence as any task performed by a machine that would previously have been considered to require human intelligence.

This is obviously a very broad definition. So sometimes we see debates about whether something is really AI.

Modern definitions of what it means to create intelligence are more specific. François Chollet, an AI researcher at Google and creator of the Keras machine-learning software library, says intelligence is tied to a system’s ability to adapt and improvise in a new environment, to generalise its knowledge, and to apply it to unfamiliar scenarios.

“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for,” he said.

“Intelligence is not skill itself; it’s not what you can do. It’s how well and how efficiently you can learn new things.” By this definition, modern AI-powered systems fall under the heading of “narrow AI”: they can only generalise their training within a limited set of tasks, such as speech recognition or computer vision.

AI systems typically exhibit at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, movement, manipulation, and to a lesser extent, social intelligence and creativity.


What are the uses for AI?
AI is ubiquitous today: it is used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, and to detect credit card fraud.

What are the different types of AI?
At a very high level, artificial intelligence can be split into two broad types:

Narrow AI

Narrow AI is what we see in the computers around us today: intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems of self-driving cars, and in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to carry out defined tasks, which is why they are called narrow AI.
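The core idea here, learning a task from labelled examples rather than from hand-written rules, can be illustrated with the oldest neural-network model of all, the perceptron. The sketch below is purely illustrative and not drawn from any of the systems named above: it learns the logical OR function from four labelled examples by nudging its weights whenever it makes a mistake.

```python
# A minimal perceptron: the simplest neural network, learning the OR
# function from labelled examples instead of explicit programming.
# Illustrative sketch only; real narrow-AI systems are vastly larger.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]  # one weight per input
bias = 0.0

for _ in range(10):  # a few passes over the data suffice here
    for (x1, x2), target in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - prediction
        # Nudge the weights towards the correct answer on a mistake.
        weights[0] += error * x1
        weights[1] += error * x2
        bias += error

print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
       for (x1, x2), _ in examples])  # → [0, 1, 1, 1], matching OR
```

Nothing in the code states what OR "means"; the behaviour is induced entirely from the examples, which is the sense in which such systems are taught rather than programmed.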

General AI

General AI is very different: it is the kind of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out very different tasks, anything from haircutting to building spreadsheets, or of reasoning about a wide variety of topics based on accumulated experience.

This is the kind of AI more commonly seen in movies, such as HAL in 2001: A Space Odyssey or Skynet in The Terminator, but it doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

What can Narrow AI do?
Narrow AI has a vast number of emerging applications:
• Interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines.
• Organising personal and business calendars.
• Responding to simple customer-service queries.
• Coordinating with other intelligent systems to carry out tasks such as booking a hotel at a suitable time and location.
• Helping radiologists to spot potential tumours in X-ray images.
• Flagging inappropriate content online, and detecting wear and tear in elevators using data gathered from IoT devices.
• Generating 3D models of the world from satellite imagery… the list goes on.
New applications of these learning systems are emerging all the time. Graphics-card designer Nvidia recently revealed an AI-based system, Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. Rather than transmitting a full video stream over the internet, the system animates a small number of static images of the caller in a way that reproduces the caller’s facial expressions and movements in real time, cutting the bandwidth needed for such calls by a factor of 10.


There is a lot of untapped potential in these systems, but sometimes ambitions for the technology outstrip reality. A case in point is self-driving cars, which are underpinned by AI-powered systems such as computer vision. Electric-car company Tesla is lagging some way behind CEO Elon Musk’s original timeline for upgrading its Autopilot system from more limited driver assistance to “full self-driving”, with the company enrolling a group of experienced drivers in a beta-testing programme for the feature.



What can General AI do?

A 2012/13 survey of four groups of experts, conducted by AI researcher Vincent C. Müller and philosopher Nick Bostrom, found a 50% chance that artificial general intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The survey went further, predicting that so-called “superintelligence” – defined by Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” – was expected some 30 years after the achievement of AGI.
However, more recent assessments by AI experts are more cautious. Pioneers of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the scepticism of the field’s leading lights, and the very different nature of modern narrow AI systems from AGI, there is perhaps little basis for fears that artificial general intelligence will disrupt society in the near future.
Indeed, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and that AGI is still centuries away.

What are recent landmarks in the development of AI?
While modern narrow AI may be limited to performing specific tasks, within their specialisms these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to compile a definitive list, but some highlights include the following.
In 2011, IBM’s Watson computer system won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. The machine drew on analytics applied to huge repositories of data, which it processed to answer human-posed questions, often in a fraction of a second. Then, in 2012, came a breakthrough that demonstrated AI’s potential to tackle a host of new tasks: the AlexNet system decisively won the ImageNet Large Scale Visual Recognition Challenge. AlexNet’s accuracy was such that it halved the error rate of rival systems in the image-recognition contest.

AlexNet’s performance demonstrated the power of learning systems based on neural networks, a machine-learning model that had existed for decades but was finally realising its potential thanks to refinements in architecture and leaps in parallel-processing power made possible by Moore’s Law. The prowess of machine-learning systems at computer vision also hit the headlines that year, when Google trained a system to recognise an internet favourite: pictures of cats.


The next demonstration of the efficacy of machine-learning systems to catch the public’s attention came in 2016, when Google DeepMind’s AlphaGo beat a human grandmaster at Go, an ancient Chinese game whose complexity had stumped computers for decades. Go has about 200 possible moves per turn, compared with about 20 in chess. Over the course of a game there are so many possible moves that searching through each of them in advance to identify the best play is too computationally expensive. Instead, AlphaGo was trained how to play by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.
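To see why exhaustive search is hopeless in Go, it helps to compare how quickly the two game trees grow. The short sketch below uses the rough branching factors quoted above (about 20 moves per turn in chess, about 200 in Go); these are illustrative averages, not exact rules-level counts.

```python
# Rough game-tree growth implied by the approximate branching factors
# above: ~20 legal moves per turn in chess versus ~200 in Go.

def sequences(branching_factor, depth):
    """Number of distinct move sequences of the given depth."""
    return branching_factor ** depth

for depth in (2, 4, 6):
    chess, go = sequences(20, depth), sequences(200, depth)
    print(f"depth {depth}: chess ~{chess:,} sequences, "
          f"Go ~{go:,} ({go // chess:,}x more)")
```

Every extra pair of moves widens Go’s lead by another factor of 100, which is why a learned evaluation of positions, rather than brute-force lookahead, was needed to crack the game.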

These deep-learning networks can take a long time to train, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model to achieve the best result.

More recently, however, Google refined the training process with AlphaGo Zero, a system that played “completely random” games against itself and then learned from them.
Google DeepMind CEO Demis Hassabis has also unveiled a new version of AlphaGo Zero that has mastered the games of chess and shogi.

And AI continues to sprint past new milestones: in 2017, a system trained by OpenAI defeated the world’s top player in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goals more effectively, while Facebook trained agents to negotiate and even lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3, or GPT-3 for short, is a neural network trained on billions of English-language articles available on the open web.

Soon after OpenAI released the model for testing, the internet was abuzz with GPT-3’s ability to generate articles on almost any topic fed to it, articles that at first glance were often hard to tell apart from those written by a human. Similarly impressive results followed in other areas, with the model able to convincingly answer questions on a broad range of topics and even pass as a novice JavaScript coder.

However, while many of the articles GPT-3 generated had a veneer of truthfulness, further testing showed that the sentences it produced often didn’t hold water, offering up superficially plausible but confused statements.

Nevertheless, there is considerable interest in using the model’s natural-language understanding as the basis for future services. It is available to select developers to build into their software via OpenAI’s beta API, and it will also be incorporated into future services available via Microsoft’s Azure cloud platform.

Perhaps the most striking example of AI’s potential came late in 2020, when Google’s attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry. The system’s ability to look at a protein’s building blocks, known as amino acids, and derive that protein’s 3D structure could profoundly affect the rate at which diseases are understood and medicines are developed. In the Critical Assessment of protein Structure Prediction (CASP) contest, AlphaFold 2 determined the 3D structure of proteins with an accuracy rivalling crystallography, the gold standard for convincingly modelling proteins.