6.1 Super Intelligence: Computers Are Taking Over—Realistic Scenario or Science Fiction?


252    

P. Gentsch



We humans act in an uncontrollable environment. Through constant interaction with our environment, we learn more and more, mostly without even noticing it.

To do this, we firstly have to be able to perceive our environment. Step by

step, we are getting to know the meaning of this perception. We get to know

our mother’s voice when we are still in her womb, for example, yet the significance of this person only becomes clear step by step.

We therefore initially classify an object. We effortlessly test out our environment. By dropping toys, we get to know gravity. We learn that the hot

food cools down all by itself if we wait long enough. This means that as

early as at the age of two we have a good intuition of physical correlations in

our world and how they interact with us. We also classify increasingly more objects and assign different properties to them. This is how our common

sense is developed and we are able in a certain way to predict situations such

as “if I drop the glass, it will break”. This ability accounts for a large part of

our intelligence.

With further development, we can abstract this classification of objects.

The abstraction makes it possible to compare different objects or even situations that objectively have nothing in common. By doing this, we can transfer strategies that we have successfully learned in a situation to a different

situation. Our ability to transfer is a further key pillar of our intelligence.

How much sense it makes, however, to use our own brain to derive insights about the way our brain works from research data, and how precisely such research data can be represented at all, is another, very interesting topic of discussion.

How is our intelligence to be manifested in machines?

There is software already available that is far superior to humans in some

areas. In 1996, IBM’s Deep Blue defeated the reigning world champion in

chess for the first time. 20 years later, in 2016, AlphaGo won at the more complex board game Go, and these are only the most famous examples.

The rules of the games were implemented in both systems, i.e. coded into the system by programmers, and the systems were then trained for many years. The algorithms both systems use analyse the situation of the game and decide in favour of the strategy branch with the highest probability of success. Machines build up this strategy tree bit by bit during training. Similar to a human, one would think, yet still simply a machine.
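This branch selection can be sketched in a few lines of Python. The moves, the tree and the win probabilities below are invented purely for illustration and are not the actual Deep Blue or AlphaGo machinery; a real engine would also minimise over the opponent's replies instead of maximising at every level.

```python
# Minimal sketch: choose the move whose strategy branch carries the
# highest estimated probability of success (toy values, built up
# "bit by bit during training" in a real system).

def best_move(tree):
    """tree maps each move to a win probability (a leaf) or a subtree."""
    def value(node):
        if isinstance(node, (int, float)):  # leaf: estimated win probability
            return node
        return max(value(child) for child in node.values())
    return max(tree, key=lambda move: value(tree[move]))

strategy_tree = {
    "attack":  {"sacrifice": 0.35, "pin": 0.62},
    "defend":  {"castle": 0.55, "trade": 0.40},
    "develop": 0.50,
}
print(best_move(strategy_tree))  # -> attack (via the 0.62 "pin" line)
```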

Yet, the great difference is that the same systems would be a complete and

utter failure at “Ludo”. Even the first move would be impossible, as the rules

of the game would first have to be implemented by programmers. And even

if both systems were taught the rules of the game, they would not be able to






transfer the strategies to the new game. And it would also not be possible for

them to differentiate between short-term tactics and long-term strategies. For games like chess or Go, that does not really matter. But it matters all the more if we want to release the systems into the rough real world.

Expert systems nowadays are thus already superior to humans in very narrow areas, but general intelligence with abstraction processes and transfer skills for what has already been learned, as a human-level AI system would demand, has not been achieved in the slightest.

Almost all of today’s commercial successes of AI systems can be traced back to supervised learning algorithms. To this end, the systems are shown huge amounts of already classified data. On the basis of this evidence, the system then automatically adapts the connection weights between the individual points of representation of the problem (the formal neurones). This way, individual sub-aspects of the solution are emphasised more than others. Finally, the system puts the solution together and, ideally, translates the solutions from representational coding into a form that can be analysed by humans.
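A minimal sketch of this weight adaptation, assuming a single formal neurone (a perceptron) and a hand-made, already classified toy data set; real systems adapt millions of such connection weights at once.

```python
# Minimal supervised-learning sketch: nudge the connection weights
# towards already classified (input, label) examples.

def train_neuron(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred                 # mismatch with the sample solution
            w[0] += lr * err * x1              # strengthen or weaken the
            w[1] += lr * err * x2              # individual connection weights
            b += lr * err
    return w, b

# Already classified toy data: label is 1 exactly when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
print(all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
          for (x1, x2), y in data))           # all four examples learned
```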

The comparison with sample solutions helps the system to evaluate its

own result. By way of penalties or rewards, the system sees whether the

learning process brings about the desired result or not. Similar to a pupil, the system is given a penalty or a reward: the principle of reinforcement learning.
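The penalty-and-reward principle can likewise be caricatured in a few lines. The environment below (a corridor of five cells with a reward at the right end), the learning rate and the number of episodes are all invented for this sketch:

```python
import random

# Minimal reinforcement-learning sketch (tabular Q-learning): the agent
# only ever sees a reward signal, never the rules of the "game".

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        while s != 4:                                   # cell 4 is the goal
            if rng.random() < eps:                      # sometimes explore
                a = rng.choice((-1, +1))
            else:                                       # otherwise exploit
                a = max((-1, +1), key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), 4)
            r = 1.0 if s2 == 4 else 0.0                 # reward only at the goal
            best_next = max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, moving right is valued higher than moving left everywhere.
print(all(q[(s, +1)] > q[(s, -1)] for s in range(4)))
```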

The next step in emancipating the systems towards human-level AI is unsupervised learning algorithms that work in the use case. This is about unsupervised learning, as with children who explore their surroundings and learn to interact with them. Here, despite recent small breakthroughs, research is still at square one.

As of late, there has been promising progress in the field of unsupervised learning. In 2017, the research group around Anh Nguyen from the

University of Wyoming succeeded in producing synthetically generated high

resolution images of volcanoes, buildings and animals. Yet, even during the training of these “Plug & Play Generative Networks”, much already classified training data was used. To this day, no researcher has succeeded in anything similar from mere raw data.

The problems researchers face today are as multifaceted as the field itself.

There is thus no representation known to date that enables machines to generalise their results sufficiently to apply what has been learned outside the training context. Until now, networks abstract only very superficially. For example, a specially trained network recognises animals in

an image due to the high vegetation in the background—irrespective of



254    

P. Gentsch



whether there actually is an animal in the image or not. That logically leads

to many false positive results. Concept learning, in which we humans are

true masters from birth, is a huge problem for machines.
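The vegetation example can be made concrete with a deliberately naive stand-in classifier; the feature values are invented, and a real network learns such a shortcut implicitly from biased training data rather than as an explicit rule:

```python
# Toy "shortcut" classifier: predicts "animal present" whenever the
# background vegetation is high, mimicking a spurious correlation.

def shortcut_classifier(image):
    return image["vegetation"] > 0.5

test_images = [
    {"vegetation": 0.9, "animal": True},    # meadow with deer: correct
    {"vegetation": 0.8, "animal": False},   # empty meadow: false positive
    {"vegetation": 0.1, "animal": True},    # animal indoors: missed
]
false_positives = sum(1 for img in test_images
                      if shortcut_classifier(img) and not img["animal"])
print(false_positives)  # the empty meadow is wrongly reported as an animal
```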

To date, there are no known efficient communication symbols for the

human-computer interface. Indeed, the AI community has been able to celebrate remarkable accomplishments of late in the field of machine speech recognition and translation, which everybody uses, for example, in YouTube subtitles or with the Google Translator, yet machines do not understand the spoken word as we do. Direct verbal instruction of such systems has thus hardly been possible to date. The correlation between facts, figures,

targets, strategies and communication must continue to be implemented

system- and problem-specifically. And the way things are looking, that will stay that way for quite a while yet. Even the summarising and presentation

of results in formats comprehensible for humans is a great problem for many

systems and has to be developed for each system individually.

Learning algorithms are extremely resource-intensive. An extreme amount of computing power and time is needed to train a system adequately, as the entire network has to be re-simulated for every symbol, that is, virtually for each new fact. And to date, there has been no machine episodic memory or long-term memory, meaning that the computer forgets everything it has learned hitherto when a new learning process is completed.

“Learning to learn” is certainly the decisive mantra for the next level of maturity. Today, people are still trying to define the best learning algorithm for the system. In the future, AI systems will find the best way to learn for themselves. On the basis of a kind of meta learning process, we delegate, as it were, the determination of the ideal learning algorithm to the system itself. This kind of autonomous AI learning goes far beyond the learning paradigms of today’s machine learning. The “general problem solver” could in this way also universally beat the world champion in chess, Jeopardy, Go and “Ludo” by always learning for itself the best solution algorithm.
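Such a meta learning process can be caricatured as follows: instead of a human fixing the learning algorithm, the system tries several candidate learners and keeps whichever performs best on held-out data. The two candidate learners and the data are deliberately trivial stand-ins:

```python
# Minimal "learning to learn" sketch: the system selects its own
# learning algorithm by validation performance.

def learner_mean(train):
    m = sum(y for _, y in train) / len(train)
    return lambda x: m                       # ignores the input entirely

def learner_linear(train):
    a = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    return lambda x, a=a: a * x              # one-parameter least squares

def meta_learn(candidates, train, valid):
    def error(factory):
        model = factory(train)
        return sum((model(x) - y) ** 2 for x, y in valid)
    return min(candidates, key=error)        # keep the best-performing learner

train = [(1, 2.1), (2, 3.9), (3, 6.0)]       # data roughly follows y = 2x
valid = [(4, 8.1), (5, 9.8)]
best = meta_learn([learner_mean, learner_linear], train, valid)
print(best.__name__)  # -> learner_linear
```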

Another problem is reasoning in line with common sense. A computer

only knows facts that are explicitly specified and accessible. For us humans,

implicit knowledge is a matter of course. When we compose a legal text, we

know that colloquial expressions are out of place in it. This knowledge, and the framework conditions resulting from it for the further processing of the information, has so far had to be explicitly and problem-specifically implemented in the machines.
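The difference can be illustrated with a toy rule base: the register rule that humans apply implicitly must be spelled out for the machine fact by fact. The word list and the rules below are hypothetical:

```python
# Minimal sketch: a machine only "knows" what is explicitly encoded.

COLLOQUIAL = {"gonna", "stuff", "kinda"}            # hypothetical word list

RULES = {
    "legal":  lambda word: word not in COLLOQUIAL,  # hand-coded register rule
    "casual": lambda word: True,                    # anything goes
}

def allowed(text_type, word):
    if text_type not in RULES:                      # no explicit rule, no knowledge
        raise KeyError(f"no explicit rule for {text_type!r} texts")
    return RULES[text_type](word)

print(allowed("legal", "gonna"))    # -> False, only because it was hand-coded
print(allowed("casual", "gonna"))   # -> True
```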

AI is also a firm part of current research in robotics. Almost all problems are multimodal and cannot simply be transferred into one target function for machines. Facebook and DeepMind are indeed working on a physics-based virtual environment to train such systems. But there is no system to date






that is comprehensive enough to implement the demands on multi-tasking

that our environment makes of us.

For example, self-driving cars do not recognise people as intelligent beings with their own home range and repertoire of strategies, but merely as obstacles. The interaction with the environment is inadequate to this day. The

defensive driving style resulting from that is still far from the optimum of

possibilities.

In summary, it can be said that this super intelligence will come due to

the rapid development and technological scaling. The question as to “when”

is certainly difficult to answer. Each advance uncovers new questions and

obstacles. A precise answer to this question according to the current state

of research is not yet possible. An incredible amount has already happened.

Some things are already possible that were only conceivable in science fiction ten years ago. But there is still an incredible amount to do. And on the way

there, increasingly more progress that we can already use for ourselves will

be made. There is no field where the correlation between basic research and

science and industrial application is as close as in AI. If we were to take a look behind the scenes of the backend, some of us would be amazed at how significantly

our technological landscape is already affected by AI and how much of that

we already use.

If we compare various studies and expert statements, the tipping point to super intelligence is estimated at between 2040 and 2090.

It is certain that we are on the brink of groundbreaking technology that

will continue to significantly influence all of our lives and already does

today. In the future, we will interact with AI systems very intimately, be it

in everyday life or in our professional life. As these systems are developed to

improve our life circumstances and to maximise our performance, we should

not give in to the fear of substitution by software. Human-level AI by no means implies the creation of a new intelligent machine species that will successively eliminate us from many areas of life. In fact, it means that we reach the next level of human performance, with AI systems as our vehicle.

This general problem seeker and solver of super intelligence would then

also mean the highest level of maturity of algorithmic support for companies. The vision of the more or less deserted and self-operating company

would become reality. In order to prevent a full loss of control, it would

have to be ensured that humans lay down and monitor the framework and

conditions of the AI-based “learning to learn” system. This also includes the

control of the red OFF switch that is frequently seen as a safety anchor. Yet, a self-learning AI system will also learn to understand such switches and how to disable them. In that case, we would actually run the risk of being mastered by systems sooner or later—hasta la vista, baby!






6.2 AI: The Top 11 Trends of 2018 and Beyond

Besides the development towards super intelligence, there are at present a

multitude of developments in the field of AI. In the following, the key trends that have the greatest impact on business are summarised compactly:

1. AI first: Analogous to the “mobile first” mantra, “AI first” now prevails, particularly at companies such as Facebook, Microsoft and Google: no development without investigating and utilising the AI potentials. At this stage, that is certainly also a sure overvaluation due to the immense hype. At present, a downright arms race is taking place among the AI applications of the GAFA world. The M&A activity in the field of AI is equally interesting and febrile at the same time. Similar to mobile, AI will increasingly become a matter of course in the years to come, so that the adjunct “first” will disappear. In any case, this “AI first” mantra of the digital giants, coupled with the corresponding making available of knowledge and code, will be a push in AI for many other industries and companies.

2. AI will not really become intelligent, yet will nevertheless become increasingly important for business: The discussion about the question as to whether and when AI is really intelligent is as old as it is unresolved. The analogy of neuronal networks suggests the intelligence claim of AI on the basis of the apparent reproduction of the human brain. Yet, even massively parallel neuronal networks do not represent the human brain. To this day, how the brain really works remains unexplored, as does how creativity can actually be generated and reproduced.

Thanks to the immense increase in computing capacities, AI systems

are increasingly creating the impression of human intelligence, because

they are able to interrelate and analyse huge amounts of data in no time at all and, in this way, make good decisions autonomously. A human could never interrelate these huge, heterogeneous and distributed data

sets. Thanks to the AI-based reasoning of these data universes, seemingly

innovative and creative results can also be generated, whereby only existing information—even if immensely large and complex—can be analysed. Even the much-quoted and discussed deep learning is not really

intelligent in this spirit. In the same way, the software that can develop

new software itself is conditioned and determined by the original intelligence of the original developer.

From a business perspective, however, the discussion about real intelligence appears academic. After all, the quasi-intelligence that simulates human intelligence increasingly well helps to



6  Conclusion and Outlook …    

257



support important business processes and tasks or to even perform them

autonomously. For this reason, today’s AI development will change business rapidly and sustainably, despite the absence of a real quantum leap in intelligence.

3. Specific AI systems: The dream of general AI systems independent of functions and sectors will have to be dreamed for a while yet. This general intelligence shall remain the preserve of humans for now. IBM’s Deep Blue was indeed able to beat the former chess world champion Kasparov impressively, but would have great difficulty in defeating the Korean world champion in the board game Go.

In contrast, an increasing number of domain-specific AI systems are

being successfully developed and established: systems for certain functions such as lead prediction in sales, service bots in customer service, or validity forecasts. This narrow intelligence will increasingly support corporate

functions and also replace them.

4. AI inside—embedded AI: AI is being integrated into more and more devices, processes and products. This way, AI is more frequently managing the leap from the AI workbench into business. Examples are the clever

Alexa by Amazon, the self-driving car, the speech-controlled Siri by Apple

or the software that automatically detects, classifies and addresses leads.

The label “AI inside” will thus become more and more a given. After all,

almost any physical object, any device can become smart through AI.

5. Democratisation of AI: Despite the immense potential of AI, only a few companies use AI technologies and methods. This is frequently associated with a lack of access to skills and technologies. Frameworks such as Wit.ai by Facebook and Slack by Howdy facilitate the simple development of AI applications by way of modules and libraries. With

tools like TensorFlow (machine learning) or Bonsai (search as a service),

somewhat more sophisticated AI applications can be programmed.

So-called AI-as-a-service providers go one step further. DATAlovers, for example, provides AI methods for the analysis of business data as a service. The AI services of AWS (Amazon) cover cloud-native machine learning and deep learning for various use cases. Cloud platforms such as

Amazon’s AWS, Google’s APIs or Microsoft Azure additionally enable

the use of infrastructures with good performance to develop and use AI

applications.

6. Methodical trend deep learning: Back to the roots—just more massively: Many examples (e.g. the victory over the Korean world champion

in Go, sales prediction) impressively show the potential of deep learning. The interesting thing about this trend is that the methodical basis


