Do you think we should be careful with artificial intelligence? Do you see a future problem with AI?

Artificial intelligence is not the first extraordinary technology we have had to deal with. There was a time when attitudes toward nuclear energy were also quite lax.

Recently I read a good book about the life of Richard Feynman. It also described extensively his contributions to the development of the atomic bomb. He developed rare cancers later in life, which eventually killed him, but at the time the atomic bomb was being developed there were no safety regulations.

The story tells how he devised simple mathematical tricks to ensure that quantities of fissile material were never stored too close together. He learned this from his colleagues in logistics, who, for security reasons, knew what was stored where, something he himself was not allowed to know at the time. Put containers with enough fissile material too close together and the whole lot blows up.

This was a real risk. The book also tells of a scientist who, without any kind of protection and with just a pair of tongs, brought slightly too much uranium together. He saw the blue glow of the ionising oxygen and knew, just before the assembly went critical and blew up, to pull the pieces apart again. Just in time for his colleagues, just too late for himself: he died several days later from acute radiation sickness.

All of this would be unthinkable today. With the risks now known, and rules in place to avoid them, we have learned to handle the technology appropriately. Nuclear fission is an exact science and a well-manageable tool.

AI is less tangible, and we are now steering it into the world with our bare hands without really knowing what we are doing. It is in its infancy.

As we get better at understanding what it is and what it can be, we also get better at mastering how we apply it.

Perhaps a consolation: what we call AI nowadays is only the training of a statistical system. AI is about predicting behaviour based on behaviour in the past. It is not yet true intelligence; an AI cannot reason yet.

Not to make the story any longer: certainly we should be careful, but I do not see us getting into serious trouble. I am convinced that we can control this.

It depends on what you mean by AI.

Currently, 'AI' mainly refers to data-driven developments around 'deep learning', i.e. complex neural networks.

In addition, AI can also refer to research into knowledge representation (a school of thought from the 1970s).

These forms of artificial intelligence are still very far from what some people, or science-fiction writers, mean by AI.

Should we be cautious with current technology? Of course. Any technology can be used for good or for ill (think of nuclear power, but also of a sharp knife, for example).

But since it is not in principle impossible that we could ever make genuinely intelligent forms of artificial intelligence, consciousness or life, it is not a bad idea to think about the possible ethical implications now.

In current applications there are questions about privacy, and about the fact that an algorithm can make decisions that contain forms of discrimination.
This is because you have to give a self-learning algorithm examples in order for it to learn a certain 'goal'.

For example, if you want AI (a convolutional network) to learn to recognise clothing in photos, it needs examples of that clothing. When we built such an algorithm and then tested it, it could guess fairly well whether a person wore a skirt or trousers, and whether the top was a jacket, sweater, shirt or waistcoat. However, if we gave it a photo of a waistcoat that was open, the algorithm did not recognise it. Why? We had not included a single photo of an open waistcoat among our examples. So we had 'discrimination' against open waistcoats. It could equally be that if you showed it a man wearing a skirt, the algorithm would not recognise that either, because it had only ever seen women wearing skirts as examples.
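The failure described above is not specific to convolutional networks. Here is a minimal sketch (not the actual model, and with made-up feature vectors) using a toy nearest-neighbour classifier to show the general point: a model can only output labels that occur in its training examples, so an unseen class is forced onto the nearest seen one.

```python
# Toy 1-nearest-neighbour "classifier": predicts the label of the closest
# training example. Features and labels below are hypothetical.

def nearest_label(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Hypothetical features: (sleeve_length, buttoned_up, covers_legs)
train = [
    ((0.9, 1.0, 0.0), "jacket"),
    ((0.9, 0.5, 0.0), "sweater"),
    ((0.1, 1.0, 0.0), "waistcoat (closed)"),
    ((0.0, 0.0, 1.0), "skirt"),
]

# An open waistcoat: short sleeves, not buttoned up. No training example
# matches, so the model must pick one of the labels it has seen.
print(nearest_label(train, (0.1, 0.0, 0.0)))  # → sweater, not "waistcoat (open)"
```

The misclassification is baked in before the model ever sees the photo: "waistcoat (open)" simply is not a possible answer.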

This is because such an algorithm uses every correlation present in the examples to decide which garment it is, including correlations we did not immediately realise were in the example data.
Almost all examples of such 'discriminatory' consequences that you sometimes read about in articles are of this nature: the algorithm learns based on what you offer it.

For clothing this does not immediately cause problems, but if it is not about a waistcoat but about skin colour or sex (in applications for job selection or the identification of criminals), then you have a problem.

It is therefore important to handle this carefully: test thoroughly what your algorithm has actually learned, and if it contains discrimination that is at odds with our norms and values, you will have to avoid it, either by giving the algorithm better examples during training, or by adjusting the outcome so that those effects are filtered out.

P.S.: a possibly interesting overview article about the current state of play concerning AI can be found here (I wrote about it): …

Artificial intelligence is a marketing invention. The flag does not cover the cargo. AI has existed for more than fifty years, but was the exclusive playground of academics and other scum :-). They used supercomputers to run their mathematical algorithms, letting machines do certain things that humans do naturally. The essence is mathematics and algorithms, for which supercomputers were needed. Since graphics chips (GPUs, built for games and 3D rendering for films and the like) perform similar calculations in parallel, the academics started using graphics cards. The trend today is towards ASICs, application-specific integrated circuits (chips) developed specially for, for example, convolutional neural networks (CNNs). While a CPU is suited to general-purpose tasks, a CNN ASIC is hundreds of thousands of times faster than a CPU at this workload. As a result, data centres (the cloud) no longer train AI models on server CPUs alone: GPUs and FPGAs have made their appearance, and Amazon, for example, offers Xilinx FPGAs as FaaS (FPGA as a Service). At the 'edge' (inference), in phones and drones for example, ASICs and FPGAs are fast enough to run compressed, trained AI models in real time. DJI drones, for instance, can recognise objects thanks to neural-network chips that are extremely powerful and efficient (low power consumption, because they run on a battery).

Today we talk about, for example, recognising and classifying objects: a bus, a car, a boat. This is pure mathematics, in which models are trained to recognise these objects; that is, the models compute a probability that an object is a boat, a car or a bus, and the most probable outcome is then taken. Mathematics and statistics, defined by the person who determines the desired outcome. The goal is therefore set by humans, not by the machine.
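The "compute a probability per class, take the most probable one" step can be sketched in a few lines. This is a generic illustration with made-up scores, not the internals of any particular product: a softmax turns raw class scores into probabilities, and argmax picks the winner.

```python
import math

def softmax(scores):
    """Turn raw class scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["boat", "car", "bus"]
scores = [1.2, 3.1, 0.4]          # hypothetical raw outputs of a trained model
probs = softmax(scores)

# Take the most probable class, as described above.
best = classes[probs.index(max(probs))]
print(best)  # → car
```

Note that the model never "decides" anything in a human sense: the class list, the training data, and the rule "take the maximum" are all choices made by people.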

Personally, I think attention to AI is important, because it is going to change society drastically. Unfortunately AI is often misrepresented, and that leads to wild stories and doom scenarios. In addition, the battle over AI is dominated by the US and China; Europe as such amounts to nothing in it. If something promising pops up in the EU, it is bought up by the US or China. The cost of this strategic failure by Europe will become evident in the next ten to twenty years. The problem with the euro was long known and yet remains unresolved; I am under no illusion that things will go any differently with AI and smart machines. I would like to be proud of the EU as an EU citizen, but, apart from a feeling of superiority, there is never anything concrete. Einstein supposedly formulated it as follows (I would not put my hand in the fire for the attribution; it is about the content, not the person):

"Two things are infinite: the universe and human stupidity. And of the universe I am not yet entirely sure."

Yes and no.

The advantages are so much greater than the disadvantages. We can automate an enormous number of things that were impossible before. In the pessimistic discussions this is sometimes forgotten.

But I certainly see problems with 'AI' (although it is not always easy to delineate this notion). Most 'AI' today is utterly innocent (again: to the extent that you can speak of AI at all today). However, I can see possible problems, especially in the hands of (certain) governments.

"Our system shows that you have a 200% higher chance of being a drug user, so we are performing a preventive check at your home." These kinds of things really are among the possibilities, and we already see a foreshadowing of them today. If we manage to avoid them (which is absolutely not a given), then we are fine.
