Is Elon Musk right to warn us against artificial intelligence (AI) and robot doom?


This kind of scare-mongering makes me sick.

What is AI right now? Statistics. Mathematics. Data in, desired result out. Calling this "learning" is an overstatement: a great many calculations to do something the brain handles in a moment. OpenAI recently made a particular model unavailable because it was "too dangerous". On Quora (the English-language version) I once looked at what the researchers had actually done. It already starts with the dataset: they scraped from Reddit everything with at least "3 karma", the equivalent of viral cute-kitten videos. They then tested it on Brexit, Miley Cyrus and some other nonsense. Brexit is a very politically loaded discussion where both sides have arguments but no conclusion of true or false can be reached; both sides have relevant arguments and ridiculous ones. Miley Cyrus is pure marketing and gutter journalism. The conclusion is that their GPT-2 model is very good at constructing (predicting the words of) a story that is not always true. Well, how impressive: they trained a model on a pile of text, and out comes some text. The syntax is fine, of course, so: successful experiment! Goodness me. They created a journalist/blogger/opinion maker! Deeply sad to see that spread again as "soon the AI will finish us all off".
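To make concrete how little "intelligence" is involved, here is a minimal sketch (my own illustration, not OpenAI's code) of next-word prediction with a simple bigram frequency table. GPT-2 is vastly larger and uses a neural network, but the principle of predicting the next word from counted statistics is the same; the tiny corpus below is a made-up toy:

```python
from collections import Counter, defaultdict
import random

def train_bigrams(text):
    # Count which word follows which -- the whole "model" is a frequency table
    counts = defaultdict(Counter)
    words = text.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def generate(counts, start, n, rng):
    # Repeatedly pick a next word in proportion to how often it followed the last one
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
story = generate(model, "the", 6, random.Random(1))
print(story)  # locally plausible word sequences, no understanding involved
```

The output looks vaguely like language because the statistics are right, which is exactly the point of the paragraph above: correct syntax, zero comprehension.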

Nothing could be further from the truth. The press lives off exaggerated and out-of-context stories that are meant to scare people. The press has every interest in conflict. CNN became big with war and 9/11. It guarantees gigantic viewing figures and a huge number of website visits, which makes the advertising worth a lot. And that is what it is all about. As an observer you simply go mad from all the idlers who fire off knee-jerk reactions on Twitter and Facebook.

There is nothing here in terms of intelligence. Zero. Nada. Zilch. Nothing. The different types of "learning" (supervised, unsupervised, via rewards, or whatever) are mathematics. Real learning it is not. Recognizing objects, called classification, is done with a CNN (convolutional neural network). A convolution is a measure of similarity (AI terminology is mostly English, so it is hard to render in Dutch). The model gets a set of cat images (a reference set) and that "teaches" the model to classify similar images. It comes down to identifying an object in an image (a certain region of the image) and attaching a probability to it: say, an 88% chance that it is a cat. Again, it is mathematics, more specifically statistics. Chances are it is a cat. If a human looks, he knows whether it is a cat or not; only in very exceptional cases does a person doubt it. But an AI model can only paste a certain percentage on the object. There is virtually never 100% certainty.
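A hedged sketch of the two pieces of mathematics this paragraph mentions: the sliding-window similarity measure, and the step that produces a figure like "88% chance it is a cat". The tiny image, kernel and class scores below are made-up toy values, and the sliding-window operation is technically cross-correlation, which is what CNN layers actually compute in practice:

```python
import math

def softmax(scores):
    # Turn raw class scores into probabilities -- this is where "88% cat" comes from
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def correlate2d(image, kernel):
    # Slide the kernel over the image; each output value measures local similarity
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A tiny "image" containing a diagonal pattern, and a kernel that looks for it
image = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
kernel = [[1, 0], [0, 1]]

fmap = correlate2d(image, kernel)   # largest where the pattern matches
probs = softmax([2.0, 0.1])         # toy scores for ["cat", "not cat"]
print(fmap)
print(probs)  # roughly [0.87, 0.13]: "87% chance it's a cat", never 100%
```

Note that the probabilities always sum to 1 but never hit exactly 1.0 for a single class, which is the point made above: the model only ever pastes a percentage on the object.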

Do we have a robot in the house that makes the beds? That cleans the whole house without a human having to help it? That washes and folds the clothes? One robot that understands every question and knows the right answer, or can make phone calls for us (Google's demo is still a question mark; as soon as something unexpected happens, it goes wrong)? That has breakfast ready in the morning by operating the various appliances? That goes to the bakery and returns with a loaf of bread? That maintains the garden (mowing the lawn, fighting slugs, removing weeds, harvesting, watering), and not just a garden that is a rectangular wooden planter?

The answer is no, by the way.

Killer robots? Yes, drones with remote control. Autonomous killer robots? No.

If you can really make a robot independent (which is still decades away), then I think it is naive to believe that a kill switch in software or in hardware is something that can stop an intelligence. It is a contradiction in terms.

What I am saying is: instead of this nonsense, let us look around us and ask how things stand with our governments. Privacy, anyone? Consumer rights, anyone? Deals with Iran because it is such a cool guy that the West likes to see? Deals with other dictatorships to ward off certain fortune seekers (deposit some money, plus a few vague promises that they won't buy weapons with it)? Or take a look at the enormous decline of technology in Europe: the US and China are competing for the crown, and the EU is nowhere.

First, worry about the clowns who are now in government. Those are a thousand times worse than some supposed AI army that will hunt down all of humanity next year. The Matrix, for example, is really not about the killer robots; it is about the way everyone lives in a kind of virtual world (Disney World) and above all does not look for facts and truths, because those are very uncomfortable. To the point that someone who chose to wake up (red pill, remember) ultimately chooses to go back into the Matrix anyway. We are all fans of Neo, of Luke Skywalker and of Captain Janeway. But in the end you are the one who works, and keeps working, for Darth Vader and the Matrix. Do you only read national newspapers? Do you read internationally, but only the outlets that share your own opinion? Well, then I have news for you, Mr or Mrs 99.99%.

You cannot blame anyone for being ignorant of the world. You can blame someone who never bothered to get well informed.

That Elon Musk warns us is much the same. He believes in a pessimistic scenario and he wants to make us watchful. That seems very good to me. Whether he is really right? I don't expect so. We are not careless, and humanity knows how to tame its technology reasonably well. I think it will go wrong once, but not in the apocalyptic way he predicts. We will learn from it and then do better.

A warning certainly can't hurt. But that does not mean we should go overboard in that direction.

Mechanization once replaced the muscle power of man and animal in agriculture. There were many dangers attached to it. Safety requirements, training courses, quality criteria: all these things arose to overcome the disadvantages so that we could reap the benefits. And it yielded a huge amount of food (whole mountains and lakes of it, too much even).

That goes with every new technology.

So that is not much different with AI. Good quality requirements, monitoring, training and awareness-raising of users and the public at large will ensure that we can enjoy the benefits safely. And that future has already begun.

Just as it turned out in the 19th century that cows along railway lines did not give sour milk, and we thereby gained a tremendous potential in transport, so these steps will be taken just as carefully. To make everything better. Eventually. With trial and error. Just as we will eventually find the once-futuristic energy source of nuclear power plants old-fashioned and unsafe, history corrects itself.

Indeed, a warning is in place. Artificial intelligence is meant to learn, but learning is a complex matter. I have done a lot with speech recognition, an older form of AI based on Hidden Markov Models, where we taught the computer to recognize words (supervised learning). That learning went well up to a certain point; after that it was hard to improve, or worse, you could even train the algorithm "broken", so that recognition became worse.

What if you want to teach a computer to recognize a cat and, consciously or not, feed it pictures of a dog? What do you get then? Still, supervised learning is a way of safeguarding the end result, and it can deliver better results than humans: the computer does not get tired!
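The dog-pictures-labelled-as-cat scenario can be sketched with a toy experiment (my own illustration; the synthetic data, the nearest-centroid "model" and the 60% noise rate are all made-up assumptions, nothing from a real speech or vision system): flip a chunk of the training labels and watch the accuracy on clean test data drop.

```python
import random

rng = random.Random(0)

def make_data(n, label, centre):
    # Synthetic 2-D "feature vectors" clustered around a class centre
    return [((centre + rng.gauss(0, 1), centre + rng.gauss(0, 1)), label)
            for _ in range(n)]

def train_centroids(data):
    # "Learning" is just averaging the points carrying each label
    sums, counts = {}, {}
    for (x, y), lbl in data:
        sx, sy = sums.get(lbl, (0.0, 0.0))
        sums[lbl] = (sx + x, sy + y)
        counts[lbl] = counts.get(lbl, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def accuracy(model, data):
    def predict(p):
        px, py = p
        # Nearest centroid wins -- statistics, not understanding
        return min(model, key=lambda l: (model[l][0] - px) ** 2 + (model[l][1] - py) ** 2)
    return sum(predict(p) == lbl for p, lbl in data) / len(data)

train = make_data(200, "cat", 0.0) + make_data(200, "dog", 4.0)
test = make_data(200, "cat", 0.0) + make_data(200, "dog", 4.0)

clean_acc = accuracy(train_centroids(train), test)

# Mislabel 60% of the training cats as dogs, "consciously or not"
noisy = [(p, "dog" if lbl == "cat" and rng.random() < 0.6 else lbl)
         for p, lbl in train]
noisy_acc = accuracy(train_centroids(noisy), test)

print(clean_acc, noisy_acc)
```

The mislabelled cats drag the "dog" centroid toward the cat cluster, so the model trained on noisy labels misclassifies more clean test points; the same mechanism is how a recognizer can be trained "broken".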

Man distinguishes himself from the machine by having morals (at least the majority of us). Morality could in principle be learned too, but because a computer has no feeling, there is no balance of reason and feeling either. Perhaps a morality program should be compulsory as a module? On the other hand, humanity has come this far partly thanks to all the struggles of the past; pluralism and diversity have brought us here, so we must also beware of the clone effect (with the risk that, like lemmings, we all plunge off a cliff into the sea at once).

In addition, we know the recent example of an AI chat bot on social media that was artificially taught to discriminate and to regard Hitler as a cool guy.

In short: much to learn, and great to see what the possibilities are, but handle with care.

In short, there is still a long way to go. I would at least make sure there is a stop button on all AI-based equipment, so yes, I agree with Elon.

Maybe it would be good if Elon Musk would warn humanity against people like Elon Musk.

Yes, and especially against killer robots.
