
Artificial Intelligence

10 Ethical Issues Of Artificial Intelligence And Robotics




AI and robotics are going to shape our future. Below are 10 issues that professionals and researchers need to address in order to design intelligent systems that benefit humanity.

Misinformation and Fake News

The flow of misinformation, together with our natural tendency to favor information that confirms what we already believe (a phenomenon called confirmation bias), is a threat to an informed democracy. Russian hackers influencing the US elections, the Brexit campaign and the Catalonia crisis are examples of how social media can massively spread misinformation and fake news. Recent advances in computer vision make it possible to completely fake a video of President Obama. How institutions will address this threat remains an open question.

Job Displacement

The scientific revolution of the 18th century and the industrial revolution of the 19th marked a complete change in society. For thousands of years before them, economic growth was practically negligible; during the 19th and 20th centuries, the pace of development was remarkable.

In the 19th century, a group in the UK called the Luddites protested against the automation of the textile industry by destroying machinery. Since then, a recurring fear has been that automation and technological advances will produce mass unemployment. Although that prediction has proven incorrect, painful job displacement is a fact. PwC estimates that by 2030 around 30% of jobs will be automated. Under these circumstances, governments and companies should give workers the tools to adapt to these changes, by supporting education and job relocation.


Privacy

The importance of privacy has been all over the news lately due to the Cambridge Analytica scandal, in which data from 87 million Facebook profiles was harvested and used to influence the US election and the Brexit campaign. Privacy is a human right and should be protected against misuse.


Cybersecurity

Cybersecurity is one of the biggest concerns of governments and companies, especially banks. In 2015, a robbery of $1 billion was reported across banks in Russia, Europe and China, and half a billion dollars was stolen from the cryptocurrency exchange Coincheck. AI can help protect against these vulnerabilities, but it can also be used by hackers to find new, sophisticated ways of attacking institutions.

Mistakes of AI

Last month, a woman was hit and killed at night by an Uber self-driving car while walking across a street in the US. Like any other technological system, AI systems can make mistakes. It is a common misconception that robots are infallible and infinitely precise. A common way for some professors in my old lab to greet their robotics PhD students was: "What have you broken?"

Military Robots

There is an ongoing debate about controlling the development of military robots and banning autonomous weapons. An open letter signed by 25,000 AI researchers and professionals calls for a ban on autonomous weapons operating without human supervision, to avoid an international military AI arms race.

Algorithmic Bias

We have to work hard to avoid bias and discrimination when developing AI algorithms. A specific example is face detection using Haar Cascades, which has a lower detection rate for dark-skinned people than for light-skinned people. This happens because the algorithm is designed to find a double-T pattern in a grayscale image of the person’s face, corresponding to the eyebrows, nose and mouth. This pattern is harder to find in a person with dark skin.

Haar Cascades are not racist (how could an algorithm be?), but many people can feel insulted. When programming these algorithms, we need to be mindful of their limitations, be transparent with users by explaining how the algorithm works, or use a technique that is more effective for dark-skinned people.
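To make the limitation concrete, here is a minimal sketch, in plain Python rather than the actual OpenCV implementation, of the kind of Haar-like feature a cascade evaluates: the sum of pixel intensities in one rectangle minus the sum in an adjacent one, computed in constant time via an integral image. The toy image values below are invented for illustration; the point is that when overall contrast is low, the feature response shrinks and the pattern becomes harder to detect.

```python
# Minimal Haar-like feature evaluation, the building block of Haar Cascades.
# Pure Python for illustration; real cascades combine thousands of these.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of all pixels above-left of (x, y)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def region_sum(ii, x, y, w, h):
    """Sum of pixels in the w*h rectangle at (x, y), in O(1)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(img, x, y, w, h):
    """Top-half sum minus bottom-half sum: strongly negative when a dark
    band (e.g. eyes and eyebrows) sits above a brighter band (cheeks)."""
    ii = integral_image(img)
    top = region_sum(ii, x, y, w, h // 2)
    bottom = region_sum(ii, x, y + h // 2, w, h // 2)
    return top - bottom

# High-contrast toy "face patch": dark rows over bright rows.
high_contrast = [[10, 10], [10, 10], [200, 200], [200, 200]]
# Low-contrast patch: the same structure, but a much weaker signal.
low_contrast = [[80, 80], [80, 80], [110, 110], [110, 110]]
```

Running `two_rect_feature` over both patches shows the high-contrast one produces a far larger (more negative) response, which is exactly why the same threshold that fires on one face can miss another.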


Laws and Regulation

Existing laws were not developed with AI in mind; however, that does not mean that AI-based products and services are unregulated. As Brad Smith, Chief Legal Officer at Microsoft, suggests:

“Governments must balance support for innovation with the need to ensure consumer safety by holding the makers of AI systems responsible for harm caused by unreasonable practices”. Policymakers, researchers and professionals should work together to make sure that AI and robotics benefit humanity.


Superintelligence

Some tech leaders have expressed concern about the possible threats of AI. One example is Elon Musk, who claimed that AI is riskier than North Korea. These words drew strong criticism from the scientific community.

Superintelligence generally refers to a state in which a machine starts to recursively improve itself, reaching a point where it surpasses the most intelligent human by orders of magnitude. Some enthusiasts, like Ray Kurzweil, believe that we will reach that state by 2045. Others, like François Chollet, believe that it is impossible.

Robot Rights

Should robots have rights? If we think of a robot as an advanced washing machine, then no. However, if robots were able to have emotions or feelings, the answer is not so clear. One of the pioneers of AI, Marvin Minsky, believed that there is no fundamental difference between humans and machines, and that artificial general intelligence is not possible without robots having self-conscious emotions.

A suggestion in the debate around robot rights is that robots should be granted the right to exist and perform their mission, linked to the duty of serving humans. There is a lot of controversy in this area. Meanwhile, in 2017, the robot Sophia was granted citizenship of Saudi Arabia, and even Will Smith flirted with her.



China plans new era of sea power with unmanned AI submarines



China is planning to upgrade its naval power with unmanned AI submarines that aim to provide an edge over the fleets of their global counterparts.

A report by the South China Morning Post on Sunday revealed Beijing’s plans to build the automated subs by the early 2020s in response to unmanned weapons being developed in the US.

The subs will be able to patrol areas in the South China Sea and Pacific Ocean that are home to disputed military bases.

While the expected cost of the submarines has not been disclosed, they’re likely to be cheaper than conventional submarines as they do not require life-support systems for a human crew. However, without a crew, they’ll also need to be resilient enough to remain at sea without the possibility of onboard repairs.

The XLUUVs (Extra-Large Unmanned Underwater Vehicles) are much bigger than current underwater vehicles, will be able to dock as any other conventional submarine, and will carry a large amount of weaponry and equipment.

As a last resort, they could be used in automated ‘suicide’ attacks that scuttle the vessel but cause damage to an enemy ship, which may or may not be manned.

“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”

The AI element of the submarines will need to carry out many tasks including navigating often unpredictable waters, following patrol routes, identifying friendly or hostile ships, and making appropriate decisions.

It’s the decision-making that will cause the most concern as the AI is being designed not to seek input during the course of a mission.

The international norm being promoted by AI researchers is that any weaponised AI system will require human input to ultimately make a decision. Any news that China is following a policy of creating weaponised AIs that do not require human input should be of global concern.



AI robots will solve underwater infrastructure damage checks



Robots will be paired with a versatile AI that can quickly adapt to unpredictable conditions when examining underwater infrastructure.

Some of a nation’s most vital infrastructure hides beneath the water. The difficulty in accessing most of it, however, makes important damage checks infrequent.

Sending humans down requires significant training, and divers can take several weeks to recover from dives at the often extreme depths involved. There are far more underwater structures than skilled divers to inspect them.

Robots have been designed to carry out some of these dangerous tasks. The problem is until now they’ve lacked the smarts to deal with the unpredictable and rapidly-changing nature of underwater conditions.

Researchers from Stevens Institute of Technology are working on algorithms that enable these underwater robots to inspect and protect infrastructure.

Their work is led by Brendan Englot, Professor of Mechanical Engineering at Stevens.

“There are so many difficult disturbances pushing the robot around, and there is often very poor visibility, making it hard to give a vehicle underwater the same situational awareness that a person would have just walking around on the ground or being up in the air,” says Englot.

Englot and his team are using reinforcement learning to train their algorithms. Rather than relying on an exact mathematical model, the robot performs actions and observes whether they help it attain its goal.

Through trial and error, the algorithm is updated with the collected data to figure out the best ways to deal with changing underwater conditions. This enables the robot to successfully manoeuvre and navigate even in previously unmapped areas.
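The article doesn’t say which reinforcement learning algorithm the Stevens team uses, but the trial-and-error update it describes can be illustrated with tabular Q-learning on a toy navigation task. Everything below (the corridor environment, the rewards, the hyperparameters) is invented for illustration:

```python
import random

# Toy 1-D "corridor" navigation task standing in for the underwater setting.
# States 0..4, goal at state 4; actions: 0 = move left, 1 = move right.

def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == 4 else -0.1  # small cost per move, bonus at goal
    return nxt, reward, nxt == 4

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]  # Q-value table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            # trial-and-error update: nudge Q toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy after training: the learned best action in each state.
policy = [max((0, 1), key=lambda act: q[s][act]) for s in range(5)]
```

No model of the corridor was given to the agent; it discovered that "move right" is best purely from observed rewards, which is the same idea, scaled massively down, as a robot learning to handle currents and poor visibility from experience.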

A robot was recently sent on a mission to map a pier in Manhattan.

“We didn’t have a prior model of that pier,” says Englot. “We were able to just send our robot down and it was able to come back and successfully locate itself throughout the whole mission.”

The robots rely on sonar data, widely regarded as the most reliable for undersea navigation. Sonar works similarly to a dolphin’s echolocation, measuring how long it takes for high-frequency chirps to bounce off nearby structures.
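The time-of-flight arithmetic behind this is simple: range is half the round-trip echo time multiplied by the speed of sound in water, roughly 1,500 m/s in seawater (it varies with temperature, salinity and depth). A minimal sketch:

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s; a typical value, varies with conditions

def echo_range(round_trip_s):
    """Distance to a structure from a sonar chirp's round-trip time.
    The chirp travels out and back, so we halve the round trip."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_s / 2.0
```

For example, a chirp that returns after 20 milliseconds indicates a structure about 15 metres away.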

A drawback of this approach is that the resulting imagery resembles a grayscale medical ultrasound. Englot and his team believe that once a structure has been mapped, a second pass by the robot could use a camera to capture high-resolution images of critical areas.

For now, it’s early days but Englot’s project is an example of how AI is enabling a new era for robotics that improves efficiency while reducing the risks to humans.



Technology Trends That Will Dominate 2018



  1. AI permeation. Artificial intelligence (AI), largely manifesting through machine learning algorithms, isn’t just getting better. It isn’t just getting more funding. It’s being incorporated into a more diverse range of applications. Rather than focusing on one goal, like mastering a game or communicating with humans, AI is starting to make an appearance in almost every new platform, app, or device, and that trend is only going to accelerate in 2018. We’re not at techno-pocalypse levels (and AI may never be sophisticated enough for us to reach that point), but by the end of 2018, AI will become even more of a mainstay in all forms of technology.

  2. Digital centralization. Over the past decade, we’ve seen the debut of many different types of devices, including smartphones, tablets, smart TVs, and dozens of other “smart” appliances. We’ve also come to rely on lots of individual apps in our daily lives, for everything from navigation to changing the temperature of our homes. Consumers are craving centralization: a convenient way to manage everything from as few devices and central locations as possible. Smart speakers are a good step in the right direction, but 2018 may influence the rise of something even better.

  3. 5G preparation. Though tech timelines rarely play out the way we think, it’s possible that we could have a 5G network in place—with 5G phones—by the end of 2019. 5G internet has the potential to be almost 10 times faster than 4G, making it even better than most home internet services. Accordingly, it has the potential to revolutionize how consumers use the internet and how developers think about apps and streaming content. 2018, then, is going to be a year of massive preparation for engineers, developers, and consumers, as they gear up for a new generation of internet.

  4. Data overload. By now, every company in the world has realized the awesome power and commoditization of consumer data, and in 2018, data collection is going to become an even higher priority. With consumers talking to smart speakers throughout their day, and relying on digital devices for most of their daily tasks, companies will soon have access to—and start using—practically unlimited amounts of personal data. This has many implications, including reduced privacy, more personalized ads, and possibly more positive outcomes, such as better predictive algorithms in healthcare.

