Industrial Morality – The New Emergence

I was speaking to a friend at a manufacturing show in October. He mentioned that his young son wanted to invest in the stock market, hoping to pick the next Amazon or Microsoft in its early years. Yet his son commented, "Dad, your generation has already invented all the new stuff." Obviously, the level of technological growth from the early 1960s space program to the present has been staggering. But have we really hit the pinnacle? If not, what's next? I have some ideas.

Artificial Intelligence

I believe the next big move will be in the area of artificial intelligence, covering everything from simple choices between alternatives to complex decisions. Several companies are studying autonomous vehicles, and applications in manufacturing will surely expand as the technology matures. Today, factories are finding ways to automate repetitive jobs using everything from robots to custom-designed systems. But currently all these systems are "amoral."

They operate based on a predetermined program and do not make "intelligent or value decisions." As the technology expands, will the factory of the future employ machines equipped with artificial intelligence that balance ethical decisions? Already, there are communities expressing ethical concerns about self-driving automobiles. How will the automobile choose between two less-than-optimal decisions that could result in property damage or loss of life?
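To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. All of the names and cost figures are invented; the point is only the difference between a predetermined, "amoral" rule and a controller that must weigh the value of outcomes:

```python
# Purely illustrative sketch -- names and cost figures are invented.

def fixed_rule_controller(obstacle_detected: bool) -> str:
    """An 'amoral' controller: the response is fully predetermined
    by its program, with no weighing of outcomes."""
    return "brake" if obstacle_detected else "continue"

def outcome_weighing_controller(options: dict[str, float]) -> str:
    """A value-laden controller: each option carries an assigned cost
    (property damage, risk to life, ...). Picking the minimum is the
    easy part; deciding what the numbers should be is the ethical part."""
    return min(options, key=options.get)

print(fixed_rule_controller(True))                                      # brake
print(outcome_weighing_controller({"swerve": 0.8, "brake_hard": 0.3}))  # brake_hard
```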

As a professional engineer registered in several states, I am required by some licensure boards to complete annual ethics training. Ethics is defined as a code of conduct that drives decisions. In the case of a professional engineer, our most uniform code of ethics is defined by the National Society of Professional Engineers. Morality is often confused with ethics, but the two differ: ethical decisions are based on a set of external rules and codes, while morals are driven by an individual's own internal convictions of right and wrong.

This is not meant to imply that ethical decisions are always black or white, particularly in differing contexts. Since morality is driven by an individual's internal convictions, a moral decision will generally be unwavering unless that individual's beliefs change. So, when it comes to artificial intelligence in the factory, how should, and how will, the machine make choices? Will it be possible for a machine to make a moral choice, or will the machine simply adhere to a code of ethics? What happens when the machine's context changes, and how will it resolve an ethical dilemma that is not a clear-cut yes-or-no decision?

As technology advances, it is perfectly reasonable to assume that machines will be called upon to perform more operations that are currently performed by humans. Artificial intelligence has already incorporated "machine learning" technologies. As machines learn and perfect their operations, it's not far-fetched to believe the next phase will be for machines to make "decisions." There are also jobs and routines that machine technology will enable that are not practical or possible today.

Perhaps ethical and moral dilemmas will not be as prevalent in the factory as they might be in other industries (like health care or transportation). But as machines are utilized this way and begin to make "choices," developers will need to define how to code this intelligence and how to define the ethics, or intrinsic morality, of the system. If this level of artificial intelligence comes to fruition, who will be ultimately accountable for the actions of the machine?
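One can imagine, for instance, writing the system's "code of ethics" down as an explicit, auditable priority ordering rather than burying it in learned behavior. A hypothetical sketch follows; the categories and their ranking are my own assumptions, not drawn from any real standard:

```python
# Hypothetical sketch: ethics as an explicit, auditable priority list.
# The categories and their ordering are illustrative assumptions only.

ETHICAL_PRIORITY = [
    "human_safety",        # most serious category of harm
    "environmental_harm",
    "property_damage",
    "production_loss",     # least serious
]

def severity(consequence: str) -> int:
    """Lower index = more serious. Unknown consequences are treated as
    least serious here; a real system would need a policy for them."""
    if consequence in ETHICAL_PRIORITY:
        return ETHICAL_PRIORITY.index(consequence)
    return len(ETHICAL_PRIORITY)

def choose(actions: dict[str, str]) -> str:
    """Map each candidate action to its worst foreseeable consequence,
    then prefer the action whose consequence is least serious."""
    return max(actions, key=lambda a: severity(actions[a]))

# Stopping the line costs production; continuing risks an operator.
print(choose({"stop_line": "production_loss", "continue": "human_safety"}))
# -> stop_line
```

An explicit ordering like this does not answer the accountability question, but it at least gives regulators and auditors a written rule to point to.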

Trusting Artificial Intelligence

Sometimes I think we can all get out "ahead of our skis" and find ourselves in situations that are bigger than we can handle. There have been many occasions where I felt developers got ahead of themselves with the internet. As an example, consider internet security, especially the protection of personal information. We have heard stories of multiple major data breaches at commercial and governmental levels. If we struggle to control personal data, how will we trust machines to be incorruptible, and trust that their "decisions" will be correct and cannot be manipulated?

On one hand, trusting a person driving on the opposite side of the highway to stay in their lane is a leap of faith. Yet when an automated, artificially intelligent system fails, can we trust that all other systems will do the right thing? Will the system choose to protect property over people? Will operators be able to override the system, or will we see 2001: A Space Odyssey lived out, hearing those memorable words, "I'm sorry, Dave. I'm afraid I can't do that," as the system follows its learned program?
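If operator override is a hard requirement, it can be built as a structural guarantee that sits outside the automated decision path rather than as something the system learns. A minimal sketch, with invented names:

```python
# Minimal sketch of a hard operator override -- names are invented.
# The override lives outside the automated policy, so no amount of
# learned behavior can produce an "I'm afraid I can't do that."

from typing import Callable, Optional

def decide(sensor_state: dict,
           automated_policy: Callable[[dict], str],
           operator_command: Optional[str] = None) -> str:
    if operator_command is not None:
        return operator_command            # a human command always wins
    return automated_policy(sensor_state)  # otherwise defer to the system

# Example: the policy wants to continue, but the operator says stop.
policy = lambda state: "continue"
print(decide({}, policy, operator_command="emergency_stop"))  # emergency_stop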

This could be where we are heading. I'm glad there are people smarter than I am who have chosen to take this on. To my friend's son, I would say there are a lot of opportunities for the next technology jump. Perhaps he will be one of those smart individuals. Hopefully, those who develop these systems will be the best and brightest we can find and will have the right moral compass and well-defined ethics.


About the Author: Paul V. Kumler, P.E. is president of KTM Solutions, an engineering company that services the aerospace and large-scale manufacturing industries. In addition to aerostructures engineering services, KTM Solutions designs and builds tooling supporting a broad clientele and various industries. The company is headquartered in Greer, South Carolina, with remote offices in Charleston, South Carolina. Mr. Kumler serves in several volunteer roles, including the SC Aerospace Advisory Board. Mr. Kumler, a professional engineer, is licensed in Louisiana, South Carolina, Texas and Washington. He is married to Ginger A. Kumler and has two grown children and two grandchildren.
