For the past six years, Volkswagen cheated on emissions testing for its diesel cars: the cars’ computers could detect when they were being tested and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren’t being tested, the cars emitted roughly 40 times the permitted level of pollutants.
Computers give people new ways to cheat, because the cheating lives in the embedded software: the malicious behavior is triggered only when certain expected conditions are present, and the rest of the time the device runs in its normal operating mode. Because the software is “smart” in ways that ordinary objects are not, the cheating can be subtle and much harder to detect.
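To make the pattern concrete, here is a minimal, purely illustrative sketch in C (not Volkswagen’s actual code, and the sensor heuristics are invented for the example) of how a defeat device can work: the firmware guesses from sensor readings that an emissions test is probably in progress and quietly switches the engine controller into a cleaner calibration.

```c
#include <stdbool.h>

/* Hypothetical engine calibrations: "clean" sacrifices performance to cut
 * emissions, "performance" runs the engine normally but pollutes far more. */
typedef enum { CALIBRATION_CLEAN, CALIBRATION_PERFORMANCE } calibration_t;

/* Illustrative heuristic: on a dynamometer test the wheels turn but the
 * steering wheel never moves and traction control is switched off. */
static bool looks_like_emissions_test(double steering_angle_deg,
                                      double vehicle_speed_kmh,
                                      bool traction_control_off)
{
    return steering_angle_deg < 1.0 &&   /* wheel never turns          */
           vehicle_speed_kmh > 0.0 &&    /* but the car is "driving"   */
           traction_control_off;         /* typical dyno configuration */
}

calibration_t select_calibration(double steering_angle_deg,
                                 double vehicle_speed_kmh,
                                 bool traction_control_off)
{
    /* The malicious branch fires only under test-like conditions;
     * the rest of the time the firmware behaves "normally". */
    if (looks_like_emissions_test(steering_angle_deg, vehicle_speed_kmh,
                                  traction_control_off))
        return CALIBRATION_CLEAN;

    return CALIBRATION_PERFORMANCE;
}
```

A handful of lines like these, buried among hundreds of thousands of others, is all it takes, which is exactly why this kind of cheating is so hard to spot from the outside.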
The Internet of Things is coming, and as industries add embedded computers to their devices, it will bring new opportunities for manufacturers to cheat. Light bulbs could appear more energy efficient than they are; temperature sensors could make it look as if food has been stored at safer temperatures than it has; voting machines could appear to work perfectly, except on election day, when they undetectably switch a few percent of votes from one party’s candidates to another’s; electricity meters could quietly add a few cents to every reading; and so on.
Cheating embedded software won’t be stopped by standard computer security measures, because those are designed to prevent outside hackers from breaking into your computers and networks. The automotive analogue would be security software that stops an owner from tweaking his own engine to run faster while emitting more pollutants; the real task is to contend with malfeasance programmed in at the design stage.
Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis by independent investigators; the need for this is obvious, because it is much easier to hide cheating software when the manufacturer can hide the code. Oversight means that analysis cannot be limited to a once-every-few-years government test; ongoing private analysis is necessary as well.
Neither transparency nor oversight is handled well in the software world today: companies routinely fight making their code public and try to muzzle independent security researchers who find problems, citing the proprietary nature of the software. That is a fair complaint, but the public interests of accuracy and safety must outweigh business interests.
Proprietary software is widely used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution and more. And as I said in my other article, “Artificial Intelligence, is really so dangerous?”, we are ceding more control of our lives to “dumb intelligent systems” with poor quality control and deficient testing and homologation procedures. Overall software quality is so bad that products ship with thousands of programming mistakes. Most of them don’t affect normal operation, which is why your software generally works just fine; some of them do, which is why your software occasionally fails and needs constant updates. By making cheating software look like a programming mistake, the cheating appears to be an accident, and unfortunately this kind of deniable cheating is easier than people think.
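To see how deniable such cheating can be, consider this hypothetical utility-meter routine (invented purely for illustration): a single “wrong” rounding choice biases every reading upward, yet in a code review it is indistinguishable from an ordinary, innocent slip.

```c
#include <stdint.h>

/* Hypothetical illustration only: convert a raw sensor count into
 * billable watt-hours. The honest version would round to nearest. */
uint32_t counts_to_watt_hours(uint32_t raw_counts)
{
    const uint32_t COUNTS_PER_WH = 128;

    /* "Bug": rounding up instead of to nearest overbills every reading
     * by up to one watt-hour. Spread over millions of readings it adds
     * up, yet it looks like a routine rounding mistake.
     * (Rounding to nearest would add COUNTS_PER_WH / 2 instead.) */
    return (raw_counts + COUNTS_PER_WH - 1) / COUNTS_PER_WH;
}
```

If this were ever discovered, the company could plausibly call it a defect and ship a patch; proving intent from the code alone would be nearly impossible.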
Malfeasance by software is easier to commit and harder to prove, because fewer people need to know about the conspiracy and the cheat can be put in place long before testing and homologation; and if the “cheatware” remains undetected long enough, it could easily be the case that no one left in the company knows it is there.
From the companies’ point of view, and as a software developer myself, I understand that software algorithms represent a large investment of time and money in research and development, and they can be a company’s most precious (or only) asset; publishing the source code is like asking Coca-Cola or Pepsi to reveal their soda recipes. I also know that project schedules and budgets often work against performing a meticulous software audit.
On one side, transparency and oversight are the only way to verify that software does its job as expected; on the other side is the right to protect intellectual property. A viable middle ground would be for companies to have their software certified by external entities, under a contract that keeps the source code under a non-disclosure agreement but allows the certification entity to fully publish its tests, procedures and results.
In conclusion, we need better verification of the software that controls our lives in this modern world, because the day may come when our lives depend on it.
Julian Bolivar-Galeno is an Information and Communications Technologies (ICT) Architect whose expertise is in telecommunications, security and embedded systems. He works at BolivarTech, focused on decision making, leadership, management and execution of projects oriented toward strong security algorithms, artificial intelligence (AI) research and its application to smart solutions on mobile and embedded technologies, always producing resilient and innovative applications.