Those who know me will tell you that I have a love/hate relationship with IoT of any kind. The love part is easy: devices that, added to our daily routines and lifestyles, bring an almost futuristic experience to our collective lives. Then there is the hate side of the argument: devices that are rarely—if ever—built with inherent security, ensuring that cybercriminals everywhere will view them as a giant target.

Perhaps due to this side of my IoT relationship, I was unsurprised when researchers recently discovered a security flaw in Amazon's Alexa-powered smart speakers. In a nutshell, the flaw enabled criminals to gain access to private user information, along with voice files and more. And, to make it worse, it may have allowed them to install new so-called “skills” onto home devices to enable additional information to be captured.

The frustrating part is that this is actually commonplace. Though this may be the first major flaw found in Amazon's voice platform, it also raises the question of what Google, Apple, Sonos, Nokia, and so many more are potentially putting at risk for their users.

So, why is theft of voice files a scary prospect? It seems that in the past few years—as technology has continued to evolve at exponentially growing speeds—some science fiction ideas from decades ago have come to fruition right in front of us. Part of this is the technology known as deepfakes: audio and visual technologies that can replicate people's voices and faces with almost perfect accuracy, to the point that the fakes are undetectable to the human eye or ear.

In fact, this technology has already been used successfully in movies to replicate actors who are no longer with us. Now, think of that on a bigger scale. Politicians, business executives and others could all be exploited to cause damage anywhere in the world. But, Mission Impossible plots aside, the fact that cybercriminals are stealing voice data from regular folks' smart-home devices means that data being used for profit is more than just likely.

These stolen files could be used to sign into accounts automatically, change delivery addresses for online purchases—even have new credit cards issued and sent to any address.

As mentioned, IoT in general can be dangerous. For example, there have been instances where security cameras have been accessed, giving criminals HD video images of individuals, and these can also be exploited. If this is starting to sound far-fetched (I’m sure you know where I’m going with this), please hear me out.

Once your usernames and passwords have been captured, criminals know where you live online. From social media accounts to online shopping and so much more—your digital footprint is already creating a detailed picture of who you are. Let's not forget, it was the theft of usernames and passwords that led to countless Disney+ accounts being compromised shortly after the service launched.

Now, pair that data with voice and video files. If your face and voice can be exploited, criminals can gain access to multiple devices and services. For example, facial and voice recognition tools could be tricked, or AI could be used to make it seem that you took part in a phone conversation that never happened—think of conversations with banks for lines of credit, loans, and so much more.

In the end, I’m not saying don’t use IoT devices. What I do suggest, however, is that you educate yourself and your company on what these devices represent in the grand scheme of things. Knowledge is always the first step in thwarting cybercriminals—as they say, luck favors the prepared.