may be the short answer, if hackers become able to duplicate vocal-trickery
techniques demonstrated in experiments by university researchers in the
Midwest. Alexa, of course, is the world’s most popular voice interface,
currently controlling Amazon’s 100 million virtual-assistant smart speakers
(and 20,000 other Alexa-compatible devices). ‘Skills’ are applications or
functions performed by Alexa at the user’s command; they can come from Amazon
or from third-party developers. Currently, users can choose from around 60,000
Alexa skills, including more than 1,000 which are designed for business use.
‘Squatting’ works like this: a malicious skill registers an invocation name that is phonetically similar to a legitimate command; when the user speaks the legitimate phrase, the speaker launches the squatting application instead, hijacking the session and redirecting the user without their knowledge.
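As a rough illustration of how two invocation phrases can collide phonetically, here is a minimal Python sketch using the classic Soundex encoding. This is only a stand-in for real speech matching; Amazon’s actual recognition pipeline is proprietary and far more sophisticated, and the skill names below are hypothetical examples.

```python
# Rough sketch: two invocation names that "sound alike" can map to the
# same phonetic code. Soundex is a simple stand-in for real speech
# matching; the skill names below are hypothetical, not real Alexa skills.

def soundex(name: str) -> str:
    """Return a simplified 4-character Soundex code for a phrase."""
    codes = {
        **dict.fromkeys("bfpv", "1"),
        **dict.fromkeys("cgjkqsxz", "2"),
        **dict.fromkeys("dt", "3"),
        "l": "4",
        **dict.fromkeys("mn", "5"),
        "r": "6",
    }
    letters = "".join(c for c in name.lower() if c.isalpha())
    if not letters:
        return ""
    result = []
    prev = codes.get(letters[0], "")
    for ch in letters[1:]:
        code = codes.get(ch, "")  # vowels, h, w, y get no code
        if code and code != prev:
            result.append(code)
        prev = code  # uncoded letters separate repeated codes (simplified rule)
    return (letters[0].upper() + "".join(result) + "000")[:4]

# Phonetically similar invocation names collapse to the same code:
print(soundex("Capital One"))  # C134
print(soundex("Capitol Won"))  # C134
```

In this toy model, a squatting skill named ‘Capitol Won’ encodes identically to ‘Capital One’, which is the essence of the attack: the device cannot tell from sound alone which skill the user intended.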
‘Skill squatting,’ also called ‘voice squatting’ or ‘voice masquerading,’ is
the aural equivalent of the old ‘typosquatting’ attack, which sent inattentive
users who mistyped ‘facebook.com’ or ‘youtube.com’ to an infected or
phishing-style landing page. The real threat of skill squatting is difficult
to gauge, and journalists covering the subject are quick to point out that
recent experiments demonstrate only a proof of concept and the potential for
attack. But companies’
dash toward digital transformation is well underway. And as (ever cheaper and
accessible) voice technology moves deeper into the transformed enterprise, it’s
easy to imagine the security risks from smart assistants accelerating.
Considering or already using smart speakers at work? Contact
the cybersecurity experts at TeamLogic IT today for an objective opinion about
threat assessment and breach prevention.