Not the Invasion They Warned Us About: TikTok and the Continued Erosion of Online Privacy
Gone are the days when the term “invasion” was, in most cases, jovially preceded by the word “alien” and followed by a discussion of the likelihood of extraterrestrial life existing in the cosmos. Instead, if 100 people were surveyed with the question “what’s a type of invasion you fear most?”, as comedically framed by Steve Harvey on Family Feud, the top answer would likely be far less comedic. “That would be ‘privacy’, Steve,” the contestant answers as the top-ranked placard flips over on the board. No amount of perfectly timed jokes can soften the harsh realization that washes over the soundstage.
“Wait, is that the reality?”
It’s a harsh realization we are confronting more every day as social media, along with a litany of applications and hosted software products, invades our daily lives and, in the process, our privacy. As many of our readers are aware, it was recently widely reported that the massively popular social media application TikTok had, seemingly out of nowhere, updated its privacy policy to disclose that TikTok may collect “faceprint” and “voiceprint” biometric data from the content uploaded by users of its platform. Before this change, TikTok was already an invasive application, collecting geographic information, spatial data from content, usage and interaction data, IP addresses, device types, versions, screen resolutions, and advertising identifiers, browser types and versions, personal information including name, email, and birthdate, and, if linked to your phone’s contacts or other social media services, your contacts, friends list, or connections. Now TikTok wants the ability to put an actual face and voice to those other data sets, and it’s… dangerous.
“I’m a TikTok user - what does it all mean?”
For one, it means that where TikTok is legally within its rights to collect such biometric data, which at the time of this writing is virtually every state, it will. TikTok does vaguely commit to limiting permissionless collection where it is required to do so.
However, only a handful of states, namely California, Illinois, New York, Texas, and Washington, have passed legislation regarding biometric data collection that requires explicit permission from the data subject before such collection. While TikTok maintains it will ensure notice is provided and permission is obtained prior to collecting these biometric data sets where applicable, it has yet to outline how permission will be sought, or whether such permission can be revoked by the end user. There have been no updates to TikTok’s terms to reflect a blanket “use of services constitutes permission” provision, nor would that necessarily meet the mark in cases where explicit rather than implicit permission is required.
Additionally, TikTok, like most social media and third-party hosted apps, has not clearly and explicitly identified how these biometric data sets will be used, instead referring to the “How we use your information” section of its privacy policy, which provides purposely generalized definitions broad enough to fit virtually any use case. Based on what is documented in that section, it is assumed that the full gamut of data sets is used to support product improvements and feature roadmapping, targeted marketing, advertising, and various other innocuous data analytics use cases. By defining the use cases so broadly, however, there is no real telling how the biometric data sets themselves may be used. That leaves a very grey area for TikTok to operate in, likely on purpose, providing a flexible, legally defensible position that does not back the company into a corner. Without an explicitly stated purpose for collecting these biometric data sets, it’s impossible to know just what TikTok is doing with them.
“Ok, but should I even care?”
Absolutely! Beyond the lack of understanding of what TikTok may be DOING with your biometric data, the mere fact that TikTok is collecting this data and storing it within its IT environment puts you at risk. Should TikTok be breached, like many companies before it, your data could be obtained by hackers and scammers. And with enough of a digital fingerprint assembled, they can do some pretty terrifying things. We’re not talking merely about identity theft or targeted spear phishing campaigns, but also about more advanced and sinister attacks: creating deepfakes to facilitate impersonation or reputational sabotage, or using stolen voiceprints to abuse systems that authenticate with similar voiceprint technology, such as the mobile providers responsible for administering mobile accounts and SIM cards (i.e., socially engineering a mobile provider employee to facilitate a SIM swap).
A hacker or scammer could also use faceprint and voiceprint data in a plethora of ways to impersonate you or to create very realistic deepfakes and digital personas. Combined with the other information obtainable from TikTok and some rudimentary AI, that data could yield a not-so-easily discerned, fake digital version of any user. These digital fakes could pass at least an initial eye test without much further scrutiny, and that is alarming to say the least.
“How do we fix it?”
With the ever-growing footprint of personal, biometric, and other digital data that websites and apps require from their users, there is a critical need for faster-evolving regulation and legislation aimed at preserving the privacy and sovereignty of the individual. We need legislation that curbs not only how data can be used, or whether permission is required, but also what data should be collectible in the first place. At the end of the day, these companies are commoditizing their users’ data for their sole benefit, while the expense and risks of exposure are wholly outsourced to and subsidized by the user. The current state of affairs, in which users must actively ensure their own privacy (a basic human right), is unscalable and unsustainable. Users must be better protected from the outset, and the only way to ensure that is to impose significant restrictions on data collection and usage by companies seeking to monetize it or use it to their asymmetric benefit in any way.
At Hive Systems, we believe privacy should be the default, not something users must actively maintain through an explicit opt-out. Data collection should always remain explicitly opt-in, except for a well-defined, bare-minimum data set required to deliver the service and the service alone, with the usage of that data explicitly stated. Learn more about data security and privacy with our experts, who can help you implement your own data privacy policy and the data management and security processes that follow from it.
Stay ahead of the latest privacy risks by subscribing to the ACT Digest, where we’ll tell you about what’s going on in the cybersecurity world, and how you can protect yourself, your friends, your family, and your organization.