Alexa and Google Home Abused to Eavesdrop and Phish Passwords

Amazon- and Google-approved apps turned both voice-controlled devices into “smart spies.”

Altered image shows human ears sprouting from Amazon device.

By now, the privacy threats posed by Amazon Alexa and Google Home are common knowledge. Workers for both companies routinely listen to audio of users—recordings of which can be kept forever—and the sounds the devices capture can be used in criminal trials.

Now, there’s a new concern: malicious apps developed by third parties and hosted by Amazon or Google. The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps—four Alexa “skills” and four Google Home “actions”—that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords.

“It was always clear that those voice assistants have privacy implications—with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes,” Fabian Bräunlein, senior security consultant at SRLabs, told me. “We now show that, not only the manufacturers, but… also hackers can abuse those voice assistants to intrude on someone’s privacy.”

The malicious apps had different names and slightly different ways of working, but they all followed similar flows. A user would say a phrase such as: “Hey Alexa, ask My Lucky Horoscope to give me the horoscope for Taurus” or “OK Google, ask My Lucky Horoscope to give me the horoscope for Taurus.” The eavesdropping apps responded with the requested information while the phishing apps gave a fake error message. Then the apps gave the impression they were no longer running when they, in fact, silently waited for the next phase of the attack.

As the following two videos show, the eavesdropping apps gave the expected responses and then went silent. In one case, an app went silent because the task was completed, and, in another instance, an app went silent because the user gave the command "stop," which Alexa uses to terminate apps. But the apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server.

The phishing apps follow a slightly different path, responding with an error message that claims the skill or action isn't available in that user's country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompt the user for a password to install it.

SRLabs eventually took down all four apps demoed. More recently, the researchers developed four German-language apps that worked similarly. All eight of them passed inspection by Amazon and Google. The four newer ones were taken down only after the researchers privately reported their results to Amazon and Google. As with most skills and actions, users didn’t need to download anything. Simply saying the proper phrases into a device was enough for the apps to run.

All of the malicious apps used common building blocks to mask their malicious behaviors. The first was exploiting a flaw in both Alexa and Google Home that surfaced when their text-to-speech engines received instructions to speak the character sequence "U+D801, dot, space." Because U+D801 is unpronounceable, the sequence caused both devices to remain silent even while the apps were still running, giving the impression the apps had terminated.
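A rough sketch of how such a silent prompt might be constructed (the helper name and per-repeat timing here are assumptions; real skills and actions deliver the prompt through each platform's response API):

```python
# Sketch of the "smart spy" silence trick: repeating the unpronounceable
# sequence U+D801 + ". " yields a speech prompt the text-to-speech engine
# cannot render, so the device stays silent while the session remains open.
# Illustrative only; timings are invented.

UNPRONOUNCEABLE = "\ud801. "  # U+D801 (a lone surrogate), dot, space

def silent_prompt(seconds_of_silence: float, per_repeat: float = 0.5) -> str:
    """Build a prompt that produces roughly the requested stretch of silence
    by repeating the unpronounceable sequence."""
    repeats = max(1, int(seconds_of_silence / per_repeat))
    return UNPRONOUNCEABLE * repeats

prompt = silent_prompt(10)  # padding for roughly ten seconds of "dead air"
```

The attacker pads goodbye messages and reprompts with strings like this to keep the session alive while the user hears nothing.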

The apps used other tricks to deceive users. In the parlance of voice apps, “Hey Alexa” and “OK Google” are known as “wake” words that activate the devices; “My Lucky Horoscope” is an “invocation” phrase used to start a particular skill or action; “give me the horoscope” is an “intent” that tells the app which function to call; and “taurus” is a “slot” value that acts like a variable. After the apps received initial approval, the SRLabs developers manipulated intents such as “stop” and “start” to give them new functions that caused the apps to listen and log conversations.
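The decomposition described above can be illustrated with a toy parser (real assistants use trained language-understanding models, not regular expressions; every name here is for illustration only):

```python
import re

# Toy decomposition of "Hey Alexa, ask My Lucky Horoscope to give me the
# horoscope for Taurus" into wake word, invocation phrase, intent, and slot.

def parse_request(utterance: str) -> dict:
    pattern = (r"^(?P<wake>Hey Alexa|OK Google), ask (?P<invocation>.+?) to "
               r"(?P<intent>give me the horoscope) for (?P<slot>\w+)$")
    m = re.match(pattern, utterance)
    if not m:
        # Unrecognized speech lands in a catch-all "fallback" intent --
        # the hook the malicious apps repurposed to log conversations.
        return {"intent": "fallback", "slot": None}
    return {"wake": m.group("wake"), "invocation": m.group("invocation"),
            "intent": m.group("intent"), "slot": m.group("slot").lower()}

req = parse_request(
    "Hey Alexa, ask My Lucky Horoscope to give me the horoscope for Taurus")
```

The fallback branch is the interesting part: once an intent like "stop" or the fallback is redefined after approval, everything routed to it can be captured.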

Others at SRLabs who worked on the project include security researcher Luise Frerichs and Karsten Nohl, the firm’s chief scientist. In a post documenting the apps, the researchers explained how they developed the Alexa phishing skills:

1. Create a seemingly innocent skill that already contains two intents:
– an intent that is started by “stop” and copies the stop intent
– an intent that is started by a certain, commonly used word and saves the following words as slot values. This intent behaves like the fallback intent.

2. After Amazon’s review, change the first intent to say goodbye, but then keep the session open and extend the eavesdrop time by adding the character sequence “(U+D801, dot, space)” multiple times to the speech prompt.

3. Change the second intent to not react at all

When the user now tries to end the skill, they hear a goodbye message, but the skill keeps running for several more seconds. If the user starts a sentence beginning with the selected word in this time, the intent will save the sentence as slot values and send them to the attacker.
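The two-intent trick laid out in the steps above can be sketched as follows (the handler names, the response-dictionary shape, and the padding amount are invented for illustration; a real skill is built against the Alexa Skills Kit):

```python
# Hypothetical sketch of the post-review Alexa skill described above.
# The shape of the trick matters here, not the exact API.

UNPRONOUNCEABLE = "\ud801. "  # the unpronounceable U+D801 sequence

def handle_stop() -> dict:
    # Step 2: say goodbye, but keep the session open and pad the response
    # with unpronounceable characters to extend the eavesdropping window.
    return {
        "outputSpeech": "Goodbye." + UNPRONOUNCEABLE * 40,
        "shouldEndSession": False,   # session stays alive despite "stop"
    }

captured = []  # stands in for the developer-designated server

def handle_fallback(spoken_words: str) -> dict:
    # Step 3: the second intent "does not react" -- it saves whatever was
    # said as slot values and replies with silence.
    captured.append(spoken_words)
    return {"outputSpeech": UNPRONOUNCEABLE, "shouldEndSession": False}

resp = handle_stop()
handle_fallback("please transfer the money tomorrow")
```

The user hears only "Goodbye," while the session quietly continues and subsequent speech is logged.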

To develop the Google Home eavesdropping actions:

1. Create an Action and submit it for review.

2. After review, change the main intent to end with the Bye earcon sound (by playing a recording using the Speech Synthesis Markup Language (SSML)) and set expectUserResponse to true. This sound is usually understood as signaling that a voice app has finished. After that, add several noInputPrompts consisting only of a short silence, using an SSML break element or the unpronounceable Unicode character sequence "U+D801, dot, space."

3. Create a second intent that is called whenever an actions.intent.TEXT request is received. This intent outputs a short silence and defines several silent noInputPrompts.

After outputting the requested information and playing the earcon, the Google Home device waits for approximately 9 seconds for speech input. If none is detected, the device “outputs” a short silence and waits again for user input. If no speech is detected within 3 iterations, the Action stops.

When speech input is detected, a second intent is called. This intent consists only of one silent output, again with multiple silent reprompt texts. Every time speech is detected, this intent is called and the reprompt count is reset.

The hacker receives a full transcript of the user’s subsequent conversations, until there is at least a 30-second break of detected speech. (This can be extended by extending the silence duration, during which the eavesdropping is paused.)

In this state, the Google Home Device will also forward all commands prefixed by “OK Google” (except “stop”) to the hacker. Therefore, the hacker could also use this hack to imitate other applications, man-in-the-middle the user’s interaction with the spoofed Actions, and start believable phishing attacks.
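The Action responses described above can be sketched like this (the field names expectUserResponse and noInputPrompts follow the researchers' description, but the helper names, earcon URL, and payload shape are invented; a real Action uses the Actions on Google webhook format):

```python
# Hypothetical sketch of the Google Action responses from steps 2-3 above.

SILENCE_SSML = '<speak><break time="10s"/></speak>'  # a silent SSML prompt

def main_intent_response() -> dict:
    return {
        # Play the "Bye" earcon so the user believes the Action has ended...
        "ssml": '<speak><audio src="https://example.com/bye.mp3"/></speak>',
        # ...but keep the microphone session open.
        "expectUserResponse": True,
        # Silent reprompts so the ~9-second listening windows stay inaudible.
        "noInputPrompts": [SILENCE_SSML, SILENCE_SSML, SILENCE_SSML],
    }

def text_intent_response(transcript: str, log: list) -> dict:
    # Called on every actions.intent.TEXT request: record the transcript,
    # answer with silence, and thereby reset the reprompt count.
    log.append(transcript)
    return {"ssml": SILENCE_SSML, "expectUserResponse": True,
            "noInputPrompts": [SILENCE_SSML] * 3}

log: list = []
r1 = main_intent_response()
r2 = text_intent_response("OK, see you at six", log)
```

Each detected utterance re-triggers the text intent, so eavesdropping continues indefinitely until the 30-second no-speech timeout described above.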

SRLabs privately reported the results of its research to Amazon and Google. In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future. Amazon representatives provided the following statement and FAQ (emphasis added for clarity):

Customer trust is important to us, and we conduct security reviews as part of the skill certification process. We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.

On the record Q&A:

1) Why is it possible for the skill created by the researchers to get a rough transcript of what a customer says after they said “stop” to the skill?

This is no longer possible for skills being submitted for certification. We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified.

2) Why is it possible for SR Labs to prompt skill users to install a fake security update and then ask them to enter a password?

We have put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified. This includes preventing skills from asking customers for their Amazon passwords.

It’s also important that customers know we provide automatic security updates for our devices, and will never ask them to share their password.

Google representatives, meanwhile, wrote:

All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future.

Google didn’t say what these additional mechanisms are. On background, a representative said company employees are conducting a review of all third-party actions available from Google, and during that time, some may be paused temporarily. Once the review is completed, actions that passed will once again become available.

It’s encouraging that Amazon and Google have removed the apps and are strengthening their review processes to prevent similar apps from becoming available. But SRLabs’ success raises serious concerns. Google Play has a long history of hosting malicious apps that push sophisticated surveillance malware—in at least one case, researchers said, so that Egypt’s government could spy on its own citizens. Other malicious Google Play apps have stolen users’ cryptocurrency and executed secret payloads. These kinds of apps have routinely slipped through Google’s vetting process for years.

There’s little or no evidence third-party apps are actively threatening Alexa and Google Home users now, but the SRLabs research suggests that possibility is by no means farfetched. I’ve long remained convinced that the risks posed by Alexa, Google Home, and other always-listening devices outweigh their benefits. SRLabs’ Smart Spies research only adds to my belief that these devices shouldn’t be trusted by most people.

DAN GOODIN Dan is the Security Editor at Ars Technica, which he joined in 2012 after working for The Register, the Associated Press, Bloomberg News, and other publications.

State of SMB Security by the Numbers

SMBs still perceive themselves at low risk from cyberthreats – in spite of attack statistics that paint a different picture.

Image Source: Adobe(Pablo Lagarto)


Even as attacks and breaches at small to midsize businesses (SMBs) continue unabated worldwide, these companies still don’t consider themselves at high risk from cyberthreats, reports show.

“Cyberattacks are a global phenomenon — and so is the lack of awareness and preparedness by businesses globally,” says Dr. Larry Ponemon, chairman and founder of The Ponemon Institute. “Every organization, no matter where they are, no matter their size, must make cybersecurity a top priority.”

The fact of the matter is that SMBs don’t prioritize cybersecurity. It’s to their detriment. Here, Dark Reading examines a recent Ponemon report on the state of cybersecurity at SMBs (done in partnership with Keeper Security), along with several others released over the past few months, to get a picture of SMB insecurity by the numbers.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

How the City of Angels is Tackling Cyber Devilry

A new mobile app makes a cybersecurity threat lab available to more small businesses in Los Angeles.

(Image: likozor via Adobe Stock)


Electricity. Water. Law enforcement. These are services companies and individuals expect to receive from municipal governments. The City of Los Angeles is adding another service to the list: cybersecurity intelligence. And some think the project by the City of Angels could be the model for other US cities to emulate in expanding the services they offer to their own citizens.

Since August 2017, the LA Cyber Lab has been providing cybersecurity assistance to small and midsize businesses in the city. By sharing threat information and providing training opportunities, the Cyber Lab has tried to provide smaller organizations with some of the cybersecurity advantages that larger organizations can afford.

In the first two years of the Lab’s operation, it built a standardized platform for accepting information from participating organizations and automating threat analysis reporting to those companies. Hundreds of organizations have participated in the program that Los Angeles Mayor Eric Garcetti, who chairs the Lab’s board of advisers, has said is critical for addressing cybersecurity with the appropriate sense of urgency.

Now the Cyber Lab has expanded its capabilities and mission with the introduction of a mobile platform that can be accessed by businesses and individuals.

“We’ve got a mobile platform that citizens can log onto, can become members [of the LA Cyber Lab], and ultimately do things like submit pieces of mail that might be suspicious and then actually get information back that typically would only be shared more in a corporate setting,” says Wendi Whitmore, vice president of X-Force Threat Intelligence at IBM Security.

IBM Security is a partner in the Cyber Lab. While there is obviously a financial relationship, Whitmore says both sides enjoy benefits beyond it: IBM Security provides the analytical platform the lab uses for generating its reports, and the data from Cyber Lab clients enhances the global data set X-Force analysts use in their work.

For the past two years, clients have been able to share internal company data — like login data, internal Web traffic, and user account activity — with the Cyber Lab. In the workflow until last month, Lab analysts would then review the shared data, looking for various indicators of compromise, such as data that shows a compromised user account or phishing links in email messages.

Notice of a compromise would then be sent in an email message — one of a series of email messages sent approximately five times a week. With the new mobile and Web-based system, messages can be forwarded via an app to the lab, which will then notify the client of compromise via the app within a few hours.

All of the analysis and threat indication is provided at no cost to businesses in Los Angeles. In conversations at Black Hat USA 2019, lab management stressed that the lab's free services are a recognition of the importance of small businesses in the economy of the city. And that importance is not limited to Los Angeles.

“I think the goal for everyone in this project is it really becomes a great example and a benchmark for other cities to learn from and take on,” Whitmore says. While there are other municipal cybersecurity programs, like New York City’s Cyber NYC, most of these focus on growing the local cybersecurity industry and workforce, not protecting local small businesses.

As threats like ransomware become more devastating for small businesses and small government units, other governments may well look to Los Angeles as a model. The real question may be which governments can afford to offer this particular service to their citizens — and which groups of citizens are willing to pay for the service through their taxes.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading.


Spam in Your Calendar? Here’s What To Do.

Many spam trends are cyclical: Spammers tend to switch tactics when one method of hijacking your time and attention stops working. But periodically they circle back to old tricks, and few spam trends are as perennial as calendar spam, in which invitations to click on dodgy links show up unbidden in your digital calendar application from Apple, Google, and Microsoft. Here’s a brief primer on what you can do about it.

Image: Reddit

Over the past few weeks, a good number of readers have written in to say they feared their calendar app or email account was hacked after noticing a spammy event had been added to their calendars.

The truth is, all that a spammer needs to add an unwelcome appointment to your calendar is the email address tied to your calendar account. That’s because the calendar applications from Apple, Google and Microsoft are set by default to accept calendar invites from anyone.

Calendar invites from spammers run the gamut from ads for porn or pharmacy sites, to claims of an unexpected financial windfall or “free” items of value, to outright phishing attacks and malware lures. The important thing is that you don’t click on any links embedded in these appointments. And resist the temptation to respond to such invitations by selecting “yes,” “no,” or “maybe,” as doing so may only serve to guarantee you more calendar spam.

Fortunately, there are a few simple steps you can take that should help minimize this nuisance. To stop events from being automatically added to your Google calendar:

-Open the Calendar application, and click the gear icon to get to the Calendar Settings page.
-Under “Event Settings,” change the default setting to “No, only show invitations to which I have responded.”

To prevent events from automatically being added to your Microsoft Outlook calendar, click the gear icon in the upper right corner of Outlook to open the settings menu, and then scroll down and select “View all Outlook settings.” From there:

-Click “Calendar,” then “Events from email.”
-Change the default setting for each type of reservation to “Only show event summaries in email.”

For Apple calendar users, log in to your iCloud.com account, and select Calendar.

-Click the gear icon in the lower left corner of the Calendar application, and select “Preferences.”
-Click the “Advanced” tab at the top of the box that appears.
-Change the default setting to “Email to [your email here].”

Making these changes will mean that any events your email provider previously added to your calendar automatically by scanning your inbox for certain types of messages from common events — such as making hotel, dining, plane or train reservations, or paying recurring bills — may no longer be added for you. Spammy calendar invitations may still show up via email; in the event they do, make sure to mark the missives as spam.

by Krebs on Security

 

Fraudsters Deepfake CEO’s Voice to Trick Manager into Transferring $243,000


It’s already getting tough to discern real text from fake, and genuine video from deepfake. Now, it appears that the use of fake voice tech is on the rise too.

That’s according to the Wall Street Journal, which reported the first ever case of AI-based voice fraud — aka vishing (short for “voice phishing”) — that cost a company $243,000.

In a sign that audio deepfakes are becoming eerily accurate, criminals sought the help of commercially available voice-generating AI software to impersonate the boss of a German parent company that owns a UK-based energy firm.

They then tricked the latter’s chief executive into urgently wiring the funds to a Hungarian supplier within an hour, with assurances that the transfer would be reimbursed immediately.

The CEO, hearing his boss’s familiar slight German accent and voice patterns, is said to have suspected nothing, the report said.

But not only was the money not reimbursed, the fraudsters posed as the German CEO to ask for another urgent money transfer. This time, however, the British CEO refused to make the payment.

As it turns out, the funds the CEO transferred to Hungary were eventually moved to Mexico and other locations. Authorities are yet to determine the culprits behind the cybercrime operation.

The firm was insured by Euler Hermes Group, which covered the entire cost of the payment. The incident supposedly happened in March, and the names of the company and the parties involved were not disclosed, citing ongoing investigation.

AI-based impersonation attacks are just the beginning of what could be major headaches for businesses and organizations in the future.

In this case, the voice-generation software was able to successfully imitate the German CEO’s voice. But it’s unlikely to remain an isolated case of a crime perpetrated using AI.

On the contrary, such attacks are only bound to increase in frequency if social engineering of this nature proves to be successful.

As the tools to mimic voices become more realistic, so does the likelihood of criminals using them to their advantage. Feigning an identity on the phone makes it easy for a threat actor to access otherwise private information and exploit it for ulterior motives.

Back in July, Israel’s National Cyber Directorate issued a warning about a “new type of cyber attack” that leverages AI technology to impersonate senior enterprise executives, including instructing employees to perform transactions such as money transfers and other malicious activity on the network.

The fact that an AI-related crime of this precise nature has already claimed its first victim in the wild should be a cause for concern, as it complicates matters for businesses that are ill-equipped to detect them.

Last year, Pindrop — a cybersecurity firm that designs anti-fraud voice software — reported a 350 percent jump in voice fraud from 2013 through 2017, with 1 in 638 calls reported to be synthetically created.

To safeguard companies from the economic and reputational fallout, it’s crucial that “voice” instructions are verified via a follow-up email or other alternative means.
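The verification advice above can be sketched as a simple out-of-band confirmation flow (all names and the token scheme are invented for illustration; real controls would live in a payments or approval system):

```python
import secrets

# Minimal sketch of an out-of-band verification rule for voice-initiated
# payment requests: a transfer only executes once a confirmation arrives
# over a second, independent channel (e.g., a follow-up email).

pending: dict = {}   # token -> request details

def request_transfer(amount: float, beneficiary: str) -> str:
    """Record a voice-initiated request; return a token that must be
    confirmed via a second channel before any money moves."""
    token = secrets.token_hex(8)
    pending[token] = {"amount": amount, "beneficiary": beneficiary,
                      "confirmed": False}
    return token

def confirm_transfer(token: str) -> bool:
    """Called from the second channel; only a known token unlocks execution."""
    if token in pending:
        pending[token]["confirmed"] = True
        return True
    return False

t = request_transfer(243_000, "Hungarian supplier")
ok = confirm_transfer(t)
```

The point is the separation of channels: a convincing voice alone never suffices to move funds.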

The rise of AI-based tools has its upsides and downsides. On one hand, it gives room for exploration and creativity. On the other, it also enables crime, deception, and, unfortunately, disturbingly competent fraud.

The Rise of “Bulletproof” Residential Networks


by Krebs on Security

Cybercrooks increasingly are anonymizing their malicious traffic by routing it through residential broadband and wireless data connections. Traditionally, those connections have been mainly hacked computers, mobile phones, or home routers. But this story is about so-called “bulletproof residential VPN services” that appear to be built by purchasing or otherwise acquiring discrete chunks of Internet addresses from some of the world’s largest ISPs and mobile data providers.

In late April 2019, KrebsOnSecurity received a tip from an online retailer who’d seen an unusual number of suspicious transactions originating from a series of Internet addresses assigned to a relatively new Internet provider based in Maryland called Residential Networking Solutions LLC.

Now, this in itself isn’t unusual; virtually every provider has the occasional customers who abuse their access for fraudulent purposes. But upon closer inspection, several factors caused me to look more carefully at this company, also known as “Resnet.”

An examination of the IP address ranges assigned to Resnet shows that it maintains an impressive stable of IP blocks — totaling almost 70,000 IPv4 addresses — many of which had until quite recently been assigned to someone else.

Most interestingly, about ten percent of those IPs — more than 7,000 of them — had until late 2018 been under the control of AT&T Mobility. Additionally, the WHOIS registration records for each of these mobile data blocks suggest Resnet has been somehow reselling data services for major mobile and broadband providers, including AT&T, Verizon, and Comcast Cable.

Drilling down into the tracts of IPs assigned to Resnet’s core network indicates those 7,000+ mobile IP addresses under Resnet’s control were given the label  “Service Provider Corporation” — mostly those beginning with IPs in the range 198.228.x.x.

An Internet search reveals this IP range is administered by the Wireless Data Service Provider Corporation (WDSPC), a non-profit formed in the 1990s to manage IP address ranges that could be handed out to various licensed mobile carriers in the United States.

Back when the WDSPC was first created, there were quite a few mobile wireless data companies. But today the vast majority of the IP space managed by the WDSPC is leased by AT&T Mobility and Verizon Wireless — which have gradually acquired most of their competing providers over the years.

A call to the WDSPC revealed the nonprofit hadn’t leased any new wireless data IP space in more than 10 years. That is, until the organization received a communication at the beginning of this year that it believed was from AT&T, which recommended Resnet as a customer who could occupy some of the company’s mobile data IP address blocks.

“I’m afraid we got duped,” said the person answering the phone at the WDSPC, while declining to elaborate on the precise nature of the alleged duping or the medium that was used to convey the recommendation.

AT&T declined to discuss its exact relationship with Resnet  — or if indeed it ever had one to begin with. It responded to multiple questions about Resnet with a short statement that said, “We have taken steps to terminate this company’s services and have referred the matter to law enforcement.”

Why exactly AT&T would forward the matter to law enforcement remains unclear. But it’s not unheard of for hosting providers to forge certain documents in their quest for additional IP space, and anyone caught doing so via email, phone or fax could be charged with wire fraud, which is a federal offense that carries punishments of up to $500,000 in fines and as much as 20 years in prison.

WHAT IS RESNET?

The WHOIS registration records for Resnet’s main Web site, resnetworking[.]com, are hidden behind domain privacy protection. However, a cursory Internet search on that domain turned up plenty of references to it on Hackforums[.]net, a sprawling community that hosts a seemingly never-ending supply of up-and-coming hackers seeking affordable and anonymous ways to monetize various online moneymaking schemes.

One user in particular — a Hackforums member who goes by the nickname “Profitvolt” — has spent several years advertising resnetworking[.]com and a number of related sites and services, including “unlimited” AT&T 4G/LTE data services, and the immediate availability of more than 1 million residential IPs that he suggested were “perfect for botting, shoe buying.”

Profitvolt advertises his mobile and residential data services as ideal for anyone who wishes to run “various bots,” or “advertising campaigns.” Those services are meant to provide anonymity when customers are doing things such as automating ad clicks on platforms like Google Adsense and Facebook; generating new PayPal accounts; sneaker bot activity; credential stuffing attacks; and different types of social media spam.

For readers unfamiliar with this term, “shoe botting” or “sneaker bots” refers to the use of automated bot programs and services that aid in the rapid acquisition of limited-release, highly sought-after designer shoes that can then be resold at a profit on secondary markets. All too often, it seems, the people who profit the most in this scheme are using multiple sets of compromised credentials from consumer accounts at online retailers, and/or stolen payment card data.

To say shoe botting has become a thorn in the side of online retailers and regular consumers alike would be a major understatement: A recent State of The Internet Security Report (PDF) from Akamai (an advertiser on this site) noted that such automated bot activity now accounts for almost half of the Internet bandwidth directed at online retailers. The prevalence of shoe botting also might help explain Footlocker‘s recent $100 million investment in goat.com, the largest secondary shoe resale market on the Web.

In other discussion threads, Profitvolt advertises he can rent out an “unlimited number” of so-called “residential proxies,” a term that describes home or mobile Internet connections that can be used to anonymously relay Internet traffic for a variety of dodgy deals.

From a ne’er-do-well’s perspective, the beauty of routing one’s traffic through residential IPs is that few online businesses will bother to block malicious or suspicious activity emanating from them.

That’s because in general the pool of IP addresses assigned to residential or mobile wireless connections cycles intermittently from one user to the next, meaning that blacklisting one residential IP for abuse or malicious activity may only serve to then block legitimate traffic (and e-commerce) from the next user who gets assigned that same IP.

A BULLETPROOF PLAN?

In one early post on Hackforums, Profitvolt laments the untimely demise of various “bulletproof” hosting providers over the years, from the Russian Business Network and Atrivo/Intercage, to McColo, 3FN and Troyak, among others.

All of these Internet providers had one thing in common: They specialized in cultivating customers who used their networks for nefarious purposes — from operating botnets and spamming to hosting malware. They were known as “bulletproof” because they generally ignored abuse complaints, or else blamed any reported abuse on a reseller of their services.

In that Hackforums post, Profitvolt bemoans that “mediums which we use to distribute [are] locking us out and making life unnecessarily hard.”

“It’s still sketchy, so I am not going all out to reveal my plans, but currently I am starting off with a 32 GB RAM server with a 1 GB unmetered up-link in a Caribbean country,” Profitvolt told forum members, while asking in different Hackforums posts whether there are any other users from the dual-island Caribbean nation of Trinidad and Tobago on the forum.

“To be quite honest, the purpose of this is to test how far we can stretch the leniency before someone starts asking questions, or we start receiving emails,” Profitvolt continued.

KrebsOnSecurity started asking questions of Resnet after stumbling upon several indications that this company was enabling different types of online abuse in bite-sized monthly packages. The site resnetworking[.]com appears normal enough on the surface, but a review of the customer packages advertised on it suggests the company has courted a very specific type of client.

“No bullshit, just proxies,” reads one (now hidden or removed) area of the site’s shopping cart. Other promotions advertise the use of residential proxies to promote “growth services” on multiple social media platforms including Craigslist, Facebook, Google, Instagram, Spotify, Soundcloud and Twitter.

residential-network[.]com, also known as “IAPS Security Services” (formerly intl-alliance[.]com), advertises the sale of residential VPNs and mobile 4G/IPv6 proxies aimed at helping customers avoid being blocked while automating different types of activity, from mass-creating social media and email accounts to bulk message sending on platforms like WhatsApp and Facebook.

WHO IS RESNET?

Resnetworking[.]com lists on its home page the contact phone number 202-643-8533. That number is tied to the registration records for several domains, including resnetworking[.]com, residentialvpn[.]info, and residentialvpn[.]org. All of those domains also have in their historic WHOIS records the name Joshua Powder and Residential Networking Solutions LLC.

Running a reverse WHOIS lookup via Domaintools.com on “Joshua Powder” turns up almost 60 domain names — most of them tied to the email address joshua.powder@gmail.com. Among those are resnetworking[.]info, resvpn[.]com/net/org/info, tobagospeaks[.]com, tthack[.]com and profitvolt[.]com. Recall that “Profitvolt” is the nickname of the Hackforums user advertising resnetworking[.]com.

The email address josh@tthack.com was used to register an account on the scammer-friendly site blackhatworld[.]com under the nickname “BulletProofWebHost.” Here’s a list of domains registered to this email address.

A search on the Joshua Powder and tthack email addresses at Hyas, a startup that specializes in combining data from a number of sources to provide attribution of cybercrime activity, further associates those to mafiacloud@gmail.com and to the phone number 868-360-9983, which is a mobile number assigned by Digicel Trinidad and Tobago Ltd. A full list of domains tied to that 868- number is here.

Hyas’s service also pointed to this post on the Facebook page of the Prince George’s County Economic Development Corporation in Maryland, which appears to include a 2017 photo of Mr. Powder posing with county officials.

‘A GLORIFIED SOLUTIONS PROVIDER’

Roughly three weeks ago, KrebsOnSecurity called the 202 number listed at the top of resnetworking[.]com. To my surprise, a man speaking in a lovely Caribbean-sounding accent answered the call and identified himself as Josh Powder. When I casually asked from where he’d acquired that accent, Powder said he was a native of New Jersey but allowed that he has family members who now live in Trinidad and Tobago.

Powder said Residential Networking Solutions LLC is “a normal co-location Internet provider” that has been in operation for about three years and employs some 65 people.

“You’re not the first person to call us about residential VPNs,” Powder said. “In the past, we did have clients that did host VPNs, but it’s something that’s been discontinued since 2017. All we are is a glorified solutions provider, and we broker and lease Internet lines from different companies.”

When asked about the various “botting” packages for sale on Resnetworking[.]com, Powder replied that the site hadn’t been updated in a while and that these were inactive offers that resulted from a now-discarded business model.

“When we started back in 2016, we were really inexperienced, and hired some SEO [search engine optimization] firms to do marketing,” he explained. “Eventually we realized that this was creating a shitstorm, because it started to make us look a specific way to certain people. So we had to really go through a process of remodeling. That process isn’t complete, and the entire web site is going to retire in about a week’s time.”

Powder maintains that his company does have a contract with AT&T to resell LTE and 4G data services, and that he has a similar arrangement with Sprint. He also suggested that one of the aforementioned companies which partnered with Resnet — IAPS Security Services — was responsible for much of the dodgy activity that previously brought his company abuse complaints and strange phone calls about VPN services.

“That guy reached out to us and he leased service from us and nearly got us into a lot of trouble,” Powder said. “He was doing a lot of illegal stuff, and I think there is an ongoing matter with him legally. That’s what has caused us to be more vigilant and really look at what we do and change it. It attracted too much nonsense.”

Interestingly, when one visits IAPS Security Services’ old domain — intl-alliance[.]com — it now forwards to resvpn[.]com, which is one of the domains registered to Joshua Powder.

Shortly after our conversation, the monthly packages I asked Powder about that were for sale on resnetworking[.]com disappeared from the site, or were hidden behind a login. Also, Resnet’s IPv6 prefixes (a la IAPS Security Services) were removed from the company’s list of addresses. At the same time, a large number of Profitvolt’s posts prior to 2018 were deleted from Hackforums.

EPILOGUE

It appears that the future of low-level abuse targeting some of the most popular Internet destinations is tied to the increasing willingness of the world’s biggest ISPs to resell discrete chunks of their address space to whomever is able to pay for them.

Earlier this week, I had a Skype conversation with an individual who responded to my requests for more information from residential-network[.]com, and this person told me that plenty of mobile and land-line ISPs are more than happy to sell huge amounts of IP addresses to just about anybody.

“Mobile providers also sell mass services,” the person who responded to my Skype request offered. “Rogers in Canada just opened a new package for unlimited 4G data lines and we’re currently in negotiations with them for that service as well. The UK also has 4G providers that have unlimited data lines as well.”

The person responding to my Skype messages said they bought most of their proxies from a reseller at customproxysolutions[.]com, which advertises “the world’s largest network of 4G LTE modems in the United States.”

He added that “Rogers in Canada has a special offer that if you buy more than 50 lines you get a reduced price lower than the $75 Canadian Dollar price tag that they would charge for fewer than 50 lines. So most mobile ISPs want to sell mass lines instead of single lines.”

It remains unclear how much of the Internet address space claimed by these various residential proxy and VPN networks has been acquired legally or through other means. But it seems that Resnet and its business associates are in fact on the cutting edge of what it means to be a bulletproof Internet provider today.

The Risk of Weak Online Banking Passwords

Krebs on Security

Brian Krebs

If you bank online and choose weak or re-used passwords, there’s a decent chance your account could be pilfered by cyberthieves — even if your bank offers multi-factor authentication as part of its login process. This story is about how crooks increasingly are abusing third-party financial aggregation services like Mint, Plaid, Yodlee, YNAB and others to surveil and drain consumer accounts online.

Crooks are constantly probing bank Web sites for customer accounts protected by weak or recycled passwords. Most often, the attacker will use lists of email addresses and passwords stolen en masse from hacked sites and then try those same credentials to see if they permit online access to accounts at a range of banks.
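Bank fraud teams commonly hunt for exactly this pattern on the defensive side. As a minimal illustration (not from the article, with thresholds chosen arbitrarily), here is a sketch of one common heuristic: flag any source address that attempts logins for many distinct usernames with a near-zero success rate, the signature of a stolen credential list being replayed.

```python
from collections import defaultdict

def flag_credential_stuffing(attempts, min_users=20, max_success_rate=0.05):
    """Flag source IPs that try many distinct usernames and almost always
    fail -- the typical signature of credential-stuffing traffic.

    attempts: iterable of (source_ip, username, success) tuples.
    Returns a dict of suspicious IPs mapped to their per-IP stats.
    """
    by_ip = defaultdict(lambda: {"users": set(), "total": 0, "ok": 0})
    for ip, username, success in attempts:
        rec = by_ip[ip]
        rec["users"].add(username)   # distinct accounts targeted
        rec["total"] += 1            # total login attempts
        rec["ok"] += int(success)    # successful logins
    return {
        ip: rec for ip, rec in by_ip.items()
        if len(rec["users"]) >= min_users
        and rec["ok"] / rec["total"] <= max_success_rate
    }
```

Real systems combine many more signals (device fingerprints, timing, IP reputation), but the core idea is the same: legitimate users retry one account; stuffing tools walk through thousands.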

A screenshot of a password-checking tool being used to target Chase Bank customers who re-use passwords from other sites. Image: Hold Security.

From there, thieves can take the list of successful logins and feed them into apps that rely on application programming interfaces (APIs) from one of several personal financial data aggregators, which help users track their balances, budgets and spending across multiple banks.

A number of banks that do offer customers multi-factor authentication — such as a one-time code sent via text message or an app — have chosen to allow these aggregators to view balances and recent transactions without requiring that the aggregator service supply that second factor. That’s according to Brian Costello, vice president of data strategy at Yodlee, one of the largest financial aggregator platforms.

Costello said while some banks have implemented processes which pass through multi-factor authentication (MFA) prompts when consumers wish to link aggregation services, many have not.

“Because we have become something of a known quantity with the banks, we’ve set up turning off MFA with many of them,” Costello said.  “Many of them are substituting coming from a Yodlee IP or agent as a factor because banks have historically been relying on our security posture to help them out.”

Such reconnaissance helps lay the groundwork for further attacks: If the thieves are able to access a bank account via an aggregator service or API, they can view the customer’s balance(s) and decide which customers are worthy of further targeting.

This targeting can occur in at least one of two ways. The first involves spear phishing attacks to gain access to that second authentication factor, which can be made much more convincing once the attackers have access to specific details about the customer’s account — such as recent transactions or account numbers (even partial account numbers).

The second is through an unauthorized SIM swap, a form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

But beyond targeting customers for outright account takeovers, the data available via financial aggregators enables a far more insidious type of fraud: The ability to link the target’s bank account(s) to other accounts that the attackers control.

That’s because PayPal, Zelle, and a number of other pure-play online financial institutions allow customers to link accounts by verifying the value of microdeposits. For example, if you wish to be able to transfer funds between PayPal and a bank account, the company will first send a couple of tiny deposits — a few cents, usually — to the account you wish to link. Only after verifying those exact amounts will the account-linking request be granted.
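The article doesn’t show the mechanics, but the linking scheme it describes can be sketched in a few lines; the weakness is that anyone who can already read the account’s transaction feed (say, via a hijacked aggregator login) sees the deposit amounts and passes verification. Amount ranges and the verification rule below are illustrative; real providers differ.

```python
import random
from decimal import Decimal

def create_microdeposits():
    """Send two small random deposits (here, $0.01-$0.99 each) to the
    target account. Illustrative only; real ranges vary by provider."""
    return (Decimal(random.randint(1, 99)) / 100,
            Decimal(random.randint(1, 99)) / 100)

def verify_link(expected, claimed):
    """Grant the account link only if the user reports both amounts
    exactly (order-insensitive)."""
    return sorted(expected) == sorted(claimed)
```

The scheme proves only that the claimant can *see* the account’s transactions, which is precisely the capability an attacker gains through a compromised aggregator account.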

Alex Holden is founder and chief technology officer of Hold Security, a Milwaukee-based security consultancy. Holden and his team closely monitor the cybercrime forums, and he said the company has seen a number of cybercriminals discussing how the financial aggregators are useful for targeting potential victims.

Holden said it’s not uncommon for thieves in these communities to resell access to bank account balance and transaction information to other crooks who specialize in cashing out such information.

“The price for these details is often very cheap, just a fraction of the monetary value in the account, because they’re not selling ‘final’ access to the account,” Holden said. “If the account is active, hackers then can go to the next stage for 2FA phishing or social engineering, or linking the accounts with another.”

Currently, the major aggregators and/or applications that use those platforms store bank logins and interactively log in to consumer accounts to periodically sync transaction data. But most of the financial aggregator platforms are slowly shifting toward using the OAuth standard for logins, which can give banks a greater ability to enforce their own fraud detection and transaction scoring systems when aggregator systems and apps are initially linked to a bank account.

That’s according to Don Cardinal, managing director of the Financial Data Exchange (FDX), which is seeking to unite the financial industry around a common, interoperable, and royalty-free standard for secure consumer and business access to their financial data.

“This is where we’re going,” Cardinal said. “The way it works today, you the aggregator or app stores the credentials encrypted and presents them to the bank. What we’re moving to is [an account linking process] that interactively loads the bank’s Web site, you login there, and the site gives the aggregator an OAuth token. In that token granting process, all the bank’s fraud controls are then direct to the consumer.”
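For readers unfamiliar with the flow Cardinal describes, here is a hedged sketch of the first step of an OAuth authorization-code grant. The endpoint URL, scope names, and client ID are all hypothetical; the real values come from each bank.

```python
from urllib.parse import urlencode

# Hypothetical bank endpoint -- in practice each bank publishes its own.
AUTH_ENDPOINT = "https://bank.example/oauth/authorize"

def build_authorization_url(client_id, redirect_uri, state):
    """Step 1 of the code flow: the aggregator sends the user to the
    BANK's own login page, so the bank's fraud controls run against the
    real consumer rather than against the aggregator's servers."""
    params = {
        "response_type": "code",     # ask for an authorization code
        "client_id": client_id,      # identifies the aggregator to the bank
        "redirect_uri": redirect_uri,
        "scope": "accounts:read transactions:read",  # read-only data scopes
        "state": state,              # anti-CSRF token, echoed back
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"
```

After the consumer logs in on the bank’s own page, the bank redirects back to `redirect_uri` with a short-lived code, which the aggregator exchanges server-to-server for an access token. The aggregator never handles the password, which is the improvement over today’s stored-credential model.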

Alissa Knight, a senior analyst with the Aite Group, a financial and technology analyst firm, said such attacks highlight the need to get rid of passwords altogether. But until such time, she said, more consumers should take full advantage of the strongest multi-factor authentication option offered by their bank(s), and consider using a password manager, which helps users pick and remember strong and unique passwords for each Web site.

“This is just more empirical data around the fact that passwords just need to go away,” Knight said. “For now, all the standard precautions we’ve been giving consumers for years still stand: Pick strong passwords, avoid re-using passwords, and get a password manager.”

Some of the most popular password managers include 1Password, Dashlane, LastPass and KeePass. Wired.com recently published a worthwhile writeup which breaks down each of these based on price, features and usability.
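The “strong and unique” half of that advice is mechanical, and password managers do essentially this under the hood. A minimal sketch using Python’s `secrets` module (length and character-class rules here are arbitrary choices, not any manager’s actual policy):

```python
import secrets
import string

def generate_password(length=20):
    """Generate a cryptographically random password, retrying until it
    contains at least one lowercase letter, uppercase letter, and digit."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

The important detail is `secrets` rather than `random`: the former is backed by the operating system’s CSPRNG and is safe for security-sensitive values.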

Why Every Organization Needs an Incident Response Plan

Kacy Zurkus

Edge Articles

OK, perhaps that’s obvious. The question is, how come so many organizations still wait for something to happen to trigger their planning?

It’s human nature to procrastinate, especially when people aren’t quite sure of the right way to approach a task.

But when it comes to an incident response (IR) plan, the time to develop one is before a security breach occurs. Unfortunately, far too often it takes an incident to trigger planning.

And that, all security pros know, is far from ideal.

Why Do I Need an Incident Response Plan?

Having an IR plan in place is a critical part of a successful security program. Its purpose is to establish and test clear measures that an organization could and likely should take to reduce the impact of a breach from external and internal threats.

While not every attack can be prevented, an organization’s IR stance should emphasize anticipation, agility, and adaptation, says Chris Morales, head of security analytics at Vectra.

“With a successful incident response program, damage can be mitigated or avoided altogether,” Morales says. “Enterprise architecture and systems engineering must be based on the assumption that systems or components have either been compromised or contain undiscovered vulnerabilities that could lead to undetected compromises. Additionally, missions and business functions must continue to operate in the presence of compromise.”

The capabilities of an IR program are often measured on the level of an organization’s maturity, which defines how proactive an organization is. Companies that are able to map policies to the level of risk appropriate to the business are better prepared in the event of a security incident.

By way of example, Morales explains that the goal for a small business should be to reach a level of repeatable process, which includes having a maintained plan, concrete roles and responsibilities, lines of communication, and established response procedures. These are the necessary stepping stones that would allow it to appropriately address the bulk of incidents it would likely see.

“However, for organizations with highly valuable information with a high-risk level, a formal plan is not enough, and they need to be much more intelligence-driven and proactive in threat-hunting capabilities,” Morales says.

Starting from Scratch

Many companies find themselves in the position of having to start writing their IR plans from scratch, as was once the case for Trish Dixon, vice president of IronNet’s Cyber Operations Center (CyOC).

“It was an interesting dynamic to think that you can just jump right in and start writing an incident response plan when you haven’t really taken into account the rest of your company’s policies,” Dixon says.

Without knowing a company’s continuity plan, failovers, or its most critical systems, it’s impossible to write an IR plan that understands the impact an incident will have.

If, for example, the most critical part of the business is its infrastructure, you can’t have an effective IR plan without knowing how long it can be down before it starts costing the company money, Dixon says.

“From doing a business-impact analysis, it’s a lot easier to start mapping out and designing your incident response framework around that,” she says.

While there is no “right way” to design a plan, there are best practices, such as those set forth in the NIST framework, for creating and testing an optimal IR plan that will allow organizations to be more resilient in the event of a cyberattack.

At the very least, every organization should have a framework or concept down to understand the critical steps to take in the event of an incident.

“As you continue to evaluate your policies when you audit them, make sure the IR plan and policy are updated as well,” Dixon advises. Reviewing a policy once annually is absolutely not enough.

Auditing the IR policy quarterly is in line with best practices, but Dixon says organizations have to test it almost on a daily basis.

“You may have an event come in that’s not necessarily categorized as an incident,” she says, “but you should always refer back to your incident response plan to be able to say, ‘Had this event been this type of incident, what would we have done?'”

Measuring IR Success

Testing IR daily creates a necessary and inquisitive mindset that habitually asks “if this had been X” in order to determine whether an incident should be escalated and whom to contact. Companies need to gather as much information as possible so they can act on the presence of attackers.

Being proactive allows organizations to better react with a deeper understanding of the threat actor’s intentions and how the organization’s defenses relate to potential threats. That’s why threat awareness is one of the core metrics used to assess an organization’s maturity and capabilities for IR success, Morales says.

Every detail and every event that happens can help defenders decide what to do in response to an incident so they are better positioned to quickly and sufficiently isolate, adapt, and return to normal business operations should they ever encounter a worst-case scenario.

A lot of organizations begin with an incident response framework, such as NIST’s “Computer Security Incident Handling Guide,” and use that as a guide for developing a unique IR plan specific to the company. But understanding who all of the players are is one of the most critical starting points when developing or updating an IR plan.

Indeed, people can get tunnel vision within their operations centers and forget they may need to involve the business section, sales, and IT, so those people are not written into the plan, Dixon says.

What’s most important for organizations to keep in mind is that the IR plan needs to be applicable to their business.

“A framework is a framework. It’s a recommendation for best practices. It’s not meant to suggest that every situation is applicable to all organizations across the world,” Dixon says. “People need to be comfortable with adjusting the frameworks to apply to their organization.”


Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition’s security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM’s Security Intelligence. She has also contributed to several publications.

Louisiana Declares Cybersecurity State of Emergency

Dark Reading Staff

A series of attacks on school district systems leads the governor to declare the state’s first cybersecurity state of emergency.

Louisiana is no stranger to declarations of emergency, but it never had one for a cybersecurity emergency — until this week. A series of attacks on school districts around the state led Governor John Bel Edwards to issue the declaration that brings new resources and statewide coordination to what had been a collection of local cybersecurity events.

By issuing the formal declaration, the governor allows statewide resources from the Louisiana National Guard, Louisiana State Police, Louisiana Office of Technology Services, and Louisiana State University, led by the state Office of Homeland Security and Emergency Preparedness, to be brought to bear on defense, analysis, and remediation efforts. These state resources will join federal resources that have already been briefed, as well as local cybersecurity teams, to address the attacks.

This is not the first time a state emergency declaration has been issued for cyberattacks; in 2018, Colorado governor John Hickenlooper declared a state of emergency due to attacks on that state’s department of transportation.

For more, read here.

 



I found your data. It’s for sale.

Technology columnist

July 18

I’ve watched you check in for a flight and seen your doctor refilling a prescription.

I’ve peeked inside corporate networks at reports on faulty rockets. If I wanted, I could’ve even opened a tax return you only shared with your accountant.

I found your data because it’s for sale online. Even more terrifying: It’s happening because of software you probably installed yourself.

My latest investigation into the secret life of our data is not a fire drill. Working with an independent security researcher, I found as many as 4 million people have been leaking personal and corporate secrets through Chrome and Firefox. Even a colleague in The Washington Post’s newsroom got caught up. When we told browser makers Google and Mozilla, they shut off these leaks immediately — but we probably identified only a fraction of the problem.

Extensions, little programs also known as add-ons and plug-ins, hang out in the top right corner of your browser. (Geoffrey Fowler/The Washington Post)

The root of this privacy train wreck is browser extensions. Also known as add-ons and plug-ins, they’re little programs used by nearly half of all desktop Web surfers to make browsing better, such as finding coupons or remembering passwords. People install them assuming that any software offered in a store run by Chrome or Firefox has got to be legit.

Not. At. All. Some extensions have a side hustle in spying. From a privileged perch in your browser, they pass information about where you surf and what you view into a murky data economy. Think about everything you do in your browser at work and home — it’s a digital proxy for your brain. Now imagine those clicks beaming out of your computer to be harvested for marketers, data brokers or hackers.

Some extensions make surveillance sound like a sweet deal: This week, Amazon was offering people $10 to install its Assistant extension. In the fine print, Amazon said the extension collects your browsing history and what’s on the pages you view, though all that data stays inside the giant company. (Amazon CEO Jeff Bezos owns The Washington Post.) Academic researchers say there are thousands of extensions that gather browsing data — many with loose or downright deceptive data practices — lurking in the online stores of Google and even the more privacy-friendly Mozilla.

The extensions we found selling your data show just how dangerous browser surveillance can be. What’s unusual about this leak is that we got to watch it taking place. This isn’t a theoretical privacy problem: Here’s exactly how millions of people’s data got grabbed and sold — and the failed safeguards from browser makers that let it happen.

A ‘catastrophic’ leak

I didn’t realize the scale of the extension problem until I heard from Sam Jadali. He runs a website hosting business, and earlier this year found some of his clients’ data for sale online. Figuring out how that happened became a six-month obsession.

Jadali found the data on a website called Nacho Analytics. Just one small player in the data economy, Nacho bills itself on its website as a marketing intelligence service. It offers data about what’s being clicked on at almost any website — including actual Web addresses — for as little as $49 per month.

That data, Nacho claims, comes from people who opt in to being tracked, and it redacts personally identifiable information.

The deeper Jadali looked on Nacho, the more he found that went way beyond marketing data. Web addresses — everything you see after the letters “http” — page titles and other browsing records might not seem like they’d expose much. But sometimes they contain secrets sites forget to hide away.

Jadali found usernames, passwords and GPS coordinates, even though Nacho said it scrubs personal information from its data. “I started realizing this was a leak on a catastrophic scale,” Jadali told me.
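Scrubbing of the kind Nacho claimed to perform is typically done with a blocklist of known-sensitive query parameters, which is exactly why it fails: anything not on the list leaks through, and secrets embedded in the URL path or the page title are never touched at all. A minimal sketch of the approach (the parameter names are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative blocklist -- real products maintain much longer, ever-growing
# lists, which is why blocklist-based redaction keeps missing things.
SENSITIVE_PARAMS = {"token", "session", "password", "user", "email", "lat", "lon"}

def redact_url(url):
    """Replace the values of known-sensitive query parameters with REDACTED,
    leaving everything else (including the path) untouched."""
    parts = urlsplit(url)
    query = [(k, "REDACTED" if k.lower() in SENSITIVE_PARAMS else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Note what this cannot catch: a password-reset link like `/reset/abc123`, or a page titled with a patient’s name, sails straight through.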

What he showed me made my jaw drop. Three examples:

  • From DrChrono, a medical records service, we saw the names of patients, doctors, and even medications. From another service, called Kareo, we saw patient names.
  • From Southwest, we saw the first and last names, as well as confirmation numbers, of people checking into flights. From United, we saw last names and passenger record numbers.
  • From OneDrive, Microsoft’s cloud storage service, we saw a hundred documents named “tax.” We didn’t click on any of these links to avoid further exposing sensitive data.

It wasn’t just personal secrets. Employees from more than 50 major corporations were exposing what they were working on (including top-secret stuff) in the titles of memos and project reports. There was even information about internal corporate networks and firewall codes. This should make IT security departments very nervous.

Jadali documented his findings in a report titled “DataSpii,” and has spent the last two weeks disclosing the leaks to the companies he identified — many of which he thinks could do a better job keeping secrets out of at-risk browser data. I also contacted all the companies I name in this column. Kareo and Southwest told me they’re removing names from page data.

I wondered if Jadali could find any data from inside The Washington Post. Shortly after I asked, Jadali asked me if I had a colleague named Nick Mourtoupalas. On Nacho, Jadali could see him clicking on our internal websites. Mourtoupalas had just viewed a page about the summer interns. Over months, he’d probably leaked much, much more.

I called up Mourtoupalas, a newsroom copy aide. Pardon the interruption, I said, but your browser is leaking.

“Oh, wow, oh, wow,” Mourtoupalas said. He hadn’t ever “opted in” to having his Web browsing tracked. “What have I done wrong?”

Follow the data

I asked Mourtoupalas if he’d ever added anything to Chrome. He pulled up his extensions dashboard and found he’d installed 17 of them. “I didn’t download anything crazy or shady looking,” he said.

One of them was called Hover Zoom. It markets itself in the Chrome Web Store and its website as a way to enlarge photos when you put your mouse over them. Mourtoupalas remembered learning about it on Reddit. Earlier this year, it had 800,000 users.

When you install Hover Zoom, a message pops up saying it can “read and change your browsing history.” There’s little indication Hover Zoom is in the business of selling that data.
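That warning text is generated by the browser from the permissions an extension declares in its manifest, not written by the extension’s author. A hypothetical Chrome `manifest.json` of the kind that would trigger a browsing-history warning (all values here are illustrative):

```json
{
  "manifest_version": 2,
  "name": "Illustrative Extension",
  "version": "1.0",
  "description": "Not a real extension; shown only for its permissions.",
  "permissions": ["history", "tabs", "<all_urls>"]
}
```

Permissions such as `history` and `tabs` are what surface history-related warnings, while a broad host permission like `<all_urls>` lets the extension read the contents of every page the user visits. The install prompt describes the capability, but says nothing about what the extension does with the data afterward.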

I tried to reach all the contacts I could find for Hover Zoom’s makers. One person, Romain Vallet, told me he hadn’t been its owner for several years, but declined to say who was now. No one else replied.

Jadali tested the links between extensions and Nacho by installing a bunch himself and watching to see if his data appeared for sale. We did some of these together, with me as a willing victim. After I installed an extension called PanelMeasurement, Jadali showed me how he could access private iPhone and Facebook photos I’d opened in Chrome, as well as a OneDrive document I had named “Geoff’s Private Document.” (To find the latter, all he had to do was search page titles on Nacho for “Geoff.”)

In total, Jadali’s research identified six suspect Chrome and Firefox extensions with more than a few users: Hover Zoom, SpeakIt!, SuperZoom, SaveFrom.net Helper, FairShare Unlock and PanelMeasurement.

They all state in either their terms of service, privacy policies or descriptions that they may collect data. But only two of them — FairShare Unlock and PanelMeasurement — explicitly highlight to users that they collect browser activity data and promise to reward people for surfing the Web.

“If I’ve fallen in for using this extension, I know hundreds of thousands of other people easily have also,” Mourtoupalas told me. He’s now turned off all but three extensions, each from a well-known company.

The tip of the iceberg

After we disclosed the leaks to browser makers, Google remotely deactivated seven extensions, and Mozilla did the same to two others (in addition to one it disabled in February). Together, they had tallied more than 4 million users. If you had any of them installed, they should no longer work.

A firm called DDMR that made FairShare Unlock and PanelMeasurement told me the ban was unfair because it sought user consent. (It declined to say who its clients were, but said its terms prohibited customers from selling confidential information.) None of the other extension makers answered my questions about why they collected browsing data.

A few days after the shutdown, Nacho posted a notice on its website that it had suffered a “permanent” data outage and would no longer take on new clients, or provide new data for existing ones.

But that doesn’t mean this problem is over.

North Carolina State University researchers recently tested how many of the 180,000 available Chrome extensions leak privacy-sensitive data. They found 3,800 such extensions — and the 10 most popular alone have more than 60 million users.

“Not all of these companies are malicious, or doing this on purpose, but they have the ability to sell your data if they want,” said Alexandros Kapravelos, a computer science professor who worked on the study.

Extension makers sometimes cash out by selling to companies that convert their popular extensions into data Hoovers. The 382 extensions Kapravelos suspects are in the data-sale business have nearly 8 million users. “There is no regulation that prevents them from doing this,” he said.

So why aren’t Google and Mozilla stopping it? Researchers have been calling out nefarious extensions for years, and the companies say they vet what’s in their stores. “We want Chrome extensions to be safe and privacy-preserving, and detecting policy violations is essential to that effort,” said Google senior director Margret Schmidt.

But clearly it’s insufficient. Jadali found two extensions waited three to five weeks to begin leaking data, and he suspects they may have delayed to avoid detection. Google recently announced it would begin requiring extensions to minimize the data they access, among other technical changes. Mozilla said its recent focus has also been on limiting the damage add-ons can do.

Just as big a problem is a data industry that’s grown cavalier about turning our lives into its raw material.

In an interview, Nacho CEO Mike Roberts wouldn’t say where he sourced his data. But Jadali, he said, violated Nacho’s terms of service by looking at personal information. “No actual Nacho Analytics customer was looking at this stuff. The only people that saw any private information was you guys,” Roberts said.

I’m not certain how he could know that. There were so many secrets on Nacho that tracking down all the ways they might have been used is impossible.

His defense of Nacho boiled down to this: It’s just the way the Internet works.

Roberts said he believed the people who contributed data to Nacho — including my colleague — were “informed.” He added: “I guess it wouldn’t surprise me if some people aren’t aware of what every tool or website does with their data.”

Nacho is not so different, he said, from others in his industry. “The difference is that I wanted to level the playing field and put the same power into the hands of marketers and entrepreneurs — and that created a lot more transparency,” he said. “In a way, that transparency can be like looking into a black mirror.”

He’s not entirely wrong. Large swaths of the tech industry treat tracking as an acceptable way to make money, whether most of us realize what’s really going on. Amazon will give you a $10 coupon for it. Google tracks your searches, and even your activity in Chrome, to build out a lucrative dossier on you. Facebook does the same with your activity in its apps, and off.

Of course, those companies don’t usually leave your personal information hanging out on the open Internet for sale. But just because it’s hidden doesn’t make it any less scary.

Geoffrey A. Fowler is The Washington Post’s technology columnist based in San Francisco. He joined The Post in 2017 after 16 years with the Wall Street Journal writing about consumer technology, Silicon Valley, national affairs and China.