Working Remote Without Privacy Violations

COVID-19 dramatically accelerated the shift to remote work, and the trend toward working remotely will likely continue after the outbreak is a distant memory. However, the privacy and cybersecurity implications of a remote workforce are often unknown, ignored, or both. So now what? With more of your employees working off-site, how do you protect your company against violations of state, federal and international privacy law?

The first step is to review your privacy policy. Is it too lax? Is it too strict? Either extreme creates its own issues, such as inefficiency for remote workers or potential data breaches. The policy must contain clear penalties for violations, and those violations must be tracked and the penalties enforced for the privacy policy to fulfill its purpose.

The second step is to make sure that every employee, vendor and client is aware of the privacy policy and, where appropriate, commits to it with either a physical or digital signature. These acknowledgements must be stored and organized by privacy policy version. As the privacy policy is amended from time to time, it is important to determine whether a fresh acknowledgement is required from your employees, vendors and clients.

The third step is to train employees on how to abide by the privacy policy. A policy is useless if no one understands it or knows how to apply it to their employment duties. With remote workers this becomes even more critical, as data that may permissibly be left on a desk or sent in an email on a secure office network may not be appropriate in a remote working environment. Remote workers need to use a virtual private network (VPN) to access company systems, and companies should verify that each remote worker actually does so.
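One lightweight way to back up that verification is to check, at connection time, whether a worker's traffic arrives from the company's VPN egress range. The sketch below is a minimal illustration; the egress networks listed are hypothetical placeholders (RFC 5737 documentation ranges), and a real deployment would also rely on the VPN concentrator's own logs and device posture checks.

```python
import ipaddress

# Hypothetical egress ranges assigned to the corporate VPN concentrator.
VPN_EGRESS_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example range (RFC 5737)
    ipaddress.ip_network("198.51.100.0/24"),  # example range (RFC 5737)
]

def connected_via_vpn(client_ip: str) -> bool:
    """Return True if the client's source IP falls inside a VPN egress range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in VPN_EGRESS_NETWORKS)

# A gateway could then refuse access to company systems for off-VPN clients:
# if not connected_via_vpn(request_ip): deny_access()
```

A check like this is only one layer; it confirms where traffic comes from, not who is behind the keyboard, which is why it should sit alongside authentication and training.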

The final step requires taking a second look at your data, how that data is processed, and sector-specific regulations such as the Gramm-Leach-Bliley Act in the financial sector. During this review it is important to identify new risks posed by remote workers. One way to achieve this is to assign or hire a Chief Information Officer (CIO) to coordinate these efforts and stay abreast of the latest trends and developments.

Another aspect of cybersecurity and privacy that must be evaluated, and implemented wherever possible, is Privacy Enhancing Technology (PET). These technologies (there are five main types) allow for greater use of data while removing identifiable information and resisting attempts to reconstruct personal information by combining an anonymized data set with a second data set that “decodes” the first, such as Census data or voter registration databases.
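As a concrete illustration of one simple PET, the sketch below pseudonymizes a direct identifier with a keyed hash (HMAC) so records can still be joined internally without exposing names. The field names and secret key are invented for the example. Note that quasi-identifiers such as ZIP code can still enable the linkage attacks described above, which is why stronger techniques (k-anonymity, differential privacy) exist.

```python
import hashlib
import hmac

SECRET_KEY = b"example-key-store-in-a-vault"  # assumed key, for illustration only

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "zip": "06901", "diagnosis": "flu"}

# Drop the direct identifier; keep a stable token so records can still be joined.
safe_record = {
    "person_token": pseudonymize(record["name"]),
    "zip": record["zip"],        # quasi-identifier: still a re-identification risk
    "diagnosis": record["diagnosis"],
}
```

Because the same input always yields the same token, analysts can still count and link records, but anyone without the key cannot recover the name.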

P&D attorneys can assist with all of these recommendations with a cost-effective and pragmatic approach. Our attorneys routinely handle the most challenging privacy and cybersecurity issues and are ready and eager to help your company during these uncertain times.

When It Rains, It Pours: The Psychology that Makes Us More Vulnerable During a Crisis

I received the following email alert from a cybersecurity client of mine:

“6x increase in cyber attacks over the last 4 weeks.”

“Information about COVID-19 should only come from a legitimate source. Don’t trust unsolicited emails or open unknown links.”

“Really?” I thought to myself. “We’re on lock-down, stressed about family and friends, not to mention business and jobs, and I’m getting cybersecurity alerts?” Frankly, I usually ignore them even when I’m not distracted, but who has time for this now?

However, the more I thought about it, the more I realized that’s exactly what cybercriminals are thinking too and why people need to stay alert and resist the temptation to click on those compelling links.

The truth is, despite the fancy hardware and software solutions available, most cybersecurity breaches occur due to human error or phishing attacks. Unless you have relatively sophisticated automated solutions, the people IN your organization may represent your greatest internal threat.

While companies see high risks from external threat actors, such as unsophisticated hackers (59%), cyber criminals (57%), and social engineers (44%), the greatest danger, cited by 9 out of 10 firms, lies with untrained general (non-IT) staff. In addition, more than half see data sharing with partners and vendors as their main IT vulnerability. Nonetheless, less than a fifth of firms have made significant progress in training staff and partners on cybersecurity awareness (ESI ThoughtLab/WSJ Pro Cybersecurity, 2018).

And this was before COVID hit us between the eyes. Let’s take a quick look at the psychology at play that makes us even more vulnerable during a crisis.
The Neuroscience of Crisis

As humans, we are prewired for crisis. 

Whether you think of this brain system as the “reptilian brain,” attributed to Paul MacLean and his Triune Theory (Sagan, 1977), or the fight-flight reaction of the sympathetic nervous system (System 1) which is our immediate, emotional reaction (Kahneman, 2011), it is clear that our brain protects us in times of danger. 

This system, which is buried deep in the interior of the human brain, is both evolutionarily older and more immediate than simple cognitive thought; it is pre-cognitive. When the danger is ambiguous, System 2 thinking (which, in contrast with System 1 is slower, more deliberative and more logical) is nice; go through your options, take your time, don’t rush. 

But when there is a perception of crisis, the need to ACT is immediate. 

The fight-flight response makes us want to DO something, and now! From an evolutionary point of view, in times of danger, those who acted first were often safer than those who took their time.

The COVID-19 pandemic is, of course, a crisis. 

Have you noticed how much more tired you are these days, even though you aren’t leaving the house? It’s because crisis mode requires more energy. During a crisis, the thoughtful, reflective parts of our brain shut down. In other circumstances, we might hover over a suspicious link while we process whether it seems risky or not.

But that requires fully functional frontal lobes, or executive functioning, which need time and undivided attention to work properly. In crisis mode, frontal lobe functioning is significantly diminished, or may go offline altogether, in favor of a quick (albeit less considered) action or reaction. 

To make matters worse, cybercriminals know this: They know what emotional buttons to push to make you afraid (just click the link) or try to help (just click the link), or maybe even register your opinion (just click the link). 

But if you do click that unfamiliar or disguised link, you may have just let criminals into your personal computer and, by default, into your company’s IT system. 

Wait, consider, relax. Let System 2 kick in before you commit yourself, your computer, and your company to whatever those “black hat” cybercriminals have in mind.

Motivation During a Crisis

After the fear comes a desire to help. 

This is one of the ways that cybercriminals trick well-meaning people. Whether it’s a donation, or a message of support, or some other activity to help, we are again motivated in ways that leave us open to online criminal behavior. 

McClelland’s Social Motive Theory suggests there are three primary social motives: Achievement, Affiliation, and Power (McClelland, 1987). 

We all have the capacity for all three, and genetics and socialization as well as cognitive choice determine which motive wins the day in a given situation. In times of individual crisis, needs for achievement (e.g., successful social distancing) or needs for power (e.g., controlling the situation) may come to the fore. 

But in a social crisis, many of us are “hard-wired” to help, triggering a need for affiliation. 

That desire to help may cause people to act impulsively in what they believe is a pro-social, affiliative manner. Just click the link to make your donation, just click the link to show your support, and on and on, the cybercriminals never stop trying. Like the very best advertisers, they are clever about pushing your emotional (non-cognitive, pre-cognitive) buttons to get you to act in ways that benefit them.

I am assuming everyone reading this has the best of motives. Those very motives make you susceptible to the manipulation of cybercriminals. 

If your current impulse is to put this away, turn to something else, then you have experienced exactly what cybercriminals are counting on. 

Information fatigue, too much bad news, or just a desire to put some positive energy back out into the world, may all leave you vulnerable. 

Don’t click suspicious links, or even links that look well-meaning, without doing some simple checks and reviews first. 

  • Hover over a link and check whether the URL matches the sender the email purports to be from. 
  • Don’t provide any information, on any social media, whether at work or elsewhere, that can be used against you. 
  • Hackers are clever and unscrupulous, so check and double-check links that look suspicious in any way. 
  • Do a bit of research before you agree to anything, and certainly before sending money or private information.
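The first of those checks can even be partially automated. The sketch below is a minimal illustration, with no real mail-client integration and hypothetical addresses, of comparing a link's host against the domain of the sender's address:

```python
from urllib.parse import urlparse

def domains_match(sender_email: str, link_url: str) -> bool:
    """Flag links whose host does not belong to the sender's domain."""
    sender_domain = sender_email.rsplit("@", 1)[-1].lower()
    link_host = (urlparse(link_url).hostname or "").lower()
    return link_host == sender_domain or link_host.endswith("." + sender_domain)

# A mismatch is a classic phishing tell:
domains_match("alerts@mybank.com", "https://www.mybank.com/login")    # matches
domains_match("alerts@mybank.com", "http://mybank.com.evil.io/login") # does not
```

This naive suffix comparison is only a first filter; real tooling consults public-suffix lists and display-name spoofing checks, and a "From" address can itself be forged, so human skepticism remains the last line of defense.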
What’s Your Story?

Narrative is the final pillar in this little tripartite approach to cybercrime. I have come to believe that personality is a story we tell ourselves (and the world) about ourselves (Bruner, 1986). 

This story comprises our identity; it is who we think we are, and these beliefs about who we are often dictate how we behave in the world and how we process information. 

For example, as a psychologist (not to mention a human being), I think of myself as a helpful person. I try to be kind and considerate. I don’t like to walk past beggars without giving them something. (Yes, yes, I know that would cost me points on the WAIS IQ test, but there you go: despite my cognition telling me this could be a trick, or that he or she will just buy cigarettes and beer, I often give in anyway.) 

Cybercriminals will use these ideal images we have of ourselves to manipulate our thoughts, emotions, and purse-strings. 

  • I am good, so I give to the sick and needy. 
  • I love children, so I’ll give to those orphaned by COVID. 
  • I support healthy behaviors, so I’ll do almost anything to protect my health. 
  • I’m a good parent, so I will click the link that shows me 10 ways to protect my family from infection. 

Your personal narrative is the core of your personal identity. We sometimes value it more than life itself (think of martyrs). 

If a clever cybercriminal hacks your social media, understands what makes you “tick,” that information can be used against you in a cybercrime.

The threats are real and so are the psychological levers cybercriminals pull to manipulate your fear. 

We are all overwhelmed, trying our best to hang in there, and help each other where we can. Don’t let your best intentions, and fatigue, allow you to be manipulated to behave unsafely online. COVID is real, and so is cybercrime. We must be alert to both.

Written by: Dr. Mark Sirkin, CEO at Sirkin Advisors


Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.

ESI ThoughtLab/WSJ Pro Cybersecurity (2018). The cybersecurity imperative: Managing cyber risks in a world of rapid digital change. New York: Author.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

McClelland, D. (1987). Human motivation. Cambridge: Cambridge University Press.

Sagan, C. (1977). The dragons of Eden. New York: Random House.


Connecticut’s New Insurance Data Security Law: The Costs and Benefits of Compliance

An important section of the recent budget bill adopted by the state of Connecticut demonstrates that regulatory fever has become contagious, at least as far as data security is concerned. Section 230 of the bill sets forth a comprehensive set of cybersecurity regulations for the state’s insurers, requiring them to comport with guidelines modeled after those developed by New York State’s Department of Financial Services (DFS).1 Connecticut insurers will now have to develop a “comprehensive written information security program,” evaluate the efficacy of that program “not less than annually,” and periodically aver to the state’s Insurance Commissioner that the law’s provisions are being followed.2 In addition, the law requires that insurers establish strict cybersecurity regulations for third parties and develop “incident response plan[s]” to recover in the wake of a cyberattack.3

The data security law also establishes a comprehensive enforcement regime to investigate and punish noncompliance. Under the provisions of Section 230, the state’s Insurance Commissioner has a broad investigative power to verify compliance with the new regulations.4 Furthermore, the Commissioner retains the power to punish recalcitrant insurers by revoking business licenses and issuing fines of up to fifty thousand dollars (provided that the offending firms have not shown themselves to be exempt in an evidentiary hearing).5 The law does contain some exceptions, however. For a one-year period between 2020 and 2021, insurers with fewer than twenty employees will be exempt from the law’s requirements, and from 2021 on insurers with fewer than ten employees will be exempt.6 Moreover, those firms already compliant with the requirements set forth in the Health Insurance Portability and Accountability Act of 1996 (a federal statute)7 are exempted from the Connecticut law if they can certify their compliance to state regulators.8 Nevertheless, compliance figures to be costly for Connecticut insurers.

As discussed on this blog previously, however, the cost of a cyberattack can often far outstrip the cost of compliance with cybersecurity regulations. This goes double for insurance companies, especially because such firms often possess “high-value consumer information, such as sensitive personal information, health information and payment card information.”9 Thanks to the creation of cybersecurity insurance, insurers are often left holding the bill in the wake of a devastating cyberattack elsewhere. Because they have presumably processed numerous such claims, they should know better than anyone else the true cost of a data breach. The aid of knowledgeable legal professionals and a healthy dose of common sense are all that stand in the way of cost-saving compliance with Connecticut’s new cybersecurity regulations.


  3. Ibid
  4. Ibid
  5. Ibid
  6. Ibid
  7. Better known as HIPAA

Data-Centric Security Strategies and Regulatory Compliance

In the wake of a recent spate of cybersecurity breaches, the practice of data-centric security has received renewed attention from business leaders concerned about the integrity of critical data. As defined by a PKWare white paper, data-centric security focuses on protecting data itself, rather than the systems that contain it.1 Central to the concept of data-centric security is the notion that the systems established to store and guard data sometimes crumble in the face of cyberattacks.1 Given that all manner of data storage systems have shown themselves to be vulnerable, it is hard to argue with this foundational principle. Rather than offering prescriptions for the improvement of systems, then, data-centric security places safeguards around the data itself – safeguards which are automatically applied and regularly monitored to ensure data security.1

Data-centric security strategies have several key advantages over the “network-centric” models currently employed by many firms.2 As discussed, data-centric strategies account for the proclivity of security networks to succumb to cyberattacks by securing the data itself. In addition, because security measures are built into data, “security travels with the data while it’s at rest, in use, and in transit,” a characteristic of data-centric strategies that facilitates secure data sharing and allows firms to move data from system to system without having to account for inevitable variations in security infrastructure.3 Moreover, data-centric security allows for easy access to data (a cornerstone of productivity in any firm) without compromising data security. In fact, Format-Preserving Encryption (FPE) – the specific type of encryption employed by many data-centric strategies4 – “maintains data usability in its protected form,” striking a balance between security and accessibility.5 Clearly, data-centric strategies provide stronger, more all-encompassing, and eminently manageable modes of data protection.
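To make the FPE idea concrete, the sketch below is a toy illustration of format-preserving encryption for even-length digit strings, built as a small Feistel network keyed with HMAC-SHA256. It is an assumption-laden teaching sketch, not the NIST-approved FF1 algorithm a production system would use; the point is only that the ciphertext keeps the plaintext's length and all-digit format, which is what "maintains data usability in its protected form" means in practice.

```python
import hashlib
import hmac

def _round_value(key: bytes, half: str, rnd: int, width: int) -> int:
    """Derive a pseudorandom number of the given digit width from one half."""
    digest = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).hexdigest()
    return int(digest, 16) % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    w = len(digits) // 2                      # even-length digit string assumed
    l, r = digits[:w], digits[w:]
    for rnd in range(rounds):
        f = _round_value(key, r, rnd, w)
        l, r = r, f"{(int(l) + f) % 10 ** w:0{w}d}"
    return l + r

def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    w = len(digits) // 2
    l, r = digits[:w], digits[w:]
    for rnd in reversed(range(rounds)):       # undo the rounds in reverse order
        f = _round_value(key, l, rnd, w)
        l, r = f"{(int(r) - f) % 10 ** w:0{w}d}", l
    return l + r

key = b"demo-key"
token = fpe_encrypt(key, "4111111111111111")
# token is another 16-digit string; decryption with the same key restores the original.
```

Because the protected value is still a 16-digit string, downstream systems that validate card-number formats keep working, which is the accessibility-versus-security balance the white paper describes.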

But perhaps the most important aspect of data-centric security is its essential role in any security regime compliant with New York State cybersecurity regulations. In fact, as the data security company Vera has noted, “the new rules are focused not just on protecting information systems but on securing, auditing and the disposition of data itself.”6 New York’s determination to advance data-centric security is evident in certain provisions of the recent cybersecurity regulation, the most important of which mandate that companies “restrict access privileges not only to systems but to the data itself.”6 Moreover, New York State’s cybersecurity regulations reflect the priorities of data-centric security because they require firms to “implement an audit trail system to reconstruct transactions and log access privileges,” a system which allows the security of individual pieces of data to be monitored automatically.6 New York regulators have already recognized the benefits of data-centric security strategies. Now, with the assistance of legal experts well-versed in cybersecurity compliance, companies concerned about their data security can too.



Cybersecurity Compliance Could Have Saved Capital One Millions

A recent cybersecurity breach involving one of the country’s largest financial services firms illustrates both the necessity of strong cybersecurity regulations and the imperative for credit card holders to jealously safeguard their personal information. In a criminal complaint filed July 29, 2019, in the U.S. District Court for the Western District of Washington, the federal government alleged that Paige A. Thompson, a computer engineer, had taken advantage of a gap in Capital One’s cloud security to obtain the personal financial records of millions of the company’s customers in the U.S. and abroad.1 Thompson, who used the online alias “erratic,” allegedly exploited a defect in Capital One’s firewall to access confidential financial information stored on the servers of the Cloud Computing Company, a Capital One service provider.1 Despite Capital One’s claim that “no credit card account numbers or log-in credentials were compromised and less than one percent of Social Security numbers were compromised,” the episode is a reminder that without robust cybersecurity measures and a broad-based commitment to personal data security, information stored with American financial institutions remains vulnerable to cyberattack.2 In fact, had Thompson been more careful to remain anonymous,3 the data breach could well have become catastrophic.

First, the data breach demonstrates the value of robust cybersecurity regulations. For example, if Capital One’s cybersecurity measures had met the stringent standards of the regulations issued by New York State’s Department of Financial Services, which are now being enforced by the state’s new Cybersecurity Division, this problem might have been avoided. The DFS has committed itself to ensuring that “encryption and other robust security control measures” characterize the cybersecurity policies of the state’s financial services firms.5 Had Capital One encrypted or tokenized6 all of the data subject to the recent breach, it is possible that the effects of the cyberattack would have been less widespread. In fact, the criminal complaint against Thompson notes that “although some of the information” targeted by the cyberattack “has been tokenized or encrypted, other information[…]regarding their credit history has not been tokenized,” allowing “tens of millions” of credit card applications to be compromised.1 Of course, the cybersecurity regulations adopted by New York State are burdensome. But the alternative is even worse – especially considering that Capital One will “incur between $100 million and $150 million in costs related to the hack, including customer notifications, credit monitoring, tech costs and legal support,” a price tag that doubtless outstrips the costs of regulatory compliance.3

Pastore & Dailey is a leading firm in the drafting and implementation of procedures necessary to comply with federal and state securities and banking cybersecurity regulations and laws, which in this case could have saved Capital One millions if properly followed.

Second, the cyberattack bears out the importance of diligence in safeguarding financial information. According to Forbes, individuals worried about the security of their financial information can take a host of precautions: “[updating] passwords,” avoiding the use of e-mail accounts to share confidential information, “[establishing] two-factor authentication,” and so on.7 Cyberattacks like the one that recently struck Capital One have become a fact of life for many Americans who bank online, but they need not be costly. Common-sense precautions and security diligence can go a long way towards ensuring the integrity of your financial records.

New DFS Cybersecurity Division

Perhaps as a signal of its commitment to fight cybercrime and stringently enforce its cybersecurity regulations, New York State recently established a “cybersecurity division”1 within the state’s Department of Financial Services (DFS). The creation of the division marks yet another step taken by New York State to guard against the dangers posed by cyberattacks, perhaps motivated by its status as the home of many prominent financial services firms. In addition, the presence of the division strongly suggests that the cybersecurity regulation2 issued by DFS in Spring 2017 cannot be taken lightly by the state’s largest and most important financial services firms. Aside from the comprehensive nature of the regulation and the sizable power afforded to the new cybersecurity division, the novelty of New York’s recent innovations in cybersecurity regulation suggests their importance and staying power. In fact, as JDSupra notes, the creation of the new division more or less completed a years-long process that has made “New York[…]the only state in the country that has a banking and insurance regulator exclusively designated to protect consumers and companies from the ever-increasing risk of cyber threats.”1

Some financial services firms, conscious of their vulnerability to cyberattacks, will doubtless welcome these additional steps. As a report from the Identity Theft Resource Center notes, financial services firms “are reportedly hit by security incidents a staggering 300 times more frequently than businesses in other industries.”3 Far from being mere annoyances, these cyberattacks are often extremely costly. In fact, according to a study from IBM and the Ponemon Institute, the cost to a financial services firm per record lost in a cyberattack was more than $100 greater than the cost to the average company.4 Moreover, cyberattacks can also cripple consumer confidence in financial services firms, causing them to lose business and endure even greater costs.5 In general, then, cyberattacks can damage both a financial services firm’s sensitive records and its public image, making them a grave threat to any such company’s bottom line.

It would be a mistake, however, to think about DFS regulation purely in terms of cost reduction. Regulation also entails costs – not least because compliance with the 2017 regulation can be investigated and punished by DFS’ new cybersecurity division. In fact, these new developments indicate that cybersecurity will not come cheaply, especially because the regulation imposes a bevy of new security requirements on top firms, costing them a not insignificant amount of time and money. From multi-factor authentication to training programs to the appointment of a “Chief Information Security Officer,” the now fully enforceable regulation will force financial services firms to foot the bill for a host of cybersecurity measures.6

  3., pg. 3
  4. IBM and the Ponemon Institute, The Cost of a Data Breach (2017), summarized in, pg. 6
  5., pg. 8
  6., pg. 5

Technology Regulation in the Federal Securities Market

Pastore & Dailey LLC has an extensive RegTech practice, and Jack Hewitt, a P&D Partner, is one of the country’s authorities in this area.  In line with this, P&D is pleased to announce that Bloomberg BNA has just published Mr. Hewitt’s new treatise, Technology Regulation in the Federal Securities Markets.

The treatise is structured into three major segments – cybersecurity, new market technologies and blockchain. The cybersecurity segment provides a comprehensive review of all applicable federal and state regulations and guidelines, while the market technology segment addresses, among other topics, the cloud, robo-advisers and smart contracts. The final segment, blockchain, covers cryptocurrency, tokens and ICOs. Mr. Hewitt, whose expertise extends to virtually all major business sectors, regularly reviews client cybersecurity and technology procedures and would be pleased to discuss performing such a review for your firm.

Please use the link below to view the Table of Contents and the chapters on Information Security Programs and ICOs of the new treatise.

Regulators Expect More with Vendor Risk Management

Banks and financial services firms continue to grapple with regulators’ growing demands to better manage cyber risks created by third-party vendors, but they should not focus solely on compliance, according to a panel of former regulators and cyber experts. Viewing third-party risk within a wider risk management framework would lead to greater security maturity, agreed the banking, legal, and cyber experts, who participated in a December 2017 webinar hosted by the Independent Community Bankers of America (ICBA) and CyberFortis, a cybersecurity solution service provider for the financial sector.

The panel included a former state banking commissioner, a former regulator who helped create the recently-implemented New York DFS cyber regulations, and a nationally known legal expert who works on cybersecurity cases, including those involving the Securities and Exchange Commission (SEC).

Although headlines tend to be dominated by cyberattacks on large banks or financial firms, organizations of any size are at risk, said panelist David Cotney, former Massachusetts Banking Commissioner and advisor to CyberFortis. He said he hears daily reports of hackers attacking the defenses of banks both large and small, including looking for easy entry points such as third parties connected to banks’ systems.

These known vulnerabilities have forced federal regulators, including the Office of the Comptroller of the Currency (OCC), to require financial institutions to have a strong vendor risk management system. This includes establishing risk tolerance, ongoing monitoring, and independent reviews. There is also an expectation that boards will be actively involved throughout the vendor risk management process.

Cotney issued a warning about compliance versus security, noting that some regulators have expressed private concerns that too many bankers think simply meeting the baseline expectation under the FFIEC’s Cybersecurity Assessment Tool (CAT) is sufficient. “Threats evolve and a bank’s environment is not static. They are changing their products and services, they are hiring and terminating employees, and their networks and IT environments are also undergoing changes and updates,” said Cotney. “Instead of thinking of the CAT as a ‘check the box annual exercise,’ use it to reexamine your inherent risk profile and maturity level prior to introducing new products, services, or initiatives, which includes new third-party connections or mergers and acquisitions,” he suggested.

To secure the assets of both a bank and its customers, it is necessary to move from a baseline approach (compliance-driven) to a higher maturity level approach (enterprise risk), which Cotney said is something regulators are specifically looking for in a bank’s security program.


The panel also addressed how the New York Department of Financial Services (NY DFS) regulation can be viewed as a bellwether for how all regulators are viewing cybersecurity risks and more specifically, third-party cybersecurity issues. “Must we be our brother’s keeper?” asked Alexander Sand, now an associate at Eversheds Sutherland and a former NY DFS regulator who helped create the new requirements. “To the extent that third parties are touching your network and holding your data, then yes,” he answered.

Because regulators want to see that financial institutions are making their decisions based on risk, having a risk assessment performed is crucial and will help with meeting strict NY DFS deadlines. There are both internal and external risk assessments involved, said Sand. “Internally, what are the risks to the bank’s ability to operate if a significant operational vendor goes down, and externally, what are the risks of third party security practices?” Although the Third-Party Service Provider Security Policy deadline is March 1, 2019, Sand said NY DFS expects this to be a large undertaking that organizations will need to begin addressing immediately.

The requirements are robust and include written policies and procedures based on the risk assessment that address:

  • Identifying and assessing the risks of third-party service providers
  • Setting out minimum cyber practices banks require
  • Establishing due diligence processes
  • Performing periodic assessments of the risks of third-party service providers
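The four requirements above amount to a living vendor risk register rather than a one-time checklist. The sketch below is a minimal, hypothetical data model, with invented field names and an assumed review cycle, that encodes the periodic-assessment requirement in code:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Vendor:
    name: str
    risk_tier: str            # e.g. "high", "medium", "low"
    due_diligence_done: bool
    last_assessment: date

# Hypothetical policy: high-risk vendors reassessed yearly, others every two years.
REVIEW_CYCLE = {
    "high": timedelta(days=365),
    "medium": timedelta(days=730),
    "low": timedelta(days=730),
}

def needs_reassessment(v: Vendor, today: date) -> bool:
    """Flag vendors whose due diligence or periodic risk assessment is overdue."""
    return (not v.due_diligence_done
            or today - v.last_assessment > REVIEW_CYCLE[v.risk_tier])

vendors = [
    Vendor("CloudCo", "high", True, date(2018, 1, 10)),
    Vendor("ShredIt", "low", True, date(2018, 11, 1)),
]
overdue = [v.name for v in vendors if needs_reassessment(v, date(2019, 3, 1))]
```

Even a simple register like this gives examiners the evidence trail they look for: which vendors exist, how risky each one is, and when it was last reviewed.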

While this regulation does not require specific controls to be put in place for all vendors, Sand said it does emphasize certain controls that NY DFS wants organizations to consider, such as multi-factor authentication, encryption of data in transit and at rest, vendor breach notifications, and confirmation of vendor’s cybersecurity practices. “Give yourself plenty of time to deal with this third-party issue,” Sand concluded. “At the end of the day what will be better for your customers and your bank is to be proactive and thoughtful so that you’re meeting your organizations’ specific risks rather than pulling something off the shelf.”


“Banks will say to me, ‘I have 120 vendors. How can I get my arms around this?’” noted Jack Hewitt, Partner at Pastore & Dailey LLC. He recommended that banks identify all vendors and prioritize by importance, then look at the auditor reports and reports of breaches. But even before tackling that, Hewitt said it’s first necessary to create, or update, your vendor management policy, adding that harmonization is essential. “I recommend you blend together procedures based on the applicable regs from the relevant authorities such as the NY DFS, OCC, SEC, and FINRA. Your policy statement should provide vendors with appropriate guidance to ensure the bank’s security.”

An organization’s vendor risk management program should be matched by those of its vendors. This ensures that any connected systems are taking the same security measures that you are, helping mitigate risk and shoring up inherent vulnerabilities. The vendor management policy and its purpose should be communicated to both staff and vendors so that all involved parties are on the same page, said Hewitt, who also echoed Sand’s thoughts regarding the importance of a risk assessment.

This analysis will identify and provide insight into what elements of risk exist, which often include threats stemming from existing vendors. Hewitt outlined a series of specific steps banks should take, including recommendations on how to craft robust contracts, what detailed procedures vendors should be required to have, the management oversight and continuous monitoring practices every vendor program should include, and what types of records should be maintained. “Many banks are beginning to use new technologies such as robo-advisors, artificial intelligence, and blockchain, which all involve third-party vendors,” said Hewitt. “Before you begin to engage actively with a vendor in these areas, complete your assessment ahead of time, have management controls in place, and be able to analyze on a continuous basis or it could compound problems in the event you do have an intrusion.”

The panelists concluded with a caution that regulators recognize the burdens on financial institutions, but will take action when they deem that an organization has actively chosen not to comply with regulations or improve its security posture. They also agreed that while banks’ actions are often driven by compliance, achieving more mature security and the resilience it generates will require banks to look beyond checking the box.

Partner John Hewitt to Co-Host Webinar

  • Do you have an effective Vendor Management Policy?
  • Do you have an effective Vendor Due Diligence Questionnaire?
  • How frequently do you receive vendor compliance audits?
  • Do you have an internal Cyber Management Structure?
  • Do you have a CISO?  Who does s/he report to?

All financial institutions rely on third-party service providers. This introduces cybersecurity risks through connected systems and insider threats. Regulators are placing heightened scrutiny on third-party risk management. As the FDIC says, “a bank can outsource a task, but it cannot outsource the responsibility.”

On Thursday, December 12, 2017, P&D Partner John “Jack” Hewitt will co-host a webinar on cybersecurity.  Join our panel of highly-experienced financial and security experts to explore:

  • Cyber risks created by third parties
  • What regulators expect from community banks
  • How to confront cyber risks within your vendor risk management strategy

The webinar, entitled “Third-Party Cyber Risk: From Compliance to Enterprise Risk Management” will focus on cyber risks connected with financial institutions’ reliance on third-party service providers and the relevant regulatory requirements.

This webinar will address the increasing cyber risks involved in banks’ use of third-party vendors. It will assess some of the most recent vendor breaches, including the Scottrade Bank breach.  This will include a detailed review of vendor management policies that provide guidance to ensure the security of a firm’s network when it is used by vendors.  It will address all the applicable risk elements, including compliance risk, strategic risk, operational risk, and others.

Discussions will include due diligence in vendor selection, including the development of an effective DDQ and the review of vendors’ cybersecurity programs, their use of subcontractors, and their insurance coverage.  The discussion will review the analysis of all outsourced processes, procedures, and practices relevant to the bank’s business that should be monitored on a regular basis.  This encompasses all system resources that are owned, operated, maintained, and controlled by vendors, as well as all other system resources, both internal and external, that interact with these systems.

The panel will review vendor contract provisions that address: internal vendor controls, vendor audits, receipt of copies of all vendor compliance audits, confidentiality and security procedures, encryption of PII, regulatory compliance, cyber-insurance coverage, business continuity planning, subcontracting, incident reporting, non-disclosure agreements, data storage, document retention and delivery, breach notification responsibilities, vendor employee access limitations, vendor obligations upon contract termination, and an exit strategy.

Included in this will be a discussion of the NYS DFS Cybersecurity Regulation, including the DFS Third-Party Service Provider Requirement.

For additional information and to register click here.

SEC Discusses New Cyber Unit to Combat Cyber-Related Misconduct

On October 26, 2017, Stephanie Avakian, Co-Director of the SEC’s Division of Enforcement gave a speech regarding Enforcement’s initiatives, in particular, regarding cybersecurity.

Ms. Avakian identified cybersecurity as one of the SEC’s “key priorities,” necessitating a strategic focus and allocation of resources in order to fulfill the SEC’s “investor protection mission.”[1]  In order to effectuate these initiatives, the SEC created a Cyber Unit to combat cyber-related misconduct.[2]  According to Ms. Avakian, the increasing frequency and increasing complexity of these matters fueled the creation of the Cyber Unit.

The SEC identified three types of cases that have caught Enforcement’s interest:

  • Hacking to access material, nonpublic information in order to trade in advance of some announcement or event, or to manipulate the market for a particular security or group of securities;
  • Account intrusions in order to conduct manipulative trading using hacked brokerage accounts; and
  • Disseminating false information through electronic publication, such as SEC EDGAR filings and social media, in order to manipulate stock prices.[3]

Specifically addressing the second area of Enforcement’s interest, Ms. Avakian identified specific SEC rules—Regulations S-P, S-ID, and SCI, among others—which are risk-based and, notably, flexible, and which apply to failures by registered entities to take the necessary precautions to safeguard information.  These situations often involve coordination with OCIE, where the SEC will consult with OCIE at the outset in order to determine which entity is better suited to lead an investigation.

Interestingly, in efforts to combat the third area of Enforcement’s interest, the SEC has not yet brought a case.  Despite identifying the importance of the disclosure requirements, Ms. Avakian stated that “[w]e recognize this is a complex area subject to significant judgment, and we are not looking to second-guess reasonable, good faith disclosure decisions, though we can certainly envision a case where enforcement action would be appropriate”—seemingly indicating that, of the three areas of interest, cyber-fraud in disclosures and the like may be of the least importance in Enforcement’s new cyber-initiatives.

The Cyber Unit will also spearhead blockchain technology investigations, as the emerging issues in this area necessitate a “consistent, thoughtful approach.”  Although initial coin offerings and token sales may be a new and legitimate platform for raising capital, these virtual currencies and offerings may also serve as “an attractive vehicle for fraudulent conduct.”[4]

Prior to the creation of the Cyber Unit, many cyber-related investigations were led by the Market Abuse Unit, as there is significant overlap between insider trading schemes and cyber-related schemes.  The risk that cyber-related incidents pose, however, is too great and, according to the SEC, warrants its own investigative unit.

[1] Stephanie Avakian, The SEC Enforcement Division’s Initiatives Regarding Retail Investor Protection and Cybersecurity, U.S. Securities and Exchange Commission (Oct. 26, 2017),

[2] Press Release 2017-176, SEC Announces Enforcement Initiatives to Combat Cyber-Based Threats and Protect Retail Investors (Sept. 25, 2017), available at

[3] Avakian, supra note 1.

[4] Avakian, supra note 1.