Are Your Website Terms of Service & Privacy Policies Enforceable? Turns out not as much as you might think.

In late January, the Northern District of California (Chen, J.) rendered a decision that may have a broad impact on the enforceability of website Terms of Service across the Internet and the Metaverse.  Does this sound the death knell for “informed” click-through consent to either terms of use or data privacy policies?

It very well may, and a close review of your Terms of Service language, and of whether users give enforceable informed consent, is required to avoid the pitfalls that Meta Platforms, Inc. (“Meta”) brought upon itself.

Meta brought a suit against Bright Data Ltd. (“Bright Data”) for breach of its Terms of Use.  Bright Data is an Israeli company that gathers information by “scraping” the web for Fortune 500 companies and other clients.  On cross-motions for summary judgment testing the enforceability of Meta’s Terms of Service, Judge Chen of the N.D. of Cal. found in favor of Bright Data.

Judge Chen found that the Terms of Service did not apply to web-scraping activities conducted by Bright Data when Bright Data was not logged in.  Bright Data had Facebook and Instagram accounts, but it was not bound by the Terms of Service unless it was using its Meta accounts to conduct web-scraping, and Meta produced no such evidence.  Judge Chen did find that provisions of the Terms of Service that survive termination, such as choice of law and jurisdiction, applied to non-logged-in users, but otherwise sounded the death knell for Meta’s Terms of Service in their current form.

Judge Chen was skeptical about the placement and availability of the Terms of Service on Facebook and Instagram as well.  Although only dicta, these comments may spur new litigation on the enforceability of consent.  Informed consent is a hot-button topic.  This decision, while still appealable, needs to be carefully considered and dissected to determine its effect on every company’s website policy and the user’s manifestation of consent.  For convenience, this decision is attached.

Pastore, LLC stands ready to help companies review and adapt to the latest theories and developments in website governance and privacy policies.  Website Terms of Service and Privacy Policies need to be an ever-evolving collection of guidelines and control mechanisms, and Pastore, LLC, a leading creative innovator, can assist you in a top-to-bottom review of your forward-facing website.

ESG Data Assurance Requirements: 10 Steps to Prepare for the Legal Implications

    Research shows a substantial percentage of companies are not prepared for the environmental, social and governance (ESG) data assurance requirements. Only 25% of companies feel they have the ESG policies, skills and systems in place to be ready for independent ESG data assurance. This is despite the fact that two-thirds of companies must disclose such data or will soon be expected to do so on a mandatory basis.

    One of the core challenges for companies planning for ESG assurance is a need for more internal skills and experience. Learn how these requirements will impact corporate and financial services companies. Plus, uncover the proactive steps your company can take to prepare for the legal implications of these requirements.

    Impact on Corporate and Financial Services Companies

     

    Handled correctly, the ESG data assurance requirements create the following opportunities for corporate and financial services companies, in addition to challenges:

    Opportunities

    • Reduced risk and compliance costs: Proactive data quality management can help avert costly fines associated with regulatory non-compliance.
    • Competitive advantage: Companies prioritizing data assurance can distinguish themselves in the marketplace as trustworthy and reliable partners.
    • Improved decision-making: Trusted data results in better-informed decisions at all organizational levels, from product development and customer service to risk management and compliance.
    • Enhanced trust and credibility: Strong data assurance processes can build trust with your customers and investors by committing to transparency and data integrity.

     

    Challenges

    • Evolving regulatory landscape: Keeping up with the ever-changing regulatory landscape, especially in areas like ESG reporting, can be exhaustive for your internal resources.
    • Increased costs and complexity: Implementing and maintaining effective data assurance programs requires an investment in technology, personnel and processes, which can be a financial and administrative burden on your company.
    • Lack of talent and expertise: Finding and retaining skilled professionals with data governance and assurance expertise can take time and effort. A talent gap can have significant consequences for your company, resulting in operational challenges, inaccurate data, and increased costs and inefficiencies.

     

    You can gain a competitive edge by preparing and leveraging the potential benefits. Conversely, the implications of non-compliance can be significant and multifaceted, from regulatory fines and penalties to negative brand perception.

    Key Steps to Prepare

    Here are some proactive steps you can take to prepare for the ESG data assurance requirements:

     

    1. Stay informed: Monitor emerging standards for ESG data assurance, including the proposed International Standard on Sustainability Assurance (ISSA) 5000 and legislative developments. Acquaint yourself with relevant regulations in your jurisdiction and industry.

    2. Conduct a risk assessment: Find areas where your ESG data collection, management and reporting practices might be vulnerable to legal risks because of possible inaccuracies.

    3. Develop robust internal controls: Establish strong data governance policies and internal controls to confirm data accuracy and consistency within your company.

    4. Invest in data management systems: Upgrade your technology and data infrastructure to support effective and trustworthy data collection, retrieval and storage.

    5. Examine disclosure obligations: Recognize your legal responsibilities for ESG data disclosure, both mandatory and voluntary, under stock exchange listing requirements and relevant regulations.

    6. Establish ESG reporting policies: Create thorough policies for ESG data collection, verification, aggregation and reporting. Ensure they follow recognized standards and best practices.

    7. Provide training: Offer training for employees engaged in ESG data collection, management and reporting to ensure compliance with internal policies and legal requirements.

    8. Consider independent assurance: Evaluate the need for independent third-party assurance of your ESG data to enhance stakeholder confidence and mitigate legal risks. Select reputable assurance providers who adhere to relevant standards and ethical codes.

    9. Conduct due diligence with suppliers and partners: Assess the ESG practices of your suppliers and partners to ensure alignment with your commitments and avoid reputational risks.

    10. Partner with legal experts: Consult with legal professionals specializing in ESG and sustainability to ensure compliance with relevant laws and regulations and navigate potential legal risks associated with your ESG data disclosures. For legal inquiries, please contact us at Pastore LLC.

     

    By taking these proactive steps, you can begin to prepare for the evolving ESG data assurance requirements. The legal landscape is dynamic, so staying updated and adapting your strategies is crucial.

     

    This article is intended for informational purposes and does not constitute legal advice.

     

    (Julie D. Blake, JD, LLM, CIPP, CIPM, is an experienced commercial litigator and data privacy expert with expertise in cybersecurity, data privacy breaches, risk assessment and data privacy policy review.)

    Personal Financial Data Rights Rule: Strategies for Financial Institutions

    Financial institutions are vulnerable to the complex and dynamic regulatory landscape. Forty-two percent of organizations cite regulatory issues and compliance changes over the next two to five years as a top challenge. Financial institutions must be adaptable and remain informed on the latest industry regulations to operate effectively.

    An example is the new Personal Financial Data Rights rule (PFDR) the Consumer Financial Protection Bureau (CFPB) proposed on Oct. 19, 2023. The proposed rule is the first rulemaking to implement Section 1033 of the Consumer Financial Protection Act, which charged the CFPB with implementing personal financial data sharing standards and protections. The CFPB expects to cover additional products and services in future rulemaking.

    Currently in its notice-and-comment period, which will end on Dec. 29, 2023, the proposed rule would require depository and nondepository entities to:

    • Make some data regarding consumer transactions and accounts available to consumers and authorized third parties.
    • Establish obligations for third parties accessing a consumer’s data, including important privacy protections.
    • Provide basic standards for data access.
    • Promote fair, open and inclusive industry standards.

    The requirements would be implemented in phases, with larger providers being subject to them much sooner than smaller ones. Community banks and credit unions with no digital interface with their customers would be exempt from the rule’s requirements.

    If approved, this will profoundly change how financial institutions handle consumers’ financial data and present compliance challenges. Financial institutions failing to comply with the proposed PFDR rule could face legal ramifications such as civil penalties, cease-and-desist orders, reputational damage and consumer and data breach lawsuits. Specific legal implications will depend on the nature of the violation, consumer damage and relevant laws and regulations in effect at the time.

    Although the PFDR is still in the proposal phase and subject to change, it’s key for financial institutions to take steps to minimize risks.

    Here are some strategies to consider in preparation:

    Focus on Compliance

    To increase compliance, carefully review the PFDR rule and its requirements. Be sure to examine crucial areas such as data access rights, data use restrictions, data security standards and covered data. Review your current procedures and practices to determine which ones may not comply. Then develop a thorough implementation plan defining the actions to achieve compliance. This includes timelines, communication strategies and resource allocation.

    Take a Proactive Approach to Data Management

    Thoroughly evaluate any third-party service providers and vendors who access your customer data to ensure they comply with the PFDR rule’s data security and privacy requirements. In addition, clarify data access rights in user agreements and contracts with those parties. To limit third parties’ use and disclosure of data, apply contractual provisions.

    Additionally, boost your data security by applying robust cybersecurity actions. This will protect your customer data from unauthorized misuse and breaches. In a breach, be prepared with a well-defined incident response plan.

    Build Consumer Trust

    It’s imperative to communicate with your customers about what the rule is and what their data rights are, along with providing educational materials and other resources. To make certain your customers understand and approve how their data will be used and shared, provide detailed consent procedures.

    Restrict authorized third-party data usage by creating firm policies and verifying that the data will only be used for authorized purposes and not shared or sold without consent. Finally, employ effective processes for responding to customer complaints and inquiries concerning security and data access.

    Seek Legal Counsel

    Consulting with legal counsel with expertise in the financial services industry will help you navigate the PFDR rule complexities and ensure compliance. The specific legal approach will depend on your financial institution’s unique circumstances.

    Skilled legal counsel can address your concerns and increase compliance by:

    • Keeping you informed on developing regulations and providing guidance through existing changes to data procedures.
    • Providing guidance on how to comply with the rule while evaluating consumer privacy and data security concerns.
    • Addressing potential legal issues swiftly and effectively to mitigate risks.
    • Handling litigation risks and guarding against potential lawsuits.

    In summary, although the PFDR rule is still in its final development stages and it’s feasible that regulations may evolve, prepare by staying informed and adapting your strategies accordingly.

    By investing in legal counsel early on, you can leverage the expertise of professionals to mitigate risks, prevent costly mistakes and take advantage of the opportunities presented by this new regulatory landscape. For legal inquiries, please contact us at Pastore LLC.

    This article is intended for informational purposes and does not constitute legal advice.

    (Julie D. Blake, JD, LLM, CIPP, CIPM, is an experienced commercial litigator and data privacy expert with expertise in cybersecurity, data privacy breaches, risk assessment and data privacy policy review.)

    Preparing for the Impending AI Regulations: A Legal View

    Due to artificial intelligence’s (AI) significant impact on business operations, companies must stay informed on evolving data privacy and transparency regulations. Recent research shows a steady increase in global AI adoption, with 35% of companies incorporating AI into their operations and another 42% considering it. Furthermore, 44% of organizations strive to integrate AI into their existing applications and processes.

    Discover how to start preparing for forthcoming AI regulations that will govern the ethical use of this technology. This will help avoid problems like legal issues, fines, damaged reputation and loss of customer trust.

    On Oct. 30, 2023, the White House issued an executive order to manage AI risks and expanded on the voluntary AI Risk Management Framework released in January 2023. The directive aims to ensure the safe, responsible and fair development and use of AI. Federal authorities will evaluate AI-related threats and provide guidelines for businesses in specific industries according to the following timeline:

    • Within 150 days of the date of the order: A public report will be issued on best practices for financial institutions to manage AI-specific cybersecurity risks.
    • Within 180 days of the date of the order: The AI Risk Management Framework, NIST AI 100-1, along with other appropriate security guidance, will be integrated into pertinent safety and security guidelines for use by critical infrastructure owners and operators.
    • Within 240 days of the completion of the guidelines: The Federal Government will develop and take steps to mandate such guidelines, or appropriate portions of them, through regulatory or other appropriate action, and agencies will consider whether to mandate guidance through regulatory action within their areas of authority and responsibility.

    The Office of Management and Budget (OMB) released a new draft policy on Nov. 3, 2023. The policy is seeking feedback on the use of AI in government agencies. This guidance establishes rules for AI in government agencies. It also promotes responsible AI development and improves transparency. Additionally, it safeguards federal employees and manages the risks associated with AI use by the government.

    Here are some approaches to consider when planning for the impending AI regulations:

    Stay Well Informed  

    Constantly monitor the development of AI regulations at the local, national and international levels. Examine which regulations directly impact your company’s use of AI. Consult with legal counsel specializing in AI and technology law to thoroughly understand how it will affect your company. Also, become acquainted with core legal principles rooted in AI regulations.

    Conduct a Risk Assessment

    A risk assessment is crucial for compliance and reducing legal liability, especially with emerging AI regulations. Begin by analyzing your AI systems for possible violations of existing laws and regulations, including consumer protection, anti-discrimination and data privacy.

    Since AI systems gather and process large quantities of personal data, data protection and privacy are concerns. Companies should assess whether their AI systems comply with applicable data protection laws, such as the California Consumer Privacy Act (CCPA).

    Regarding anti-discrimination, companies should assess whether their AI systems are unbiased and initiate measures to mitigate any probable biases. Finally, create plans for any uncovered legal risks.

    Create a Powerful Infrastructure

    Determine whether existing procedures and policies sufficiently tackle AI development, deployment and usage. Make certain the right contractual agreements are in place with technology vendors, data providers and other stakeholders.

    In compliance with pertinent data privacy regulations, create strong data governance procedures for collecting, storing and using personal data. Regularly monitor and audit AI systems to detect legal compliance issues. Lastly, develop a thorough plan for responding to potential legal events such as data breaches.

    Partner with Legal Experts

    A team of legal experts specializing in AI can help ensure that legal considerations are incorporated throughout the development and deployment process. Companies can lower their legal risk by partnering with an external legal counsel specializing in corporate AI and other technology areas, including cybersecurity.

    In conclusion, addressing the legal aspects of AI improves compliance, and builds trust and confidence with stakeholders. Is your company legally protected in the AI-driven arena? For legal inquiries, please contact us at Pastore LLC.

    This article is intended for informational purposes and does not constitute legal advice.

    (Joseph M. Pastore III is chairman of Pastore, and focuses his practice on the financial services and technology industries, representing major multinational companies in state and federal courts, as well as before self-regulatory organizations such as FINRA, and government agencies such as the SEC.)

    (Julie D. Blake, JD, LLM, CIPP, CIPM, is an experienced commercial litigator and data privacy expert with expertise in cybersecurity, data privacy breaches, risk assessment and data privacy policy review.)

    Beyond Privacy Consent: How ‘Delete Act’ Changes Game for Companies

    Companies provide data privacy consent to consumers as part of a “safe harbor” practice, but time may be running out.

    After all, the common ritual of privacy consent is flawed.

    Let’s say a consumer goes online and wants access to some information on your company’s website. Up pops a window with a privacy consent form that needs a signature. The convoluted language seemingly goes on forever, but clicking a box for approval makes it all go away.

    Voilà!

    Now, the consumer can review their long sought-after information by checking a box. But let’s stop right there.

    Private data, which is more valuable than oil these days, is a lot like medication. We don’t let people take medicine without prescriptions because we know they can’t possibly understand all the particulars of medical terminology and decide for themselves.

    In other words, we are putting privacy consent into the hands of people who don’t understand it. Meanwhile, consumers are granting access to companies with legacy systems that may not have the ability to categorize the inventory—let alone identify it—even though the surging volume may rival the Library of Congress.

    The court of public opinion is catching on. In a recent poll from Pew Research Center, a majority of Americans are concerned about their privacy in the hands of companies:

    • 81% of US adults are concerned about how companies use the data collected about them.
    • 67% of US adults have little to no understanding of how companies use the data they collect about them.
    • 72% of Americans say there should be more regulation than there is now.

    Well, the people may get what they want, so companies should begin protecting their assets now. Remember, the rest of the Bill of Rights doesn’t count if you don’t have privacy. If you can’t say what you want to someone without it becoming public, then that is really a violation of your First Amendment rights. Everything flows from privacy, even though it is not written in the US Constitution.

    So why is the status quo changing for companies when it comes to privacy consent? One word: California.

    The Golden State’s Long Legislative Arm

    California Governor Gavin Newsom recently signed the Delete Act (Senate Bill 362) into law, which gives consumers the ability to have companies delete their personal information with a single request.

    The new law requires “data brokers”—companies that sell or rent the personal data that they collect from customers—to register with the newly created California Privacy Protection Agency (CPPA) public registry and disclose the information they collect from consumers, as well as ongoing opt-out requests.

    The Delete Act also charges CPPA to create a website and database where state residents can opt out from tracking and request data removal from a set process.

    From a consumer perspective, the new law creates a sea change in California. Currently, there isn’t a uniform approach for consumers to request data removal from a data broker. And once it happens, private information can resurface due to the nature of ongoing data collection.

    From a corporate perspective, the new law has a long reach. If California were its own country, it would have the fifth-largest economy in the world. In other words, it carries sway. In addition to data privacy, California has a long track record of influencing legislative issues involving labor, the environment and marijuana just to name a few.

    Since the California Consumer Privacy Act (CCPA) was signed into law in 2018, another ten states have enacted comprehensive data privacy laws. Bloomberg Law reports that at least 16 states introduced privacy bills that include protections for health and biomedical identifiers in the 2022-2023 legislative cycle.

    Of course, different states with different laws could motivate Congress to streamline data privacy on a national scale. Most likely, certain differences will be settled in a court of law, which is why an ounce of prevention now will be worth a pound of data.

    A Golden Opportunity for Companies

    The CPPA may have until January 1, 2026, to create a database that will allow quick data deletion, but companies should act now to get out in front of the new norm for doing business.

    While the government can step in and create a national system to safeguard data privacy, it would be best for companies to take the lead and show consumers how it can be done while protecting Corporate America’s most valuable assets.

    In the dawn of the new age of data privacy, companies need to go beyond providing data privacy consent. Instead, corporations need to set up their own internal systems, privacy by design, that document where the data is being stored, how it is used and who has access to it.

    Most importantly, companies need to conduct internal reviews of their data inventory to make sure what they are using as privacy protection is actually providing protection. This is where the potential legal problem arises. If a company believes it is complying with the law when in fact it is not, and management is unaware, the company will be accountable and pay the price, which could be steep.

    Moving forward, think about personal information like a book in a library. When someone needs it, it will need to be checked out and checked back in. If someone wants to know my birthdate, there should be a record of who, why and when.
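    The “library card” idea above can be sketched as a minimal access-log record. This is an illustration only; the class and field names are hypothetical and not drawn from any statute or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch of a "check-out" record for personal data: every access captures
# who accessed the item, why, and when. Names here are hypothetical.
@dataclass
class AccessRecord:
    data_item: str      # e.g. "birthdate"
    accessed_by: str    # who
    purpose: str        # why
    accessed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # when

audit_log: list[AccessRecord] = []

def check_out(item: str, user: str, purpose: str) -> AccessRecord:
    """Record an access to a piece of personal data before releasing it."""
    record = AccessRecord(item, user, purpose)
    audit_log.append(record)
    return record

check_out("birthdate", "marketing-analyst", "age-based campaign eligibility")
print(len(audit_log))  # 1
```

    In practice such a log would live in an append-only store, but even this shape answers the who/why/when questions a regulator or plaintiff’s attorney will ask.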

    Companies should work with a legal team with data-privacy experience that could conduct a privacy analysis of their existing processes and inventory. The outcome should be a report that identifies areas of exposure—possible causes of action—from the mindset of a plaintiff’s attorney, as well as recommendations to proactively address any looming surprises.

    As the notion of privacy is reimagined in a digital world, providing data privacy consent forms will no longer be enough to protect a company’s balance sheet.

    (Julie D. Blake, JD, LLM, CIPP, CIPM, is an experienced commercial litigator and data privacy expert with expertise in cybersecurity, data privacy breaches, risk assessment and data privacy policy review.)

    Connecticut’s Data Privacy Breach Notification Law Gets a Facelift

    As of October 1, 2021, the Amendments (“Amendments”) to Connecticut’s Data Privacy Breach Notification Act (“Act”) are in effect.  P.A. No. 21-59.  The Amendments:

    Expand the definition of “personal information;”
    Create extraterritorial jurisdiction;
    Remove the safe harbor provision while conducting an investigation;
    Lower the notification period from ninety to sixty days;
    Further detail notification methods and procedures; and
    Create safe harbors for those in compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH).

    The new definition of personal information will require businesses to examine the types of data they store and how they store them.  The expanded definition of personal information now includes taxpayer identification numbers, IRS-issued identity protection personal identification numbers, passport numbers, military identification numbers or any other commonly issued government identification numbers, in conjunction with the first name or initial and last name of the individual.

    Businesses storing COVID-19 vaccination records will also need to consider the new definition because it expands coverage to medical information.  The expanded definition now includes medical information regarding an individual’s medical history, conditions (mental or physical), treatment or diagnosis; identifiers used by health insurance companies; and biometric data, in conjunction with the first name or initial and last name of the individual.

    The Amendments also define a breach of security as including the disclosure of a username and password combination, including an e-mail address or a security question and answer, that would provide online access to an account.  In the event of a breach of login credentials, the Amendments require that (1) the notice inform the person whose information was breached to promptly change their credentials on other websites using the same credentials and (2) the notice not be sent to an e-mail account that was part of the breach.

    The Amendments remove the limitations that required that: (1) persons subject to the Act must conduct business in Connecticut and (2) that the information subject to the Act be maintained in the ordinary course of business.  Theoretically, any business that stores personal information of a Connecticut resident is now subject to the Act.

    The Amendments remove the safe harbor provision allowing for an investigation after discovery of the breach before notification.  Companies now have only sixty days from discovery of the breach of security to notify Connecticut residents.  The Act also now includes an ongoing notification duty to Connecticut residents.

    Notification of affected persons may be avoided if, after an “appropriate investigation,” the person covered by the Act determines that no harm will befall the individual whose personal information was either acquired or accessed.  However, the sixty-day notification period still applies, and any “appropriate investigation” would need to be completed before the duty to notify is triggered.  Furthermore, the Act no longer requires that the information be both acquired and accessed.  Mere acquisition is enough, as is a brief intrusion into unencrypted protected personal information stored on a secured network.
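    The sixty-day window is mechanical enough to build into an incident-response checklist. A minimal sketch (the function name and dates are hypothetical; this does not account for any tolling or regulator-specific deadlines):

```python
from datetime import date, timedelta

# Sixty-day outer deadline under the amended Connecticut statute, counted
# from discovery of the breach. Illustration only, not legal advice.
NOTIFICATION_WINDOW_DAYS = 60

def notification_deadline(discovery_date: date) -> date:
    """Return the last day to notify affected Connecticut residents."""
    return discovery_date + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Example: a breach discovered October 15, 2021 must be noticed by this date.
print(notification_deadline(date(2021, 10, 15)))  # 2021-12-14
```

    Because any “appropriate investigation” must finish inside the same window, an incident-response plan should track this date from the moment of discovery, not from the end of the investigation.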

    Finally, the Amendments create a safe harbor for persons in compliance with HIPAA and HITECH privacy and security standards so long as notice to the attorney general is provided.  Materials and information provided to the attorney general are exempt from public disclosure except when provided by the attorney general to third parties for the purpose of furthering an investigation.

    The amended Data Privacy Breach Notification Act is much more onerous to comply with, and best practices include having a breach notification plan that can be used at a moment’s notice, creating an inventory of personal information stored by the entity and encrypting all personal data.  Encrypting personal information remains the best way to comply with the Act, but the risk of non-compliance can be high, since non-compliance is considered a Connecticut Unfair Trade Practices violation, which can result in compensatory and punitive damages as well as attorney’s fees.

    Data-Centric Security Strategies and Regulatory Compliance

    In the wake of a recent spate of cybersecurity breaches, the practice of data-centric security has received renewed attention from business leaders concerned about the integrity of critical data. As defined by a PKWare white paper, data-centric security focuses on protecting data itself, rather than the systems that contain it.1 Central to the concept of data-centric security is the notion that the systems established to store and guard data sometimes crumble in the face of cyberattacks.1 Given that all manner of data storage systems have shown themselves to be vulnerable, it is hard to argue with this foundational principle. Rather than offering prescriptions for the improvement of systems, then, data-centric security places safeguards around the data itself – safeguards which are automatically applied and regularly monitored to ensure data security.1

    Data-centric security strategies have several key advantages over the “network-centric” models currently employed by many firms.2 As discussed, data-centric strategies account for the proclivity of security networks to succumb to cyberattacks by securing the data itself. In addition, because security measures are built into data, “security travels with the data while it’s at rest, in use, and in transit,” a characteristic of data-centric strategies that facilitates secure data sharing and allows firms to move data from system to system without having to account for inevitable variations in security infrastructure.3 Moreover, data-centric security allows for easy access to data (a cornerstone of productivity in any firm) without compromising data security. In fact, Format-Preserving Encryption (FPE) – the specific type of encryption employed by many data-centric strategies4 – “maintains data usability in its protected form,” striking a balance between security and accessibility.5 Clearly, data-centric strategies provide stronger, more all-encompassing, and eminently manageable modes of data protection.
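    The idea that “security travels with the data” can be illustrated with a toy tokenizer that replaces a sensitive value with a same-length stand-in, so downstream systems keep working without ever seeing the real value. To be clear, this sketch is not a vetted format-preserving cipher such as NIST’s FF1; the key and field names are hypothetical, and real deployments should use an audited FPE or tokenization product.

```python
import hmac
import hashlib

# Toy HMAC-based tokenizer illustrating data-centric protection: the value
# itself is replaced, rather than relying on the surrounding system's
# defenses. NOT production cryptography; the key below is a placeholder.
SECRET_KEY = b"demo-key-rotate-in-practice"

def tokenize(account_number: str) -> str:
    """Replace an account number with a deterministic, same-length digit
    token. One-way (tokenization), not reversible encryption."""
    digest = hmac.new(SECRET_KEY, account_number.encode(), hashlib.sha256).hexdigest()
    # Render the digest as decimal digits and trim/pad to the original length
    # so the token "preserves the format" of the input.
    digits = str(int(digest, 16))
    return digits[:len(account_number)].rjust(len(account_number), "0")

token = tokenize("4111111111111111")
print(token, len(token))
```

    Because the token keeps the original length and character class, systems that validate or route on format continue to function, which is the accessibility-without-exposure trade-off the FPE discussion above describes.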

    But perhaps the most important aspect of data-centric security is its essential role in any security regime compliant with New York State cybersecurity regulations. In fact, as the data security company Vera has noted, “the new rules are focused not just on protecting information systems but on securing, auditing and the disposition of data itself.”6 New York’s determination to advance data-centric security is evident in certain provisions of the recent cybersecurity regulation, the most important of which mandate that companies “restrict access privileges not only to systems but to the data itself.”6 Moreover, New York State’s cybersecurity regulations reflect the priorities of data-centric security because they require firms to “implement an audit trail system to reconstruct transactions and log access privileges,” a system which allows the security of individual pieces of data to be monitored automatically.6 New York regulators have already recognized the benefits of data-centric security strategies. Now, with the assistance of legal experts well-versed in cybersecurity compliance, companies concerned about their data security can too.

    ____________________________________________________________________________________

    1. https://pkware.cachefly.net/webdocs/pkware_pdfs/us_pdfs/white_papers/WP_Data_Centric_Security_Blueprint.pdf
    2. https://www.symantec.com/blogs/expert-perspectives/data-centric-security-changing-landscape
    3. https://www.comforte.com/fileadmin/Collateral/comforte_FS_tokenization_vs_FPE_WEB.pdf?hsCtaTracking=8a3a11b3-5ba3-4e1a-a41f-78bb92d22458%7C358952c5-4dff-4793-bbeb-8835361c3b14
    4. https://www.1stmarkets.de/en/blog/blog-article-3
    5. https://www.techpowerusa.com/wp-content/uploads/2018/03/MicroFocus.Techpower-Big-Data-eBook-2018-9434.pdf
    6. https://www.vera.com/wp-content/uploads/2018/02/Veras-Guide-to-the-NY-DFS-Regulations.pdf

    Cybersecurity Compliance Could Have Saved Capital One Millions

    A recent cybersecurity breach involving one of the country’s largest financial services firms illustrates both the necessity of strong cybersecurity regulations and the imperative for credit card holders to jealously safeguard their personal information. In a criminal complaint filed July 29, 2019, in the U.S. District Court for the Western District of Washington, the federal government alleged that Paige A. Thompson, a computer engineer, had taken advantage of a gap in Capital One’s cloud security to obtain the personal financial records of millions of the company’s customers in the U.S. and abroad.1 Thompson, who used the online alias “erratic,” allegedly exploited a defect in Capital One’s firewall to access confidential financial information stored on the servers of the Cloud Computing Company, a Capital One service provider.1 Despite Capital One’s claim that “no credit card account numbers or log-in credentials were compromised and less than one percent of Social Security numbers were compromised,” the episode is a reminder that without robust cybersecurity measures and a broad-based commitment to personal data security, information stored with American financial institutions remains vulnerable to cyberattack.2 In fact, had Thompson been more careful to remain anonymous,3 the data breach could well have become catastrophic.

    First, the data breach demonstrates the value of robust cybersecurity regulations. For example, if Capital One’s cybersecurity measures had met the stringent standards of the regulations issued by New York State’s Department of Financial Services, which are now being enforced by the state’s new Cybersecurity Division, this problem might have been avoided. The DFS has committed itself to ensuring that “encryption and other robust security control measures” characterize the cybersecurity policies of the state’s financial services firms.5 Had Capital One encrypted or tokenized6 all of the data subject to the recent breach, the effects of the cyberattack might have been far less widespread. In fact, the criminal complaint against Thompson notes that “although some of the information” targeted by the cyberattack “has been tokenized or encrypted, other information[…]regarding their credit history has not been tokenized,” allowing “tens of millions” of credit card applications to be compromised.1 Of course, the cybersecurity regulations adopted by New York State are burdensome. But the alternative is even worse – especially considering that Capital One will “incur between $100 million and $150 million in costs related to the hack, including customer notifications, credit monitoring, tech costs and legal support,” a price tag that doubtless outstrips the costs of regulatory compliance.3
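The distinction the complaint draws between tokenized and un-tokenized data can be sketched in a few lines of Python. In a vault-based tokenization scheme, a sensitive value is replaced by a random token that bears no mathematical relationship to the original; a breach of the application database therefore exposes only meaningless tokens. The class and method names below are hypothetical, invented for this illustration.

```python
import secrets

class TokenVault:
    """Toy vault-based tokenization: sensitive values live only in the vault."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        # Replace the sensitive value with a random, meaningless token.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can recover the original.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("123-45-6789")  # the SSN is stored only in the vault
# The application database holds only `token`; stealing it reveals nothing.
assert vault.detokenize(token) == "123-45-6789"
```

Had the credit-history fields in the breached applications been protected this way, an attacker copying the database would have obtained tokens rather than usable personal data.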

    Pastore & Dailey is a leading firm in the drafting and implementation of procedures necessary to comply with federal and state securities and banking cybersecurity regulations and laws; compliance of this kind, properly followed, could have saved Capital One millions in this case.

    Second, the cyberattack bears out the importance of diligence in safeguarding financial information. According to Forbes, individuals worried about the security of their financial information can take a host of precautions: “[updating] passwords,” avoiding the use of e-mail accounts to share confidential information, “[establishing] two-factor authentication,” and so on.7 Cyberattacks like the one that recently struck Capital One have become a fact of life for many Americans who bank online, but they need not be costly. Common-sense precautions and security diligence can go a long way towards ensuring the integrity of your financial records.

    New DFS Cybersecurity Division

    Perhaps as a signal of its commitment to fight cybercrime and stringently enforce its cybersecurity regulations, New York State recently established a “cybersecurity division”1 within the state’s Department of Financial Services (DFS). The creation of the division marks yet another step taken by New York State to guard against the dangers posed by cyberattacks, perhaps motivated by its status as the home of many prominent financial services firms. In addition, the presence of the division strongly suggests that the cybersecurity regulation2 issued by DFS in Spring 2017 cannot be taken lightly by the state’s largest and most important financial services firms. Aside from the comprehensive nature of the regulation and the sizable power afforded to the new cybersecurity division, the novelty of New York’s recent innovations in cybersecurity regulation suggests their importance and staying power. In fact, as JDSupra notes, the creation of the new division more or less completed a years-long process that has made “New York[…]the only state in the country that has a banking and insurance regulator exclusively designated to protect consumers and companies from the ever-increasing risk of cyber threats.”1

    Some financial services firms, conscious of their vulnerability to cyberattacks, will doubtless welcome these additional steps. As a report from the Identity Theft Resource Center notes, financial services firms “are reportedly hit by security incidents a staggering 300 times more frequently than businesses in other industries.”3 Far from being mere annoyances, these cyberattacks are often extremely costly. In fact, according to a study from IBM and the Ponemon Institute, the cost to a financial services firm per record lost in a cyberattack was more than $100 greater than the cost to the average company.4 Moreover, cyberattacks can also cripple consumer confidence in financial services firms, causing them to lose business and endure even greater costs.5 In general, then, cyberattacks can damage both a financial services firm’s sensitive records and its public image, making them a grave threat to any such company’s bottom line.

    It would be a mistake, however, to think about DFS regulation purely in terms of cost reduction. Regulation also entails costs – not least because noncompliance with the 2017 regulation can be investigated and punished by DFS’s new cybersecurity division. In fact, these new developments indicate that cybersecurity will not come cheaply, especially because the regulation imposes a bevy of new security requirements on top firms, costing them a not insignificant amount of time and money. From multi-factor authentication to training programs to the appointment of a “Chief Information Security Officer,” the now fully enforceable regulation will force financial services firms to foot the bill for a host of cybersecurity measures.6

    1. https://www.jdsupra.com/legalnews/new-york-creates-cybersecurity-division-20881/
    2. https://www.dfs.ny.gov/docs/legal/regulations/adoptions/dfsrf500txt.pdf
    3. https://www.idtheftcenter.org/wp-content/uploads/2019/02/ITRC_Generali_The-Impact-of-Cybersecurity-Incidents-on-Financial-Institutions-2018.pdf, pg. 3
    4. IBM and the Ponemon Institute, The Cost of a Data Breach (2017), summarized in https://www.idtheftcenter.org/wp-content/uploads/2019/02/ITRC_Generali_The-Impact-of-Cybersecurity-Incidents-on-Financial-Institutions-2018.pdf, pg. 6
    5. https://www.idtheftcenter.org/wp-content/uploads/2019/02/ITRC_Generali_The-Impact-of-Cybersecurity-Incidents-on-Financial-Institutions-2018.pdf, pg. 8
    6. https://www.dfs.ny.gov/docs/legal/regulations/adoptions/dfsrf500txt.pdf, pg. 5

    SEC Discusses New Cyber Unit to Combat Cyber-Related Misconduct

    On October 26, 2017, Stephanie Avakian, Co-Director of the SEC’s Division of Enforcement, gave a speech regarding Enforcement’s initiatives, in particular those concerning cybersecurity.

    Ms. Avakian identified cybersecurity as one of the SEC’s “key priorities,” necessitating a strategic focus and allocation of resources in order to fulfill the SEC’s “investor protection mission.”[1]  To effectuate these initiatives, the SEC created a Cyber Unit to combat cyber-related misconduct.[2]  According to Ms. Avakian, the increasing frequency and complexity of these matters fueled the creation of the Cyber Unit.

    The SEC identified three types of cases that have caught Enforcement’s interest:

    Hacking to access material, nonpublic information in order to trade in advance of some announcement or event, or to manipulate the market for a particular security or group of securities;

    Account intrusions in order to conduct manipulative trading using hacked brokerage accounts; and

    Disseminating false information through electronic publication, such as SEC EDGAR filings and social media, in order to manipulate stock prices.[3]

    Specifically addressing the second area of Enforcement’s interest, Ms. Avakian identified specific SEC rules—Regulations S-P, S-ID, and SCI, among others—which are risk-based and, notably, flexible, and which apply to failures by registered entities to take the necessary precautions to safeguard information.  These situations often involve coordination with OCIE: the SEC will consult with OCIE at the outset to determine which entity is better suited to lead an investigation.

    Interestingly, the SEC has not yet brought a case in the third area of Enforcement’s interest.  Despite identifying the importance of the disclosure requirements, Ms. Avakian stated that “[w]e recognize this is a complex area subject to significant judgment, and we are not looking to second-guess reasonable, good faith disclosure decisions, though we can certainly envision a case where enforcement action would be appropriate”—seemingly indicating that, of the three areas of interest, cyber-fraud in disclosures and the like may be the least important in Enforcement’s new cyber-initiatives.

    The Cyber Unit will also spearhead blockchain technology investigations, as the emerging issues in this area necessitate a “consistent, thoughtful approach.”  Although Initial Coin Offerings and Token Sales may be a new and legitimate platform for raising capital, these virtual currencies and offerings may also serve as “an attractive vehicle for fraudulent conduct.”[4]

    Prior to the creation of the Cyber Unit, many cyber-related investigations were led by the Market Abuse Unit, given the significant overlap between insider trading schemes and cyber-related schemes.  The risk that cyber-related incidents pose, however, is too great and, according to the SEC, warrants its own investigative unit.

    [1] Stephanie Avakian, The SEC Enforcement Division’s Initiatives Regarding Retail Investor Protection and Cybersecurity, U.S. Securities and Exchange Commission (Oct. 26, 2017), https://www.sec.gov/news/speech/speech-avakian-2017-10-26#_edn2.

    [2] Press Release 2017-176, SEC Announces Enforcement Initiatives to Combat Cyber-Based Threats and Protect Retail Investors (Sept. 25, 2017), available at https://www.sec.gov/news/press-release/2017-176.

    [3] Avakian, supra note 1.

    [4] Avakian, supra note 1.