Given artificial intelligence’s (AI) significant impact on business operations, companies must stay informed about evolving data privacy and transparency regulations. Recent research shows a steady increase in global AI adoption: 35% of companies have incorporated AI into their operations, and another 42% are considering it. Furthermore, 44% of organizations are working to integrate AI into their existing applications and processes.
Preparing now for forthcoming AI regulations governing the ethical use of this technology can help companies avoid legal liability, fines, reputational damage and loss of customer trust.
On Oct. 30, 2023, the White House issued an executive order to manage AI risks and expanded on the voluntary AI Risk Management Framework released in January 2023. The directive aims to ensure the safe, responsible and fair development and use of AI. Federal authorities will evaluate AI-related threats and provide guidelines for businesses in specific industries according to the following timeline:
- Within 150 days of the date of the order: A public report will be issued on best practices for financial institutions to manage AI-specific cybersecurity risks.
- Within 180 days of the date of the order: The AI Risk Management Framework, NIST AI 100-1, along with other appropriate security guidance, will be integrated into pertinent safety and security guidelines for use by critical infrastructure owners and operators.
- Within 240 days of the completion of the guidelines: Federal agencies with relevant regulatory authority will consider whether to mandate the guidelines, or appropriate portions thereof, through regulatory or other appropriate action.
On Nov. 3, 2023, the Office of Management and Budget (OMB) released a new draft policy seeking public feedback on the use of AI by government agencies. The guidance establishes rules for agency use of AI, promotes responsible AI development, improves transparency, safeguards federal employees and manages the risks associated with the government’s use of AI.
Here are some approaches to consider when planning for the impending AI regulations:
Stay Well Informed
Continually monitor the development of AI regulations at the local, national and international levels, and examine which regulations directly impact your company’s use of AI. Consult with legal counsel specializing in AI and technology law to thoroughly understand how these regulations will affect your company. Also, become acquainted with the core legal principles underlying AI regulation.
Conduct a Risk Assessment
A risk assessment is crucial for compliance and reducing legal liability, especially with emerging AI regulations. Begin by analyzing your AI systems for possible violations of existing laws and regulations, including consumer protection, anti-discrimination and data privacy.
Because AI systems gather and process large quantities of personal data, data protection and privacy are key concerns. Companies should assess whether their AI systems comply with applicable data protection laws, such as the California Consumer Privacy Act (CCPA).
Regarding anti-discrimination, companies should assess whether their AI systems are unbiased and implement measures to mitigate any potential biases. Finally, create plans to address any uncovered legal risks.
Create a Robust Infrastructure
Determine whether existing procedures and policies sufficiently tackle AI development, deployment and usage. Make certain the right contractual agreements are in place with technology vendors, data providers and other stakeholders.
In compliance with pertinent data privacy regulations, create strong data governance procedures for collecting, storing and using personal data. Regularly monitor and audit AI systems to detect legal compliance issues. Lastly, develop a thorough plan for responding to potential legal events such as data breaches.
Partner with Legal Experts
A team of legal experts specializing in AI can help ensure that legal considerations are incorporated throughout the development and deployment process. Companies can lower their legal risk by partnering with an external legal counsel specializing in corporate AI and other technology areas, including cybersecurity.
In conclusion, addressing the legal aspects of AI improves compliance and builds trust and confidence with stakeholders. Is your company legally protected in the AI-driven arena? For legal inquiries, please contact us at Pastore LLC.
This article is intended for informational purposes and does not constitute legal advice.
(Joseph M. Pastore III is chairman of Pastore, and focuses his practice on the financial services and technology industries, representing major multinational companies in state and federal courts, as well as before self-regulatory organizations such as FINRA, and government agencies such as the SEC.)