Key tips on AI for startups

Tech Transactions and Data Privacy Attorney Aaron Ogunro shares advice for healthcare startups

With the rapid increase in interest and advancements in artificial intelligence (AI) over the last year, many companies are considering whether and how they should adopt AI, implement it into their businesses and build new AI solutions. We sat down with Aaron Ogunro, tech transactions and data privacy attorney at Polsinelli, to discuss what startups should consider regarding AI.

1. What should healthcare startups consider when developing an AI solution?

Many companies are now using AI both internally and externally, whether that's for employment decisions or other areas such as marketing. Most commonly, companies are embedding it into their own products and solutions, whether those are provided to individuals or to other companies. In both scenarios, there are four main areas companies should focus on when it comes to the use of AI.

The first is understanding any assessments that you have to do and making sure those assessments are aligned with regulatory and contractual requirements you might have with customers. Make sure you take into account privacy and security issues, explainability issues and biases that may arise from your use of AI technology, and document all of this on a regular basis.

Second is confidentiality. You don't want to be inputting proprietary information or any type of personal or medical record data into AI solutions. As we know, many individuals put information into generative AI tools on a daily basis, and you want to make sure that information is not disclosed in unintended ways to third parties.

Third is transparency. A lot of laws require you to provide upfront notice to individuals that you’re using AI in your technology and your products, so it’s very important to be truthful about what you’re doing with AI.

Lastly, human oversight is very important. You can't always rely on the output that AI provides, so a human should regularly review what your generative AI is producing to make sure it's accurate and appropriate for your customers.

2. What regulatory frameworks currently exist around artificial intelligence, and in what direction do you see regulation heading in the future?

A great myth right now is that there are no regulations, that it's the "Wild West" of AI. That's not really the case. In the U.S., the Federal Trade Commission has been very bold, saying it will address unfair and deceptive practices involving AI through the FTC Act.

It's important for startups to understand that while there may not be a U.S. law that imposes requirements directly on AI, there are still state and federal frameworks that can govern the use of AI.

The FTC Act is the main one, so you have to make sure that you're acting in a fair and transparent manner when using AI. There are also a lot of state privacy laws in place that address related issues such as automated decision-making or profiling, which means using automated technologies to reach a decision about an individual based on personal or other data inputs.

You may also be subject to state AI laws. Right now, Colorado, California, Virginia and Connecticut have laws you need to be mindful of regarding AI, and many other state laws have been enacted that will come into effect over the next one to three years.

The White House has also put out a Blueprint for an AI Bill of Rights, which is primarily focused on transparent use of AI and on any discrimination or biases that may come from it. And that's just in the U.S. If you have an international market, you also have to keep in mind the EU AI Act, which is going to come into effect in a couple of years, as well as similar laws that may emerge in other international jurisdictions.

3. Can startups be held responsible if something goes wrong?

Yes, it's definitely possible for startups, or a company of any size, to be held responsible for their use of AI, whether that's through contractual measures, regulatory measures or even direct lawsuits brought by individuals. From an individual perspective, there are a lot of city- and state-level laws that address the internal use of AI for promotion and hiring decisions, including requirements to provide notice and, in some cases, opt-outs.

You also have contractual obligations. If you're selling your AI tool to another company, keep in mind that they're providing confidential information to these tools, especially if that information is going to be aggregated across all customers or combined with other public information.

4. AI solutions may evolve over time as they consume data and refine their algorithms. How often should startups reevaluate their solutions to ensure compliance?

There's no set interval at which startups should reevaluate their solutions, but I would say every 6–12 months is an appropriate cadence for reviewing and assessing your AI models internally and externally.

On top of that timeframe, you should reassess your AI technology whenever you release a new version, push an update or shift how you're using AI, so that it stays current with new laws. Every version, update or internal change should trigger a fresh assessment: revisit the explainability factors behind your use of AI and the biases that may or may not be inherent in that technology. Then make sure your contracts and notices are aligned with how you're actually using it.

5. What other advice would you give to startups developing AI solutions?

The most important advice I can give to startups building AI solutions is to have an internal policy in place, whether they're using AI internally or embedding it in their products and solutions. That policy should cover the dos and don'ts of how to use AI technology.

For example, make sure you don't input confidential information or important source code into such technology. Do make sure that you're following your internal policies, making ethical decisions and maintaining some form of human oversight over every decision that's made.

This should all be put in writing so that your personnel and employees know the policy exists. I wouldn't be surprised if, in the future, your customers also require you to show them your AI usage policy. On top of that, making sure there's training associated with that policy will be important.

The next most important thing is creating an internal group that can handle the use of AI within your company, making sure the important stakeholders are involved, whether that's people from the C-suite, the board, security personnel or anyone else who may be involved in the development and use of AI solutions and tools. You want to have them in the room to make sure the right decisions are being made on the use of AI.

Interested in hearing from more Polsinelli lawyers? Learn more about data privacy and cybersecurity or intellectual property.


About Polsinelli

Polsinelli is an Am Law 100 firm with more than 1,000 attorneys in 22 offices nationwide. Recognized as one of the top firms for excellent client service and client relationships, Polsinelli is committed to meeting our clients’ expectations of what a law firm should be. Our attorneys provide value through practical legal counsel infused with business insight, offering comprehensive corporate, transactional, litigation and regulatory services with a focus on health care, real estate, finance, technology, private equity and life sciences. Polsinelli LLP in California, Polsinelli PC (Inc) in Florida.