
Timo Kullerkupp (RASK): Artificial intelligence should be a competitive advantage rather than a minefield for startups

LIFT99 x RASK law firm
Our partner RASK, a top-tier Estonian law firm, offers free legal consultations to LIFT99 community members at the Telliskivi Hub every second Wednesday of the month. Today we feature an interview with Timo Kullerkupp, a partner at RASK.


AI solutions often help smaller companies stand up to larger competitors, believes Timo Kullerkupp, partner at RASK law firm. However, this is true only if users are aware of the risks and comply with legal regulations. Otherwise, legal and ethical issues may arise instead of competitive advantages.

How actively do you use AI yourself?

Like others at RASK, I use AI actively. Personally, I started to take a more serious interest in AI and to use it when GPT-3 was released. Language is a core tool in a lawyer’s work, and the various AI solutions based on large language models (LLMs) offer us valuable assistance. The very first step should be to map your business processes and identify where LLMs and other AI tools can feasibly be used; only then should you select the tools that meet those needs.

So you see AI more as an opportunity than a threat?

Above all, it is an opportunity for smaller companies to stand against their larger competitors. Skilful use of AI can support competitiveness. It also benefits service quality, as well as helps educate the customer, as many of their first questions can be answered even before contacting the law firm. However, AI is a powerful tool only when we know how to use it properly, and it is important to know the risks and differences that come up when using it. For example, one fundamental question about the future of our profession is how working hand-in-hand with AI can help you become a better and more valuable consultant. This question is relevant not only to law firms but also to any other consultancy profession.

There are various risks to AI users. What is the general level of awareness among users when it comes to the possibilities and risks of AI?

I dare say that general awareness is low. Many people have blind faith in technology, and not just in AI and LLMs. Applications like Outlook or Microsoft’s servers are used in exactly the same way, without real awareness of the risks they bring. For example, people don’t always consider that using these solutions essentially involves disclosing information to a third party. The much-discussed regulation of AI mainly concerns developers and those implementing or launching the products. For users, other bodies of law and the issues they address are critically important – for example, data confidentiality, personal data protection and intellectual property rights.

What should startups or early-stage growth companies definitely be aware of when using AI?

There are various excellent tools available for product development, process building and risk mapping. I encourage everyone to be daring and experiment more. However, it’s important to read the user terms first to know who will own the output and how it can be used. For example, in some cases, commercial use of anything the AI outputs is prohibited. In other cases, the developers retain ownership, and the user only receives a license to use it. So the terms of use can be very different, and it is important to read them to avoid breaching them. Also, it’s a common misconception that once you have a paid solution, you are permitted to enter any data into it. As a private consumer, you can get away with not reading the user terms, but businesses have a stricter duty of care.

How different is the situation for developers that use AI?

If you integrate an existing AI solution into a product, there are various licences to consider, so it’s important to understand what your rights are when using the solution. You should also think about how the data moves: how much data you provide and whether you retain it. A crucial issue is your overall technical architecture, where the fundamental question is whether the terms of your customer contracts actually allow you to implement it. If integrating an existing AI solution is a critical part of the business and the product, it is just as critical to map the associated risks. To mitigate them, you should build solutions that are agnostic to the underlying AI model.

There are many kinds of information, such as personal data and trade secrets, that should never be disclosed to LLMs?

Exactly. If you are using any of the commercial LLMs and consumer products, you should definitely avoid entering personal data or any of your partners’ and clients’ trade secrets. Personally, I wouldn’t enter any copyrighted works that I have agreed to keep confidential or not to disclose. Various restrictions arise from the law, including EU regulations and the GDPR. In addition, many contracts contain confidentiality clauses. Inputting such data into an AI solution may therefore be interpreted as disclosing it to a third party and, consequently, as a breach of those obligations.

You have emphasised the importance of staff training and guidelines. Where should one start with that?

You should start with the easy things and a preliminary discussion – agree on what you will do and what you will not do. The risks in your area of business need to be mapped out, and this should become a regular part of the company’s broader risk assessment. After that, you should prepare guidelines, keep them up-to-date and train the staff. Guidelines and staff awareness considerably reduce the risks.

Who actually owns our GPT conversations?

There is still no clear answer to that, and it varies between tools. There are unresolved questions in legal theory here, such as who or what an author is and whether a machine or a legal entity can be one. It’s an acute problem that will have to be resolved soon, but so far it hasn’t been. At the moment, it’s best to follow the principle that everything that gets uploaded gets disclosed.

Should you mention using AI to your clients and business partners?

It should definitely be stated in the general terms of service so that the client or partner can give consent. The scope of AI involvement is another question. If AI just slightly improves a document you wrote, there is no need to mention it specifically. But if AI prepared the initial draft and you only added a little of your own work, it should definitely be mentioned. Ultimately, what matters is how much AI content there is in the final result and what potential risks it exposes the client or partner to. Transparency and awareness are important keywords to keep in mind.

The threat of fines should not be an end in itself, yet the fines associated with violations are something of a growing trend?

The government or a business partner may impose a fine or penalty. The government may impose fines for violations regarding GDPR and personal data processing. We will definitely see a significant increase in fines in the coming years, as the government badly needs money and the period of adaptation has dragged on for too long, even as businesses are systematically breaking the rules. Fines will certainly be used to influence behaviour. The basics of GDPR have not really taken root in the consciousness of people and entrepreneurs, and although sanctions are not the best educational tools, they do have a role to play.

More important than fines is the damage to reputation, which can have even more serious consequences?

Using AI gives a competitive advantage when done consciously and responsibly. At the same time, negligence can have consequences that cost more than any technological solution. For example, fines for personal data breaches can be up to 20 million euros. A fine can be paid, but recovering from the loss of reputation is much harder, especially for a small business.

In summary, what is the main takeaway and when is it wise to get legal advice?

Mapping risks is definitely the first point where it’s wise to bring in outside assistance. Often, people don’t understand what counts as personal data or a trade secret, so it pays to know which regulations and obligations apply to the data you handle. Another point is implementing proper controls over who may access that data. Overall, this should be a matter of basic business hygiene. And finally, to take the field of digital services as an example: over just the past five years, a large number of EU-level regulations have been adopted that are now gradually coming into force. The volume of regulation is enormous, and it all needs to be examined and understood. It’s unreasonable to think that every business could do this itself, so it’s often better to bring in expertise from outside.