Fund robots can make advice more objective



Financial advisers often have an information advantage when recommending products to consumers, as the products are frequently difficult to understand. Finansinspektionen cites this as a reason for the authority to prioritize closer examination of financial advice during the year.

“In our supervision, we have seen that many advisers have difficulty managing the conflict of interest that arises when companies receive commissions to mediate certain financial products,” the authority writes in a document.

At the same time, a digital transition is taking place in the financial sector. The Lysa fund robot, for example, automatically makes decisions about how to invest the client’s money, while the Opti service automatically reminds clients who want to withdraw money during a temporary stock market drop that it may be prudent to wait.

Regardless of whether it is a robot or a human, the same regulations apply, according to Erik Thedéen, Director General of the Swedish Financial Supervisory Authority. But well-designed, automated advice can be more objective, he believes.

– I do not know whether we have expressed an official opinion, but I can give my own view of the phenomenon, he says, and continues:

– I think the advice may be somewhat more impartial than in a conversation, where there is always the risk that the person giving the advice looks at what earns them money rather than what is good for the client. But it depends on the parameters the robot is given. If and when we review that type of advice, it is the design of the robot that should matter most.

Erik Thedéen is not worried that a robot risks making the financial product even more complex and difficult to understand.

– Of course it can be difficult to understand how a robot thinks, but turn it around: is it completely obvious how a human adviser thinks when giving advice to a consumer? I would say that both have their difficulties.

The basic problem, he says, is that there is a tension between the current regulations and the business logic of the companies: an expensive fund can be good for the adviser but bad for the client.

– That dilemma does not become any easier just because it is a human being doing it rather than a robot. At least I would not make that argument outright; you have to look at it case by case, says Erik Thedéen.

Could it be relevant for you to look more specifically at this type of service?

– I see it as part of our analysis of the advisory system. Should it come up in our supervisory measures or in our supervisory dialogue, it is just as natural to examine it as it is to examine human advice.
