Artificial intelligence for language processing, such as that used in Google search, is advancing rapidly. Such systems can also be useful for evaluating customer feedback or responding to customer inquiries. The field in question here is Natural Language Processing (NLP), a specific sub-area of AI: software of this type is able to understand and process human language.
More and more companies are aware of the importance of consumer insights. The more you know about your customers, the market, and the use of your own products, the better the decisions you can make. Nevertheless, in this area the wrong data, or none at all, are often collected, or the data are insufficiently evaluated. To gain a real understanding of target groups and their wishes, qualitative data are necessary, because only qualitative data can put quantitative results in the right context.
Accordingly, feedback in the form of user-generated content (UGC) is required. A company can generate this via social media, email inquiries or customer reviews. Without AI support, however, it is almost impossible to systematically analyze this type and amount of content. In addition, the subjective perception of an individual could distort the overall result.
At first glance, machine language processing may not sound very exciting. But keep in mind that our language is complex and unstructured. For people, this complexity shows up at most in occasional misunderstandings; for machines, even basic processing is a major challenge. The few rules that exist (grammar, punctuation, spelling) are only observed sporadically, especially in user-generated content. Appropriate programs are required to process unstructured data from chats or product reviews.
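To make this concrete: a first, deliberately naive preprocessing step for noisy chat or review text might look like the following sketch. The cleanup rules and the example sentence are purely illustrative, not taken from any specific product.

```python
import re

def normalize(text: str) -> list[str]:
    """Very rough cleanup of noisy user-generated text.

    Lowercases, strips URLs and non-letter characters, and
    collapses exaggerated letter repetitions ("soooo" -> "soo")
    before splitting into tokens.
    """
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)    # drop URLs
    text = re.sub(r"(\w)\1{2,}", r"\1\1", text)  # soooo -> soo
    text = re.sub(r"[^a-zäöüß\s]", " ", text)    # keep letters only
    return text.split()

print(normalize("LOVE it soooo much!!! https://example.com"))
# -> ['love', 'it', 'soo', 'much']
```

Real pipelines go much further (spelling correction, emoji handling, tokenizers trained on social-media text), but the principle is the same: impose structure before any analysis.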
Reading tip: AI neglect – when the bot starts rusting
Thanks to machine learning (ML), or more precisely deep learning (DL), modern AIs can learn languages independently (unsupervised). However, this requires a large amount of data from which the language can be learned: the more data, the better and more precisely the AI can work.
- Facebook faces
Computers can learn to distinguish human faces. Facebook uses this for automatic face recognition.
- Machine learning
Contrary to what the picture suggests, machine learning is a sub-area of artificial intelligence – but a very important one.
Machine beats human: In 2016, Google’s machine learning system AlphaGo defeated the world champion in the game Go.
- GPUs
The leading companies in machine learning use graphics processors (GPUs) for parallel processing of data – for example from Nvidia.
- Deep learning
Deep learning processes first learn low-level elements such as brightness values, then elements on the middle level and finally high-level elements such as entire faces.
- IBM Watson
IBM Watson integrates several artificial intelligence methods: In addition to machine learning, these are algorithms for natural language processing and information retrieval, knowledge representation and automatic inference.
To decide between building and buying an AI solution, a company should work through the following three topics:
1. The problem: What purpose should the AI serve?
Is the problem that artificial intelligence is supposed to solve linked to a core function of your company, or does it contribute to sales growth? If not, you should reconsider taking on the expense of developing your own program. The programs that set a company apart from others and make a significant contribution to its position in the market are the ones developed in-house. In principle, a company can of course also differentiate itself when buying a finished AI solution by choosing the right provider, since there are differences in the quality and functionality of the AI.
2. The cost: How much does an operational AI cost?
While development costs have to be calculated for self-programmed artificial intelligence, a finished solution instead carries license or product costs. Large and complex projects like NLP in particular involve a high development risk and require a large development team and many working hours; such a project can quickly take several months. Long lead times (ramp-up time) should also be expected in some cases. Software as a Service (SaaS) or purchase options are usually cheaper, and they can be used more or less immediately. However, a certain amount of in-house expertise is usually required here too, since the software has to be integrated into existing systems and the results have to be interpreted correctly. If you want to evaluate UGC, for example, a system would be helpful that can identify different topics and the associated sentiment in the content. Your AI should be able to handle several tasks:
Part-of-speech tagging
Named entity recognition
Topic detection
Sentiment analysis
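As a rough illustration of what the first two tasks mean, here is a toy, rule-based sketch with a hypothetical mini-lexicon. Production systems use trained statistical models (for example in libraries such as spaCy or NLTK), not hand-written word lists.

```python
# Hypothetical mini-lexicons for illustration only.
POS_LEXICON = {"the": "DET", "customer": "NOUN", "loves": "VERB",
               "in": "ADP", "this": "DET", "product": "NOUN"}
KNOWN_ENTITIES = {"berlin": "LOCATION", "nvidia": "ORGANIZATION"}

def pos_tag(tokens):
    """Assign each token a part-of-speech tag (NOUN as fallback)."""
    return [(t, POS_LEXICON.get(t.lower(), "NOUN")) for t in tokens]

def find_entities(tokens):
    """Mark tokens that match a list of known named entities."""
    return [(t, KNOWN_ENTITIES[t.lower()])
            for t in tokens if t.lower() in KNOWN_ENTITIES]

tokens = "The customer in Berlin loves this Nvidia product".split()
print(pos_tag(tokens))
print(find_entities(tokens))
# -> [('Berlin', 'LOCATION'), ('Nvidia', 'ORGANIZATION')]
```

The sketch shows why these tasks matter for feedback analysis: once entities and word roles are identified, sentiment can be attached to the right product or brand rather than to the text as a whole.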
Reading tip: Artificial intelligence – what distinguishes successful AI users
Further complexity lies in selecting the right model architecture, the right algorithms, and other AI components for the application at hand.
3. The risks: Which option entails which risks?
It is not uncommon for software development projects to experience delays. This can cause costs to rise rapidly. Qualified personnel in this area are in short supply. Specialists are not only needed for the development, but also for the operation and ongoing development of the software. There is also the risk that the desired quality cannot be achieved.
Reading tip: Artificial intelligence – easy programming? Inconvenient AI truths
The purchase option carries the opposite risks. Since control of the source code usually rests with the provider, no ad-hoc changes can be made. If, for example, a bug has crept into the system, the provider must first be contacted to correct the error, and such support can be a long time coming. There is also the question of whether the AI performs all its tasks properly in practice. Are topics and sentiments identified correctly? With what accuracy, precision, and recall (hit rate) does the system work? After all, artificial intelligence solutions are essentially probabilistic systems without fixed rules, so there is no 100 percent reliability. But human inter-coder reliability never reaches 100 percent either; there are always differences in interpretation.
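For readers unfamiliar with these metrics, here is a minimal sketch of how accuracy, precision, and recall could be computed for, say, a sentiment classifier. The labels and figures are invented for illustration.

```python
def evaluate(predicted, actual, positive="negative"):
    """Compute accuracy, precision and recall for one target class.

    The "positive" class is the one we care about finding, e.g.
    negative reviews that need a response.
    """
    pairs = list(zip(predicted, actual))
    tp = sum(p == a == positive for p, a in pairs)       # correctly flagged
    fp = sum(p == positive != a for p, a in pairs)       # flagged by mistake
    fn = sum(a == positive != p for p, a in pairs)       # missed
    return {
        "accuracy": sum(p == a for p, a in pairs) / len(pairs),
        "precision": tp / (tp + fp),  # share of flagged items that were right
        "recall": tp / (tp + fn),     # share of true positives actually found
    }

actual    = ["negative", "positive", "negative", "positive", "negative"]
predicted = ["negative", "negative", "negative", "positive", "positive"]
print(evaluate(predicted, actual))
# -> {'accuracy': 0.6, 'precision': 0.666..., 'recall': 0.666...}
```

The point of tracking precision and recall separately is practical: a system that flags everything as negative has perfect recall but useless precision, and vice versa, which a single accuracy figure would hide.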
Another risk lies in the data itself and its location. Depending on whether the artificial intelligence and the required data reside on-premises or in the cloud, both computing speed and data security are affected.
- Dr. Christoph Hönscheid, NTT Security
“Success is achieved if you have an overall strategy for protecting confidential data. Of course, the EU General Data Protection Regulation is an unavoidable challenge that companies have to face. It can be an important impetus to really act on data protection. However, it is wise to look beyond compliance and regulation. An overall concept should firstly have legal requirements in mind, secondly obligations towards partners, and thirdly the company’s own interest in protecting its digital property. This is the only way to create a solid basis for using the corresponding technologies. These include DLP, file-based encryption such as digital rights management, or tokenization. A data classification that ultimately determines which of these protective mechanisms apply must be a cornerstone of this overall concept.”
- Christian Nern, KPMG
“Basically, there are technical solutions or BI solutions to find out where the greatest protection needs lie in a company. The most important thing is that the employees in the specialist departments are not only trained, but also know exactly what they are allowed to do with the data. This is achieved much better by sharing examples of correct and incorrect behavior, or through example scenarios and subject-specific templates. In this way, you gradually arrive at the quality and security culture that every company needs for security by design in order to use AI in a targeted manner.”
- Marisa Parrilla, Horn & Company
“The cultural aspect must go beyond data governance and also take ethical aspects into account, because a company shouldn’t do everything that is allowed under the GDPR. Data protection has a lot to do with trust, and you shouldn’t be afraid to create this transparency. Rather, companies have to integrate both aspects into a data strategy, and thus into an overall strategy, in order to achieve long-term competitive advantages from their data.”
- Dr. Jean-Michel Lourier, Lufthansa Industry Solutions
“When it comes to data protection, there are two things to differentiate between: security and privacy. While the former is well covered, the latter is still very uncertain. Due to the vagueness of the GDPR, you often don’t know exactly how far you have to go to be really compliant – and that’s the problem. As a result, you always try to stay on the safe side, which means you miss out on many opportunities in data analytics.”
- Stefan Zsegora, Telefónica Germany NEXT
“If ten data scientists ask the data protection officer at the same time whether what they are doing is okay, it will probably take two years until this is clarified. Therefore, on the one hand, an environment is needed in which the data scientist has legal certainty independent of the use case. For this reason, we have developed a special anonymization platform, for example, which provides exactly this security. On the other hand, you need certification bodies that certify transparently for everyone that what is done with the data is legally correct. Because in the customer business in particular, you have no chance if there is even a hint of impropriety in the air.”
- Dominik Koch, Teradata
“Data analytics and data protection are not mutually exclusive, but always go hand in hand. Data scientists must therefore be familiar with the general and industry-specific guidelines for data protection and data security. In order to know which data they are allowed to work with and which they are not, they have to be trained accordingly. To do this, they have to work closely with IT security specialists and have access to their know-how in complex cases.”
In conclusion, there is unfortunately no general answer to this question. The best solution always depends on the individual objective. Here is a summary:
Understand what problem the software should solve. Can it give your company an advantage and differentiate you from the competition? Or should it simply automate everyday functions of your company?
Do you understand the real costs of buying versus developing software? These include the team, license models, and implementation and maintenance costs.
Do you understand the risks associated with purchasing or developing software?
If AI can contribute significantly to competitive advantage and to the success of your company, you should accept the complexity and develop it yourself. This allows you to set your own standards; you remain more flexible and learn more about the system, which makes it more versatile.
Buy products that do not directly contribute to the core task of your company, or products whose quality level already meets all requirements. Most AI systems available for purchase are already optimized for quality, speed, and scalability. So if you want to get as much as possible out of your data as quickly as possible, this option is right for you. (bw)