ChatGPT is based on a large language model (LLM) that can process vast amounts of information. But even ChatGPT reaches its limits, especially when it does not understand a question. What happens then?
Self-awareness in the face of cluelessness
🎭 How does the AI behave when it answers questions it does not understand? There are numerous examples of AI tools confidently answering questions even though the answers are made up out of thin air.
🧠 In a world full of information and possibilities, it is not surprising that AI sometimes gets it wrong. Instead of admitting that it does not know, the AI becomes inventive and fabricates an answer.
Small Language Models
🦾 Small Language Models (SLMs) are powerful and perfectly tailored to specific applications. They do not require vast amounts of data, but work with high-quality, structured information, which increases their reliability🌐.
⚙️ In addition, SLMs are easier and faster to train due to their smaller size, which is crucial if AI is to be used on an everyday basis and in a resource-friendly manner.
💻 SLMs impress with their efficient data processing. With the help of knowledge graphs, they structure information from different sources and highlight relevant entities and connections📈. This focus on relationships and networks enables the models to stay within clear boundaries of what they actually know and to minimise errors.
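To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a language model could ground its answers in a small knowledge graph of entities and relations. The entity names, relations and helper functions are illustrative assumptions, not the implementation of any specific product.

```python
# Hypothetical sketch: a tiny knowledge graph of (subject, relation, object)
# facts that a small model could use to ground its answers.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

# Structured, curated facts (illustrative placeholders)
KNOWLEDGE_GRAPH = [
    Triple("Model-X", "developed_by", "Acme Corp"),
    Triple("Model-X", "parameter_count", "3 billion"),
    Triple("Acme Corp", "headquartered_in", "Berlin"),
]

def lookup(entity: str) -> list[Triple]:
    """Return only the facts connected to a given entity."""
    return [t for t in KNOWLEDGE_GRAPH if entity in (t.subject, t.obj)]

def answer(entity: str) -> str:
    """Answer strictly from the graph; admit ignorance instead of inventing."""
    facts = lookup(entity)
    if not facts:
        return f"I have no verified information about '{entity}'."
    return "; ".join(f"{t.subject} {t.relation} {t.obj}" for t in facts)

print(answer("Model-X"))
print(answer("Unknown-Entity"))  # admits ignorance rather than hallucinating
```

The point of the sketch is the boundary: if an entity is not in the graph, the system says so instead of guessing.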
Safety and security
🔐 An exciting aspect is the combination of SLMs and Federated Learning, an approach in which a model is trained on multiple decentralised devices simultaneously. The training data remains on the respective devices and does not end up in central data centres.
📁 This is a significant step towards better protecting personal data from unauthorised access and misuse.
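For illustration, below is a toy sketch of federated averaging in Python: each simulated device takes a training step on its own private data, and only the resulting model weights are sent back and averaged centrally. The data, the linear model and the update rule are simplified assumptions, not a production federated-learning setup.

```python
# Toy federated averaging: data stays on each device, only weights are shared.
import numpy as np

def local_training_step(weights, local_x, local_y, lr=0.1):
    """One gradient step of linear regression on data that never leaves the device."""
    predictions = local_x @ weights
    gradient = local_x.T @ (predictions - local_y) / len(local_y)
    return weights - lr * gradient

def federated_round(global_weights, devices):
    """Each device updates a copy of the global model; the server averages the weights."""
    updated = [local_training_step(global_weights.copy(), x, y) for x, y in devices]
    return np.mean(updated, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three simulated devices, each holding its own private data
devices = []
for _ in range(3):
    x = rng.normal(size=(50, 2))
    devices.append((x, x @ true_w + rng.normal(scale=0.1, size=50)))

weights = np.zeros(2)
for _ in range(200):
    weights = federated_round(weights, devices)

print(weights)  # approaches true_w without ever pooling the raw data centrally
```

The design choice worth noting: the central server never sees the raw examples, only the averaged model parameters, which is what makes the approach attractive for personal data.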
SLMs do not offer the endless wealth of answers and information that ChatGPT does, but they stand out with reliable and comprehensible answers. What is your opinion on this topic?