Matyas.

Chain of Thought

Chain of Thought (CoT) is a prompting technique that encourages an LLM to break down complex reasoning into intermediate steps before arriving at a final answer. By explicitly reasoning through each step, models achieve significantly better accuracy on math, logic, and multi-step problems. Extended thinking and "thinking" tokens in models like Claude represent a built-in form of chain-of-thought reasoning.
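The core of the technique is just prompt construction: ask the model to show its intermediate steps, then parse the final line. A minimal sketch (the instruction wording, the example problem, and the sample completion are illustrative, not tied to any specific model API):

```python
# Sketch of chain-of-thought prompting. The model call itself is omitted;
# build_cot_prompt only assembles the prompt text sent to an LLM.

def build_cot_prompt(question: str) -> str:
    """Wrap a question with an instruction to reason step by step."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, then give the final "
        "answer on its own line prefixed with 'Answer:'."
    )

def extract_answer(model_output: str) -> str:
    """Pull the final answer out of the model's reasoning trace."""
    for line in model_output.splitlines():
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return model_output.strip()  # fall back to the raw output

prompt = build_cot_prompt("A train travels 60 km in 45 minutes. What is its speed in km/h?")
# A CoT-style completion might look like this (hand-written sample):
reply = "60 km in 45 min = 60 / 0.75 hours = 80 km/h.\nAnswer: 80 km/h"
print(extract_answer(reply))  # → 80 km/h
```

Extended-thinking models do the same thing internally: the reasoning tokens are generated before the answer, then separated from it.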

#ai

Related Terms

Multimodal AI

Multimodal AI refers to models that can process and generate multiple types of data — such as text, images, audio, and video — within a single system. Models like GPT-4o and Claude can accept both text and image inputs, enabling use cases like visual question answering, document analysis, and UI understanding. This convergence is blurring the lines between previously separate AI disciplines.

Diffusion Model

A diffusion model is a type of generative AI that creates data by learning to reverse a gradual noise-adding process. During training, the model learns to progressively denoise random noise into coherent outputs like images, audio, or video. Diffusion models power tools like Stable Diffusion, DALL-E, and Midjourney, and have become the dominant architecture for high-quality image generation.
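The forward/reverse idea can be shown in one dimension. In a real diffusion model the denoiser is a trained neural network that *predicts* the noise; in this toy sketch the true noise is handed to the reverse step so the arithmetic inverts exactly (purely illustrative):

```python
import math
import random

# Toy 1-D illustration of diffusion: a forward process mixes data with
# Gaussian noise; a reverse step undoes it given a noise prediction.

def forward_step(x: float, alpha: float, noise: float) -> float:
    """Noising step: x_t = sqrt(alpha) * x + sqrt(1 - alpha) * noise."""
    return math.sqrt(alpha) * x + math.sqrt(1 - alpha) * noise

def reverse_step(x_t: float, alpha: float, predicted_noise: float) -> float:
    """Denoising step: invert forward_step given a noise prediction."""
    return (x_t - math.sqrt(1 - alpha) * predicted_noise) / math.sqrt(alpha)

x0 = 1.0                    # "clean" data point
noise = random.gauss(0, 1)  # noise injected by the forward process
alpha = 0.9                 # how much signal survives one step
x1 = forward_step(x0, alpha, noise)
recovered = reverse_step(x1, alpha, predicted_noise=noise)
print(round(recovered, 6))  # → 1.0
```

Training amounts to teaching a network to produce `predicted_noise` from `x_t` alone, repeated over many noise levels.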

Neural Network

A neural network is a computational model inspired by the human brain, consisting of layers of interconnected nodes (neurons) that process data by adjusting weighted connections during training. Deep neural networks with many layers form the foundation of modern AI, powering everything from image recognition to language understanding. Common architectures include feedforward networks, convolutional networks (CNNs), and transformers.
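A forward pass through a tiny feedforward network fits in a few lines. The weights below are arbitrary (a real network learns them via backpropagation); the point is only to show data flowing through weighted connections and nonlinear activations:

```python
import math

# Minimal feedforward network: 2 inputs -> 2 hidden neurons -> 1 output.
# Weights are fixed and arbitrary, for illustration only.

def relu(x: float) -> float:
    return max(0.0, x)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass: weighted sum + activation at each layer."""
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

hidden_weights = [[0.5, -0.2], [0.3, 0.8]]  # one weight row per hidden neuron
output_weights = [1.0, -1.0]
y = forward([1.0, 2.0], hidden_weights, output_weights)
print(0.0 < y < 1.0)  # sigmoid output always lies in (0, 1) → True
```

Stacking more hidden layers gives a "deep" network; CNNs and transformers change how the weighted connections are arranged, not this basic compute pattern.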

Model Context Protocol

Model Context Protocol (MCP) is an open standard developed by Anthropic that defines how AI applications connect to external data sources and tools. MCP provides a universal interface for LLMs to access databases, APIs, file systems, and other services through standardized server implementations. It enables building AI applications that can interact with the real world in a structured, secure way.
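On the wire, MCP messages are JSON-RPC 2.0 requests. A sketch of the general shape of a tool-invocation request follows; the tool name and its arguments are invented for illustration, so treat the details as an approximation of the spec rather than a reference:

```python
import json

# Approximate shape of an MCP tool-call request (JSON-RPC 2.0).
# "query_database" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT 1"},  # hypothetical tool arguments
    },
}

wire = json.dumps(request)           # serialized message sent to the server
print(json.loads(wire)["method"])    # → tools/call
```

The server replies with a matching-`id` JSON-RPC response containing the tool's result, which the client hands back to the LLM.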

Prompt Engineering

Prompt engineering is the practice of crafting and optimizing input instructions to guide AI models toward producing desired outputs. Techniques include few-shot examples, chain-of-thought reasoning, role assignment, and structured output formatting. Effective prompt engineering can dramatically improve the quality, accuracy, and consistency of LLM responses without modifying the underlying model.
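Few-shot prompting, for instance, is just string assembly: prepend worked examples so the model infers the task format. A minimal sketch (the sentiment examples and labels are invented for illustration):

```python
# Sketch of a few-shot prompt builder: worked examples first, then the query
# in the same Input/Output format, leaving the final Output for the model.

def few_shot_prompt(examples, query: str) -> str:
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("The movie was fantastic", "positive"),
    ("Terrible service, never again", "negative"),
]
print(few_shot_prompt(examples, "Pretty average overall"))
```

The trailing `Output:` is the cue: the model completes the pattern the examples established.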

Hallucination

In AI, hallucination refers to when a language model generates confident-sounding but factually incorrect or fabricated information. This occurs because LLMs predict statistically likely text rather than retrieving verified facts. Mitigation strategies include RAG, grounding responses in source documents, structured output validation, and using temperature settings to reduce creative deviation.
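Grounding validation can be sketched crudely: flag answer sentences with little word overlap against the source document. Production systems use retrieval and entailment models rather than this keyword check, and the sentences below are made up, but the sketch shows the idea:

```python
# Naive grounding check: flag answer sentences that share too few words
# with the source document. Illustrative only -- real systems use retrieval
# and entailment models, not bag-of-words overlap.

def unsupported_sentences(answer: str, source: str, min_overlap: int = 2):
    source_words = set(source.lower().split())
    flagged = []
    for sentence in answer.split(". "):
        overlap = len(set(sentence.lower().split()) & source_words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

source = "The Eiffel Tower is 330 metres tall and stands in Paris."
answer = "The Eiffel Tower is 330 metres tall. It was painted blue in 2020"
print(unsupported_sentences(answer, source))  # → ['It was painted blue in 2020']
```

RAG attacks the same problem from the other side: retrieve the source text first, so the model has verified facts to condition on instead of relying on its parametric memory.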

© 2026 Matyas Prochazka. All rights reserved.