
Reinforcement Learning

Reinforcement learning (RL) is a machine learning paradigm where an agent learns optimal behavior by interacting with an environment and receiving rewards or penalties. RLHF (Reinforcement Learning from Human Feedback) is a key technique used to align LLMs with human preferences, making their outputs more helpful and safe. RL is also behind breakthroughs in game-playing AI and robotics.
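The agent/environment/reward loop can be sketched with tabular Q-learning on a made-up toy environment: a five-cell corridor where the agent earns a reward only for reaching the rightmost cell. The environment, hyperparameters, and episode count below are illustrative, not a production setup.

```python
import random

# Hypothetical toy environment: a corridor of 5 cells; the agent starts
# at cell 0 and earns a reward of +1 for reaching cell 4 (the goal).
N_STATES = 5
ACTIONS = [1, -1]            # move right / move left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action and return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + action))
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned policy: from every non-goal cell, move right toward the reward
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

RLHF builds on the same idea, but the "environment" is text generation and the reward comes from a model trained on human preference rankings rather than a hand-written rule.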

#ai

Related Terms

Neural Network

A neural network is a computational model inspired by the human brain, consisting of layers of interconnected nodes (neurons) that process data by adjusting weighted connections during training. Deep neural networks with many layers form the foundation of modern AI, powering everything from image recognition to language understanding. Common architectures include feedforward networks, convolutional networks (CNNs), and transformers.
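The core computation, a forward pass through weighted connections and nonlinear activations, fits in a few lines. The weights below are hand-picked for illustration (they implement XOR-style logic); in a real network they would be learned during training.

```python
import math

def relu(x):
    """Rectified linear unit: the most common hidden-layer activation."""
    return max(0.0, x)

def sigmoid(x):
    """Squash a value into (0, 1), often used for binary outputs."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Propagate two inputs through one hidden layer to a single output."""
    # Each hidden neuron: weighted sum of inputs + bias, then ReLU
    hidden = [relu(sum(w * x for w, x in zip(weights, inputs)) + b)
              for weights, b in zip(w_hidden, b_hidden)]
    # Output neuron: weighted sum of hidden activations, squashed by sigmoid
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Hand-chosen weights (illustrative): the network outputs > 0.5 only when
# exactly one of the two inputs is 1, i.e. it computes XOR
y = forward([1.0, 0.0],
            w_hidden=[[1.0, 1.0], [1.0, 1.0]],
            b_hidden=[0.0, -1.0],
            w_out=[1.0, -2.0],
            b_out=-0.5)          # ≈ 0.62, read as "true"
```

Training consists of adjusting those weights and biases, typically by backpropagating the gradient of a loss function, until the network's outputs match the training data.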

Natural Language Processing

Natural Language Processing (NLP) is a branch of AI focused on enabling computers to understand, interpret, and generate human language. NLP powers applications like chatbots, translation services, sentiment analysis, and text summarization. Modern NLP has been transformed by transformer-based models, which achieve remarkable performance on tasks that previously required extensive hand-crafted rules.
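One of the hand-crafted-rules approaches mentioned above can be sketched as a lexicon-based sentiment classifier. The word lists and scoring rule here are made up for illustration; modern NLP replaces this with learned models.

```python
# Tiny illustrative sentiment lexicons (not a real linguistic resource)
POSITIVE = {"great", "good", "excellent", "love", "helpful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "broken"}

def tokenize(text):
    """Lowercase and split on whitespace, stripping common punctuation."""
    return [w.strip(".,!?;:").lower() for w in text.split()]

def sentiment(text):
    """Label text by counting positive vs. negative lexicon hits."""
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

The brittleness of this approach (it misses negation, sarcasm, and any word outside the lexicon) is exactly why transformer-based models took over.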

Computer Vision

Computer vision is a field of AI that trains machines to interpret and understand visual information from images and videos. Applications include object detection, facial recognition, autonomous driving, and medical image analysis. Modern computer vision leverages deep learning models like CNNs and vision transformers (ViT), and increasingly integrates with language models in multimodal AI systems.
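The building block of the CNNs mentioned above is convolution: sliding a small kernel over an image and taking weighted sums. A minimal sketch, using a Sobel-like vertical-edge kernel and a toy 4x4 "image" (both illustrative):

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (technically cross-correlation,
    as implemented in most deep learning libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Weighted sum of the kernel over the image patch at (i, j)
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 4x4 image with a dark left half and a bright right half
image = [[0, 0, 9, 9]] * 4

# Sobel-like kernel: responds where brightness changes left-to-right
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

edges = convolve2d(image, kernel)   # strong response at the dark/bright boundary
```

A CNN stacks many such kernels, with learned rather than hand-picked weights, so early layers detect edges and later layers detect increasingly abstract patterns.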

Embedding

An embedding is a dense numerical vector representation of data — such as text, images, or code — in a high-dimensional space where semantically similar items are positioned closer together. Embeddings are fundamental to semantic search, recommendation systems, and RAG pipelines. They are generated by specialized models and typically stored in vector databases for efficient similarity lookups.
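"Positioned closer together" is usually measured with cosine similarity. The three-dimensional vectors below are made up for illustration; real text embeddings have hundreds or thousands of dimensions and come from an embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings: "cat" and "kitten" point in similar directions,
# "spreadsheet" points elsewhere (values are invented for this sketch)
cat = [0.9, 0.1, 0.2]
kitten = [0.8, 0.2, 0.25]
spreadsheet = [0.1, 0.9, 0.7]

sim_related = cosine_similarity(cat, kitten)        # close to 1.0
sim_unrelated = cosine_similarity(cat, spreadsheet) # much lower
```

Semantic search and RAG pipelines do exactly this at scale: embed the query, then ask a vector database for the stored items with the highest similarity.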

Context Window

A context window is the maximum amount of text (measured in tokens) that an LLM can process in a single interaction, encompassing both the input prompt and the generated output. Larger context windows allow models to handle longer documents, maintain extended conversations, and reason over more information at once. Context window sizes have grown rapidly, from roughly 4K tokens in early GPT models to hundreds of thousands of tokens in current models like Claude.
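A practical consequence: long conversations must be truncated to fit the window. A minimal sketch of the common drop-oldest-messages strategy, using a word count as a rough stand-in for a real tokenizer (actual models use subword tokenizers, so these counts are only approximate):

```python
def count_tokens(text):
    """Crude proxy: one token per whitespace-separated word.
    Real tokenizers (e.g. BPE) split text into subword units."""
    return len(text.split())

def fit_to_window(messages, max_tokens):
    """Keep the most recent messages that fit the token budget,
    dropping the oldest first (a common, if crude, strategy)."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = [
    "Hello, can you help me plan a trip?",
    "Sure! Where would you like to go?",
    "Somewhere warm, maybe in March.",
]
recent = fit_to_window(history, max_tokens=12)   # oldest message is dropped
```

More sophisticated systems summarize the dropped history instead of discarding it, trading some fidelity for a smaller footprint in the window.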

Prompt Engineering

Prompt engineering is the practice of crafting and optimizing input instructions to guide AI models toward producing desired outputs. Techniques include few-shot examples, chain-of-thought reasoning, role assignment, and structured output formatting. Effective prompt engineering can dramatically improve the quality, accuracy, and consistency of LLM responses without modifying the underlying model.
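The few-shot technique mentioned above amounts to assembling a string: a task description, worked examples, and the new query in the same format. The task, labels, and examples below are invented for illustration; the resulting prompt would be sent to an LLM API.

```python
def few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, labeled examples,
    then the unlabeled query for the model to complete."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # End with the query in the same format, leaving the label blank
    lines.append(f"Text: {query}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    task="Classify the sentiment of each text as positive or negative.",
    examples=[("The food was amazing!", "positive"),
              ("Waited an hour and the order was wrong.", "negative")],
    query="Friendly staff and quick service.",
)
```

Because the examples establish the input/output pattern, the model tends to continue it, which is why few-shot prompting improves consistency without touching the model's weights.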

© 2026 Matyas Prochazka. All rights reserved.