End-user documentation

Clear instructions to help you navigate and use the platform with confidence.

Weak Supervision
See how we use weak supervision as a method to combine and denoise noisy labels.
Read here
Versioning
See how versioning in Cognition works.
Read here
Uploading data
See how to upload data into Cognition.
Read here
Attribute visibility
Change the attribute visibility in the settings page.
Read here
Tmp doc retrieval
See how PDFs uploaded while chatting can be used for temporary document retrieval.
Read here
Teams
Put your users into teams to manage access to projects.
Read here
Role description
See which roles are supported in Cognition.
Read here
Reference data
This is the knowledge base a GenAI assistant can access.
Read here
Record IDE
Use the record IDE to quickly analyze the structure of a single record.
Read here
Project creation and data upload
See how to create projects in refinery and upload data.
Read here
Neural search
Use refinery's neural search to identify outliers in your dataset.
Read here
Multi-user labeling
Label data with multiple users in refinery and review the inter-annotator agreement.
Read here
Monitoring
You can use refinery to analyze and monitor your data.
Read here
Manual Labeling
See how you can manually label data in refinery.
Read here
Managing roles
See how you can manage roles in refinery.
Read here
Macros
See how to use macros (repeatable action sequences).
Read here
Labeling Tasks
See how you can create labeling tasks to label your data.
Read here
Heuristics
See how you can use heuristics in refinery to automate data labeling.
Read here
Evaluating heuristics
See how you can evaluate heuristics in refinery.
Read here
Environment Variables
Environment variables are required to connect to external services such as cloud LLMs.
Read here
Embedding Integration
Add embeddings directly from OpenAI, Azure, Hugging Face, Cohere, or other providers.
Read here
Data Processing Documentation
Cognition includes a data processing pipeline known as ETL (Extract, Transform, Load). This pipeline handles data provided in the form of PDF files.
Read here
Data Management
Data management is a key feature of refinery. See how we enable it here.
Read here
Data Export
See how you can export the data in your refinery project.
Read here
Conversations
Download and analyze conversations from a GenAI agent.
Read here
Configuration Page
See how you can set relevant data settings and labeling tasks for your project.
Read here
Comments
See how you can add comments to work better with colleagues or to set reminders for your own process.
Read here
Chat UIs
Use the pre-built chat UI to get your system up and running as fast as possible.
Read here
Bricks Integration
The refinery platform includes a bricks integration feature that simplifies finding and importing available brick modules for specific use cases. The integrator prompts you to either assign values to user-defined variables or adjust the default values as needed. Once these adjustments are made, the code is ready to run within the refinery environment (a sketch follows below).
Read here
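As a rough illustration, an integrated brick typically ends up as a small Python function plus the user-defined variables the integrator asks about. The module name, the `ATTRIBUTE` variable, and the record access pattern below are illustrative assumptions rather than the exact code the integrator generates; refer to the linked page for the authoritative flow.

```python
# Hypothetical result of a bricks integration: a sentiment classifier brick.
# ATTRIBUTE is the user-defined variable the integrator asks you to set
# (or to keep at its default) before the code is run in refinery.
ATTRIBUTE: str = "headline"  # assumed text attribute in your project

def textblob_sentiment(record):
    """Return a sentiment label for the configured text attribute."""
    from textblob import TextBlob  # third-party dependency assumed to be installed

    # Assumption for this sketch: text attributes are read via .text.
    polarity = TextBlob(record[ATTRIBUTE].text).sentiment.polarity
    if polarity < -0.3:
        return "negative"
    if polarity > 0.3:
        return "positive"
    return "neutral"
```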
Analytics and Monitoring
See how your GenAI agent performs and how you can improve it.
Read here
Agent's Behaviour
See how to configure the action steps a GenAI agent can take.
Read here
Agent Overview
See where to get started in Cognition.
Read here
Adding attributes
Refinery provides a feature where users can enhance their existing data by computing new attributes. This is achieved by writing Python code that operates on the current data records, optionally calling external APIs, to produce and return new values for these added attributes (a sketch follows below).
Read here
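To make this concrete, the snippet below is a minimal sketch of such an attribute calculation function, assuming a text attribute named `headline` already exists in the project. The function name and the way record values are accessed are assumptions for illustration; follow the linked page for the exact signature refinery expects.

```python
# Hypothetical attribute calculation: refinery would call this once per record
# and store the returned value as the new attribute.
def word_count(record):
    # Assumption for this sketch: existing text attributes are read via .text.
    headline = record["headline"].text
    # Any Python logic (including calls to external APIs) could go here;
    # this example simply derives a word count from the existing attribute.
    return len(headline.split())
```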