Brand Assets & Company Information
Everything you need to write about Cognisoc. Download our logos, learn about our brand, and access ready-to-use company descriptions.
Logo
Our logo represents a spark of cognition — intelligence radiating outward, running everywhere. Download in SVG for best quality.
Favicon / App Icon
Square format for favicons and app icons
Logo Guidelines
Do
- Use the logo with adequate clear space around it
- Use the official colors (indigo #6366f1 on dark backgrounds, indigo on light)
- Scale proportionally — don't stretch or distort
- Use SVG for digital; use high-resolution PNG for print
Don't
- Change the logo colors or add gradients
- Add effects like shadows, outlines, or glows
- Rotate or flip the logo
- Use the logo smaller than 24px height
Brand Colors
| Name | Hex | RGB | Usage |
|------|-----|-----|-------|
| Accent | #6366f1 | rgb(99, 102, 241) | Primary brand color, CTAs, highlights |
| Accent Hover | #818cf8 | rgb(129, 140, 248) | Hover states, secondary accent |
| Background | #0a0a0f | rgb(10, 10, 15) | Primary dark background |
| Background Secondary | #12121a | rgb(18, 18, 26) | Cards, elevated surfaces |
| Text | #e4e4ef | rgb(228, 228, 239) | Primary text color |
| Text Muted | #8888a0 | rgb(136, 136, 160) | Secondary text, descriptions |
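The hex and RGB values above are two encodings of the same six colors. A minimal TypeScript sketch of the palette as design tokens, with a helper that converts a `#rrggbb` hex string to its RGB components (the `palette` object and `hexToRgb` function are illustrative, not part of any Cognisoc API):

```typescript
// Cognisoc brand palette, transcribed from the table above.
type BrandColor = { hex: string; rgb: [number, number, number]; usage: string };

const palette: Record<string, BrandColor> = {
  accent:              { hex: "#6366f1", rgb: [99, 102, 241],  usage: "Primary brand color, CTAs, highlights" },
  accentHover:         { hex: "#818cf8", rgb: [129, 140, 248], usage: "Hover states, secondary accent" },
  background:          { hex: "#0a0a0f", rgb: [10, 10, 15],    usage: "Primary dark background" },
  backgroundSecondary: { hex: "#12121a", rgb: [18, 18, 26],    usage: "Cards, elevated surfaces" },
  text:                { hex: "#e4e4ef", rgb: [228, 228, 239], usage: "Primary text color" },
  textMuted:           { hex: "#8888a0", rgb: [136, 136, 160], usage: "Secondary text, descriptions" },
};

// Parse "#rrggbb" into [r, g, b] by masking out each byte.
function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.slice(1), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}
```

Keeping both encodings in one token table makes it easy to verify they stay in sync when the palette changes.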
Typography
Inter
Primary typeface for headings and body text
Weights: 400, 500, 600, 700, 800, 900
Google Fonts
JetBrains Mono
Monospace typeface for code and technical content
Weights: 400, 500, 700
Google Fonts
About Cognisoc
One-liner
Cognisoc builds open-source tools for running large language models locally on any device, in any language.
Boilerplate
Cognisoc is building the full open-source stack for local LLM inference. From bare-metal unikernels to mobile apps, Cognisoc's projects enable developers to run large language models anywhere — without cloud APIs, without vendor lock-in, and without sending sensitive data off-device. The Cognisoc stack supports 47 model architectures, 7 GPU backends, and provides native bindings for 6 programming languages. All projects are open source under MIT or Apache-2.0 licenses.
Mission
"AI inference should run everywhere."
We believe large language models shouldn't be locked behind cloud APIs. Every device — from a mobile phone to a bare-metal server — should be able to run AI locally.
The Cognisoc Stack
- Modular inference runtime — the engine that powers everything. 47 architectures, hybrid KV cache, continuous batching.
- Drop-in Ollama replacement with native language bindings. OpenAI and Anthropic API compatible.
- 100% on-device mobile inference via Flutter FFI. Vision, multimodal, tool calling, streaming.
- Bare-metal unikernel. No OS overhead, direct hardware access, minimal footprint.
- Educational LLM implementation. 285+ tests as documentation, 18 model families.