
AI Models: How to Use Them


Available AI Models

LazyLines gives you access to 4 cutting-edge AI models. Each has different strengths and credit costs.

Gemini 2.5 Flash

  • Best for: Simple tasks, quick questions, research, when you want to save credits

  • Speed: Fastest

  • Cost: Cheapest (lowest credit consumption)

  • Context Window: Up to 1 million tokens (can handle very long conversations)

  • Use when: Doing basic research, asking simple questions, analyzing lots of content, or running low on credits

Claude 4.5 Sonnet ⭐ (Default)

  • Best for: Writing scripts, creative content, posts, emails - anything requiring authentic voice

  • Speed: Fast

  • Cost: Premium (higher credit consumption)

  • Strength: Best writing quality, understands nuance and tone, excellent at matching your brand voice

  • Use when: Creating final content, writing in your specific style, or needing the highest-quality output

GPT 5.1

  • Best for: Balanced performance for most tasks

  • Speed: Fast

  • Cost: Mid-range (between Gemini and Claude)

  • Strength: Great all-rounder, good at reasoning and analysis

  • Use when: You need quality but want to save some credits compared to Claude

Grok 4

  • Best for: Creative tasks, unique perspectives

  • Speed: Fast

  • Cost: Premium (similar to Claude)

  • Strength: Fresh, creative approach to content

  • Use when: You want a different creative angle or experimental content

Model Comparison Chart

Model             | Cost | Best For                   | Speed
Gemini 2.5 Flash  | $    | Research, simple questions | ⚡⚡⚡
GPT 5.1           | $$   | General tasks, analysis    | ⚡⚡
Claude 4.5 Sonnet | $$$  | Writing, creative content  | ⚡⚡
Grok 4            | $$$  | Creative, unique angles    | ⚡⚡
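
The chart boils down to a simple mapping from task type to model. The sketch below is just an illustration of that decision rule in Python; it is hypothetical, not a LazyLines API, and the task categories are examples of our own:

```python
# Illustrative decision rule based on the chart above. Hypothetical helper
# only -- model selection in LazyLines happens in the chat UI, not in code.

MODEL_FOR_TASK = {
    "research": "Gemini 2.5 Flash",         # cheapest, fastest, huge context
    "simple_question": "Gemini 2.5 Flash",
    "analysis": "GPT 5.1",                  # mid-range cost, good reasoning
    "final_writing": "Claude 4.5 Sonnet",   # best writing quality (default)
    "experimental_creative": "Grok 4",      # fresh creative angle
}

def pick_model(task: str, low_on_credits: bool = False) -> str:
    """Suggest a model name for a task category."""
    if low_on_credits:
        # The article's advice when credits run low: use Gemini for everything.
        return "Gemini 2.5 Flash"
    return MODEL_FOR_TASK.get(task, "Claude 4.5 Sonnet")  # Claude is the default

print(pick_model("research"))                        # Gemini 2.5 Flash
print(pick_model("final_writing"))                   # Claude 4.5 Sonnet
print(pick_model("analysis", low_on_credits=True))   # Gemini 2.5 Flash
```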


Understanding Model Limits

Each AI model has limits on:

  1. Context Window: How much text it can "remember" at once

  2. Output Length: Maximum length of a single response

Context Window Issues:

If you get an error about context limits:

  • Your conversation is too long

  • Messages, files, and links all use context space

Solutions:

  1. Switch to Gemini 2.5 Flash - It has a 1 million token context window (massive)

  2. Start a new chat - Begin fresh when conversations get very long

  3. Be more concise - Break large tasks into smaller chats

Signs You're Hitting Limits:

  • Error message about token/context limits

  • AI responses getting cut off mid-sentence

  • Cannot add more files or links to chat
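
If you want a rough sense of whether a long transcript or document will fit before pasting it in, a common rule of thumb is that one token is roughly four characters of English text (so 1 million tokens is on the order of 700,000 words). The sketch below is a back-of-the-envelope estimator, not anything built into LazyLines; the 200,000-token figure is just an assumed placeholder for a smaller context window:

```python
# Back-of-the-envelope check of how much of a context window a long
# document would use. Rule of thumb: ~4 characters of English text per
# token; actual tokenizers vary by model, so treat this as a rough estimate.

GEMINI_FLASH_WINDOW = 1_000_000  # token limit mentioned in this article
SMALLER_WINDOW = 200_000         # placeholder -- check each model's current limit

def estimate_tokens(text: str) -> int:
    """Very rough token count: ~4 characters per token."""
    return max(1, len(text) // 4)

def context_usage(text: str, window: int) -> float:
    """Fraction of a context window the text alone would occupy."""
    return estimate_tokens(text) / window

if __name__ == "__main__":
    # "long_transcript.txt" is a stand-in for whatever you plan to paste in.
    with open("long_transcript.txt", encoding="utf-8") as f:
        transcript = f.read()
    print(f"Estimated tokens: {estimate_tokens(transcript):,}")
    print(f"Share of a 1M-token window: {context_usage(transcript, GEMINI_FLASH_WINDOW):.1%}")
    print(f"Share of a 200K-token window: {context_usage(transcript, SMALLER_WINDOW):.1%}")
```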


Credit-Saving Strategies

1. Use the Right Model for the Task

  • Research phase: Gemini 2.5 Flash

  • Writing phase: Claude 4.5 Sonnet

  • Don't use Claude for simple questions

2. Batch Your Requests

Instead of:

  • "Write script 1"

  • "Write script 2"

  • "Write script 3"

Do:

  • "Write 3 scripts about [topic], each 60 seconds"

One response = credits saved.

3. Be Specific the First Time

Unclear requests lead to back-and-forth, wasting credits. Give all the context upfront:

Bad: "Write me a script"

Good: "Write a 60-second TikTok script about morning routines for entrepreneurs, use an engaging hook, include 3 actionable tips"

4. Turn Off Brand Profile for General Questions

If you're asking "What's the weather?" or "How does Instagram's algorithm work?" - turn off Brand Profile. No need to use those credits.

5. Use Canvas for Edits

Instead of asking the AI to regenerate entire scripts for small changes, use the canvas editor for manual tweaks. Save credits for bigger revisions.

6. Monitor Your Usage

Check your credit balance regularly in Settings. If you're running low:

  • Switch to Gemini for all tasks

  • Be more selective about profile analyses

  • Batch content creation
