Generate Text with Reasoning Generative AI (Preview)

Definition

The "Generate Text with Reasoning Generative AI (Preview)" action allows users to generate responses and insights using advanced reasoning AI models. This action leverages AI models hosted in Zenphi’s U.S. data center, requiring no additional setup.

Key capabilities include:

  • Generating text-based responses with enhanced reasoning.
  • Supporting multiple AI models (e.g., DeepSeek-R1, GPT-01 Mini).
  • Providing a step-by-step reasoning process (Chain of Thought) for transparency.
  • Customizing generation settings like temperature, max tokens, penalties, and JSON output.

This action is ideal for AI-driven decision-making, automated content generation, and structured reasoning tasks.


Example Use Cases

1. Automated Report Generation

Generate structured reports with reasoning-based insights, summarizing complex data for decision-making.

2. Customer Support Automation

Provide AI-driven responses to customer queries with logical step-by-step explanations.

3. AI-Powered Content Creation

Generate articles, summaries, or creative text while maintaining a consistent tone and style.

4. Code Assistance & Debugging

Use AI to explain code logic, suggest improvements, or debug issues with structured reasoning.

5. Decision-Making Support

Generate AI-powered recommendations by analyzing different factors and providing logical justifications.

6. Education & Learning Assistance

Provide detailed explanations and step-by-step reasoning for complex topics to assist learners.

7. Legal & Compliance Document Analysis

Use AI to interpret legal documents and summarize key clauses with reasoning-based outputs.

8. Research & Data Analysis

Process large volumes of data and generate meaningful insights with AI-backed logical reasoning.


Inputs

Models

This action supports different AI reasoning models, each with unique capabilities:

  • DeepSeek-R1 – A powerful open-source model optimized for logical reasoning and step-by-step decision-making. Ideal for structured problem-solving and detailed AI-generated insights.
  • GPT-01 Mini – A lightweight yet efficient model designed for quick text generation with contextual understanding. Best suited for creative writing, summarization, and chatbot applications.

Examples

The Examples section allows users to guide the AI model by providing sample input-output pairs. This helps refine the AI's responses based on specific expectations.

  • Clicking "Add New Example" lets users customize the AI's behavior by providing their own sample inputs and desired outputs.
  • By including relevant examples, users improve the accuracy and consistency of AI-generated responses, making the output more aligned with their needs.
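
For instance, a summarization workflow might supply two short input-output pairs. The sketch below shows one way to represent such pairs in Python; the field names and wording are illustrative, not a required Zenphi format:

```python
# Hypothetical few-shot examples that steer the model toward terse bullet summaries.
examples = [
    {
        "input": "Summarize: The meeting covered Q3 revenue and hiring plans.",
        "output": "- Q3 revenue reviewed\n- Hiring plans discussed",
    },
    {
        "input": "Summarize: The release adds JSON output and two new models.",
        "output": "- JSON output added\n- Two new models supported",
    },
]
```

Consistent formatting across examples matters more than quantity; two or three well-matched pairs are usually enough to lock in a style.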

System Instructions

This optional field allows users to define guidelines that influence the AI model’s behavior during execution. It can be used to specify:

  • Tone & Style – e.g., formal, casual, technical, or conversational.
  • Response Format – e.g., bullet points, paragraphs, or structured JSON.
  • Specific Constraints – e.g., avoid certain topics, provide concise answers, or ensure neutrality.
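
As a sketch, the three kinds of guidelines above can be combined into a single instruction string; the wording below is a hypothetical example, not a required format:

```python
# Hypothetical system instructions covering tone, response format, and a constraint.
system_instructions = (
    "You are a support assistant. Respond in a formal, concise tone. "
    "Format answers as bullet points. Do not speculate about pricing."
)
```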

Prompt

The Prompt is the main input that drives the AI's response. It defines the task, question, or instruction the AI must process.

  • It can be as simple as a direct question (e.g., "Summarize this article.") or as detailed as a multi-step task (e.g., "Write a formal email inviting a client to a product demo.").
  • Well-structured prompts improve AI-generated responses, making them more accurate and relevant.

Generation Config (Detailed Explanation)

The Generation Config settings allow you to fine-tune how the AI model generates text, balancing creativity, coherence, and structure.

1. Maximum Length

  • Specifies the maximum number of tokens the model can generate in its response.
  • Tokens can be words, subwords, or characters, and in English, one token is roughly four characters or 0.75 words.
  • The limit covers the prompt, examples, and output together, so a longer prompt leaves less room for the response.
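
The four-characters-per-token rule of thumb can be turned into a quick budget check. The helper below is a rough estimate for English text only, not the model's actual tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough English-text estimate: about four characters per token."""
    return max(1, round(len(text) / 4))

maximum_length = 512
prompt = "Summarize this article in three bullet points."
# Tokens left for the response after the prompt is counted against the limit:
remaining = maximum_length - estimate_tokens(prompt)
```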

2. Temperature (Randomness Control)

  • Adjusts the randomness and creativity of the AI's responses.
  • Lower values (e.g., 0.0 - 0.3) → More deterministic and consistent, good for factual answers.
  • Higher values (e.g., 0.8 - 2.0) → More varied, exploratory, and creative, useful for storytelling and brainstorming.
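
Mechanically, temperature divides the model's raw scores (logits) before they are turned into probabilities. The sketch below illustrates the effect with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                      # made-up raw scores
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
```

At temperature 0.2 the top token dominates almost completely; at 2.0 the probability mass spreads across all three candidates.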

3. Top-p (Nucleus Sampling)

  • Controls token selection by restricting the model's choices to the smallest set of likely tokens whose combined probability reaches the threshold.
  • Lower values (e.g., 0.2 - 0.5) → More precise and focused responses.
  • Higher values (e.g., 0.8 - 1.0) → More diverse responses, allowing less probable words to appear.
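
In other words, top-p keeps the most likely candidate tokens until their combined probability reaches the threshold. A minimal sketch with made-up probabilities:

```python
def nucleus_filter(probs, top_p):
    """Keep the most likely tokens until their cumulative probability reaches top_p."""
    ranked = sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)
    kept, cumulative = [], 0.0
    for index, probability in ranked:
        kept.append(index)
        cumulative += probability
        if cumulative >= top_p:
            break
    return kept

token_probs = [0.5, 0.3, 0.15, 0.05]  # made up, already sorted for readability
nucleus_filter(token_probs, 0.75)  # a low top-p keeps only the two most likely tokens
nucleus_filter(token_probs, 0.90)  # a high top-p lets a less probable token through
```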

4. Presence Penalty (Discourages Repetition)

  • Encourages AI to generate new ideas instead of repeating previous content.
  • Positive values (e.g., 1.0 - 2.0) → AI avoids reusing words and phrases, improving variety.
  • Negative values (e.g., -2.0) → AI is allowed to repeat words more freely.

5. Frequency Penalty (Reduces Overused Words)

  • Penalizes tokens based on how frequently they have already appeared in the output.
  • Higher values (e.g., 1.0 - 2.0) → Limits word repetition, making the response more unique.
  • Negative values (e.g., -2.0) → Allows repeated words when repetition is desirable.
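
The two penalties combine in a simple way in several common chat APIs: a flat deduction for any token that has appeared at all, plus a deduction scaled by how often it appeared. The formula below is that common scheme, shown as a sketch; it is not documented as Zenphi's exact implementation:

```python
def penalized_score(logit, count, presence_penalty, frequency_penalty):
    """Lower a token's score based on whether and how often it already appeared."""
    appeared = 1 if count > 0 else 0
    return logit - presence_penalty * appeared - frequency_penalty * count

# A token that already appeared three times loses ground to fresh alternatives:
penalized_score(2.0, 3, presence_penalty=1.0, frequency_penalty=0.5)  # → -0.5
```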

6. Output in JSON

  • Forces the AI to format its output in JSON, making it machine-readable for structured applications.
  • Useful for automations, APIs, and integrations, but not all models support JSON formatting.
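
When JSON output is enabled, downstream workflow steps can parse the response directly. The field names below are illustrative; the actual structure depends on what your prompt asks for:

```python
import json

# Hypothetical JSON-formatted output from the action.
generated_output = '{"summary": "Password reset steps", "steps": ["Open settings", "Click Reset"]}'

data = json.loads(generated_output)
first_step = data["steps"][0]  # structured fields can feed later automation steps
```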

How These Settings Work Together:

  • Temperature + Top-p → Control randomness vs. focus in token selection.
  • Presence Penalty + Frequency Penalty → Balance word diversity vs. repetition.
  • Maximum Length → Determines output length and token limits.

With these settings, you can fine-tune AI behavior for precise, controlled, or creative responses, depending on your use case.
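
Putting it together, a config tuned for factual, low-repetition answers might look like the sketch below; the field names mirror the options above and are not an official Zenphi schema:

```python
# Hypothetical generation config for precise, low-repetition answers.
generation_config = {
    "maximum_length": 512,     # token budget shared by prompt, examples, and output
    "temperature": 0.3,        # low randomness for factual answers
    "top_p": 0.9,              # still allow some diversity among likely tokens
    "presence_penalty": 0.6,   # nudge the model toward new ideas
    "frequency_penalty": 0.4,  # discourage overused words
    "output_in_json": False,   # enable only when a machine-readable response is needed
}
```

For brainstorming or storytelling, you would instead raise the temperature and lower both penalties.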



Outputs

The "Generate Text with Reasoning Generative AI (Preview)" action provides the following outputs:

1. Generated Output

This is the final response generated by the AI model based on the provided prompt, system instructions, and generation settings.

  • The response can be text-based or, if JSON output is enabled, structured data.
  • The quality and length of the output depend on the temperature, top-p, and other generation config settings.

2. Reasoning Process

Displays the step-by-step reasoning process (also known as Chain of Thought reasoning) that the AI model followed to generate the output.

  • This can be useful for understanding AI decision-making, debugging responses, or ensuring transparency in AI-generated content.
  • Note: Not all AI models support this feature. If unavailable, this field may be empty.

3. Model Name

The name of the AI model used for processing the request.

  • This helps track which model was used, especially if multiple models are available for selection.

4. Total Tokens

The total number of tokens consumed during execution, including:

  • Prompt tokens (user input + system instructions)
  • Example tokens (if provided)
  • Generated response tokens (the AI's output)

This count is useful for monitoring resource usage and optimizing requests for better efficiency.
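
The accounting can be sketched with made-up numbers:

```python
# Illustrative token accounting; the counts are made up.
prompt_tokens = 120    # user input + system instructions
example_tokens = 80    # few-shot examples, if provided
response_tokens = 250  # the AI's generated output

total_tokens = prompt_tokens + example_tokens + response_tokens  # 450
```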

Step-by-Step Demonstration

The walkthrough below shows how to configure and run the action in a real-world scenario:

  • Scenario: A business automates customer support replies using AI.
  • Step 1: Choose the model (e.g., GPT-01 Mini).
  • Step 2: Add system instructions (e.g., “Respond professionally and concisely”).
  • Step 3: Input a prompt (e.g., “How can I reset my password?”).
  • Step 4: Configure generation settings (Temperature, Max Length, etc.).
  • Step 5: Run the action and review the Generated Output and Reasoning Process.
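
Gathered into one place, the five steps correspond to a request like the hypothetical payload below (field names are illustrative, not Zenphi's actual schema):

```python
# The walkthrough above as a single hypothetical request payload.
request = {
    "model": "GPT-01 Mini",                                            # Step 1
    "system_instructions": "Respond professionally and concisely.",    # Step 2
    "prompt": "How can I reset my password?",                          # Step 3
    "generation_config": {"temperature": 0.2, "maximum_length": 300},  # Step 4
}
# Step 5: run the action, then review "Generated Output" and "Reasoning Process".
```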

Best Practices

Follow these guidelines to get the best results:

  • Use Clear Prompts → More specific inputs yield better responses.
  • Fine-tune Temperature & Top-P → Control creativity vs. determinism.
  • Leverage System Instructions → Ensure consistency in AI-generated content.
  • Monitor Token Usage → Optimize response length to avoid unnecessary costs.