1. Scope and Relationship to Other Terms
These Model Terms supplement the Nebulons AI Terms of Use and apply when you access or use Nebulons AI models, inference endpoints, agent capabilities, API features, structured output tools, evaluation environments, or other model-driven functionality. If you are using such services under an enterprise or negotiated agreement, those contract terms will control to the extent of any inconsistency.
These Model Terms are intended to address model-specific risks and responsibilities, including output usage, safety controls, operational restrictions, evaluation practices, and protection of Nebulons AI model assets.
2. Access Rights and Permitted Use
Subject to compliance with applicable terms, Nebulons AI grants you a limited, non-exclusive, non-transferable, revocable right to access and use the model services for your internal business, development, research, or authorized customer-facing purposes. The scope of permitted use may be limited by service documentation, usage limits, order forms, account settings, or feature-specific notices.
Your right to use the model services does not include any right to receive model weights, architecture, training materials, source code, safety systems, or internal evaluation methods unless Nebulons AI expressly agrees otherwise in writing.
3. Restrictions on Model Access and Extraction
You may not use the services in a way that attempts to derive, replicate, extract, infer, or re-create model parameters, training data, internal prompts, latent representations, system rules, or other protected aspects of the services. You may not use the services to build or improve competing models through extraction or other uses prohibited by these Model Terms.
Without limiting the foregoing, you may not:
- Use automated or adversarial methods to exfiltrate prompts, policies, hidden instructions, weights, embeddings, or safety boundaries.
- Bypass rate limits, safety filters, or product restrictions to expand output capability beyond intended service behavior.
- Use output in a way designed to train or fine-tune another model where such use would violate law, contract, service restrictions, or product notices.
- Misrepresent access rights, impersonate another customer, or route traffic through unauthorized shared credentials or pooled accounts.
4. Inputs, Outputs, and Your Responsibilities
You are responsible for the prompts, inputs, files, tools, system instructions, and downstream actions that you or your users send to the model services. You are also responsible for determining whether outputs are appropriate for the intended use and for implementing any human review, policy checks, or domain-specific controls that your use case requires.
Model outputs may vary by prompt design, system state, product configuration, connected tools, or service updates. Outputs may contain errors, omissions, or content that is unsuitable for a specific factual, legal, medical, financial, safety, or operational purpose. You must not treat outputs as guaranteed to be accurate or complete, and you must not rely on them without review where the use case requires higher assurance.
5. High-Impact and Safety-Critical Use
If you use model services in workflows that can affect safety, rights, regulated operations, industrial environments, or material business decisions, you must implement human oversight, validation procedures, escalation paths, and appropriate technical and organizational controls. Model services should be one part of a controlled system, not the sole basis for high-impact action, unless expressly approved in writing for the specific use case.
You remain responsible for designing safeguards, testing performance, handling failure modes, and satisfying legal and operational obligations in your deployment environment.
6. Safety, Monitoring, and Abuse Prevention
Nebulons AI may monitor usage patterns, prompts, request metadata, outputs, and related signals to protect the services, prevent abuse, investigate incidents, enforce policy, and maintain safety and reliability. Monitoring may be performed through automated systems, human review where permitted, or a combination of the two.
We may introduce, adjust, or enforce usage caps, moderation systems, risk controls, allow-lists, deny-lists, access gating, or feature restrictions where reasonably necessary to protect service integrity or respond to safety concerns.
7. Model Updates, Deprecations, and Compatibility
Nebulons AI may update, replace, fine-tune, deprecate, or retire models, endpoints, or model families over time. Such changes may affect quality, latency, output style, supported parameters, tokenization behavior, safety behavior, pricing, or documented capabilities.
You are responsible for testing your implementation when a model version changes and for maintaining sufficient operational resilience to handle output variation, model retirement, or service migration.
8. Third-Party Tools, Connectors, and Model-Adjacent Features
Some model-enabled workflows may call third-party tools, retrieve external data, or depend on external systems. Nebulons AI does not control third-party systems and is not responsible for their availability, accuracy, legality, or separate terms.
If you enable tool use, actions, or external connectors, you are responsible for ensuring that those workflows are properly permissioned, appropriately reviewed, and suitable for the consequences that may result.
9. Evaluation, Benchmarking, and Public Claims
You may internally evaluate the services for ordinary commercial and technical decision-making. However, you may not publish misleading benchmark results, manipulate tests to produce distorted impressions, or publicly present performance claims in a way that misrepresents the capabilities, limitations, or intended use of the services.
If you publish a technical comparison or benchmark involving Nebulons AI services, you must use a fair methodology, accurately describe test conditions, and avoid implying sponsorship, endorsement, or product guarantees.
10. Suspension and Model-Specific Enforcement
Nebulons AI may suspend or restrict access to model services if we reasonably believe that a model service is being used in a way that creates legal, safety, security, abuse, operational, or reputational risk. We may also disable parameters, tools, or endpoints that appear to be misused or that require remediation.
Where reasonably practicable, we will provide notice and an opportunity to remediate. We may act immediately, without notice, where delay could create harm, compromise service integrity, or impede a lawful investigation.
11. Ownership, Disclaimers, and Liability
Except as otherwise stated in a separate written agreement, Nebulons AI retains all rights in the models, inference systems, safety infrastructure, developer tooling, and service technology used to provide model services. Any rights you may have in your inputs or outputs are subject to applicable law, third-party rights, and the terms governing the relevant service.
All model services are subject to the disclaimers, warranty exclusions, and liability limits set out in the Terms of Use or any controlling enterprise agreement. Those provisions apply fully to model outputs, tool calls, automated actions, and derivative workflows built on the services.
12. Changes and Contact
Nebulons AI may update these Model Terms to reflect changes in model functionality, safety practices, risk controls, legal requirements, or business operations. Updated terms become effective on the date stated in the revised version unless different timing is required by law or contract.
Questions about these Model Terms may be directed to Nebulons AI through the contact channels made available on our website or through your existing service relationship.