Best Free AI Models of 2026 - Page 8

Use the comparison tool below to compare the top Free AI Models on the market. You can filter results by user reviews, pricing, features, platform, region, support options, integrations, and more.

  • 1
    Devstral Small 2 Reviews
    Devstral Small 2 is the streamlined, 24-billion-parameter member of Mistral AI's coding-focused model lineup, released under the permissive Apache 2.0 license for both local deployment and API use. Alongside its larger counterpart, Devstral 2, it brings "agentic coding" features to environments with limited compute, and its 256K-token context window lets it read and modify entire codebases. Scoring approximately 68.0% on SWE-Bench Verified, a standard software-engineering benchmark, Devstral Small 2 is competitive with open-weight models that are significantly larger. Its compact size and efficient architecture let it run on a single GPU or even in CPU-only configurations, making it a practical choice for developers, small teams, or hobbyists without access to data-center resources. Despite its smaller size, it retains key capabilities of its larger variants, such as reasoning across multiple files and managing dependencies, so users still get robust coding assistance.
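    A minimal sketch of querying a locally served Devstral Small 2 instance through an OpenAI-compatible endpoint (the kind exposed by servers such as vLLM or llama.cpp); the base URL, API key, and model id below are placeholders, not official values.

```python
# Sketch: chat completion against a locally hosted Devstral Small 2 deployment.
# base_url, api_key, and the model id are placeholders for illustration only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="devstral-small-2",  # placeholder model id for the local deployment
    messages=[
        {"role": "system", "content": "You are a coding assistant working inside a repository."},
        {"role": "user", "content": "Refactor utils/date.py to use timezone-aware datetime objects."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```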
  • 2
    DeepCoder Reviews

    DeepCoder

    Agentica Project

    Free
    DeepCoder is a fully open-source model for code reasoning and generation, developed in partnership between the Agentica Project and Together AI. Built on DeepSeek-R1-Distill-Qwen-14B and fine-tuned with distributed reinforcement learning, it reaches 60.6% accuracy on LiveCodeBench, an 8% improvement over its base model. That performance rivals proprietary models such as o3-mini (2025-01-31, low) and o1, while using only 14 billion parameters. Training ran for 2.5 weeks on 32 H100 GPUs over a curated dataset of roughly 24,000 coding problems drawn from verified sources, including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions; each problem required a valid solution and at least five unit tests to keep the reinforcement-learning signal reliable. To handle long-range context, DeepCoder also uses iterative context lengthening and overlong filtering, keeping it effective on complex coding tasks.
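    Since the weights are open, a local run via Hugging Face transformers is the natural starting point; the repository id below is an assumption, so check the Agentica Project's model card for the exact name and recommended generation settings.

```python
# Sketch: local generation with DeepCoder through transformers.
# The repo id is assumed, not confirmed; substitute the official one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentica-org/DeepCoder-14B-Preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Write a Python function that returns the longest increasing subsequence of a list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```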
  • 3
    DeepSWE Reviews

    DeepSWE

    Agentica Project

    Free
    DeepSWE is an innovative and fully open-source coding agent that utilizes the Qwen3-32B foundation model, trained solely through reinforcement learning (RL) without any supervised fine-tuning or reliance on proprietary model distillation. Created with rLLM, which is Agentica’s open-source RL framework for language-based agents, DeepSWE operates as a functional agent within a simulated development environment facilitated by the R2E-Gym framework. This allows it to leverage a variety of tools, including a file editor, search capabilities, shell execution, and submission features, enabling the agent to efficiently navigate codebases, modify multiple files, compile code, run tests, and iteratively create patches or complete complex engineering tasks. Beyond simple code generation, DeepSWE showcases advanced emergent behaviors; when faced with bugs or new feature requests, it thoughtfully reasons through edge cases, searches for existing tests within the codebase, suggests patches, develops additional tests to prevent regressions, and adapts its cognitive approach based on the task at hand. This flexibility and capability make DeepSWE a powerful tool in the realm of software development.
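    The loop below is a purely illustrative sketch of how such an agent cycles through its tools (file editing, search, shell execution, submission); the environment and model interfaces are hypothetical stand-ins, not the actual rLLM or R2E-Gym APIs.

```python
# Illustrative observe-think-act loop for a DeepSWE-style coding agent.
# `env` and `model` are hypothetical interfaces used only to show the control flow.
def run_agent(env, model, max_steps=50):
    observation = env.reset()  # e.g. the issue description plus repository layout
    for _ in range(max_steps):
        # The model picks one of the available tools: edit_file, search, run_shell, submit.
        action = model.next_action(observation)
        observation, done = env.step(action)  # apply the edit, run tests, etc.
        if done:  # the agent called `submit` with a candidate patch
            break
    return env.final_patch()
```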
  • 4
    DeepScaleR Reviews

    DeepScaleR

    Agentica Project

    Free
    DeepScaleR is a 1.5-billion-parameter language model, fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B using distributed reinforcement learning with a schedule that incrementally expands its context window from 8,000 to 24,000 tokens during training. It was trained on roughly 40,000 carefully selected mathematical problems drawn from competition datasets, including AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. Achieving 43.1% accuracy on AIME 2024, DeepScaleR improves on its base model by about 14.3 percentage points and even outperforms the much larger proprietary o1-preview model. It also performs strongly on benchmarks such as MATH-500, AMC 2023, Minerva Math, and OlympiadBench, indicating that small models fine-tuned with reinforcement learning can rival or surpass far larger ones on complex reasoning tasks.
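    The staged schedule below sketches the iterative context-lengthening idea described above, where the allowed rollout length grows as training progresses; the stage boundaries and step counts are illustrative, not the authors' published schedule.

```python
# Sketch: iterative context lengthening during RL training.
# Stage sizes and steps_per_stage are illustrative values only.
CONTEXT_STAGES = [8_000, 16_000, 24_000]  # tokens allowed per rollout, per stage

def max_rollout_tokens(training_step: int, steps_per_stage: int = 1_000) -> int:
    stage = min(training_step // steps_per_stage, len(CONTEXT_STAGES) - 1)
    return CONTEXT_STAGES[stage]

print(max_rollout_tokens(0))      # 8000  — early training stays short and cheap
print(max_rollout_tokens(2_500))  # 24000 — later stages allow long reasoning traces
```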
  • 5
    GLM-4.6V Reviews
    GLM-4.6V is an advanced, open-source multimodal vision-language model in the Z.ai (GLM-V) family, engineered for reasoning, perception, and action. It is available in two configurations: a full version with 106 billion parameters suited to cloud environments or high-performance clusters, and a streamlined "Flash" variant with 9 billion parameters intended for local deployment or low-latency scenarios. With a native context window of up to 128,000 tokens used during training, GLM-4.6V can work over long documents and large multimodal inputs. A standout feature is built-in Function Calling: the model accepts visual media such as images, screenshots, and documents directly as input, without manual text conversion, reasons about the visual content, and can then issue tool calls, merging visual perception with actionable results. This versatility supports applications such as interleaved image-and-text generation, combining document comprehension with summarization, or producing responses that include image annotations.
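    A minimal sketch of pairing an image input with a tool definition through an OpenAI-compatible chat API, in line with the Function Calling capability described above; the base URL, model id, and tool schema are placeholders, so consult Z.ai's documentation for the real endpoint and names.

```python
# Sketch: image input plus function calling against a GLM-4.6V-style endpoint.
# base_url, model id, and the record_invoice tool are illustrative placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="https://example-zai-endpoint/v1", api_key="YOUR_KEY")

with open("invoice.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

tools = [{
    "type": "function",
    "function": {
        "name": "record_invoice",  # hypothetical tool the model may decide to call
        "parameters": {
            "type": "object",
            "properties": {"vendor": {"type": "string"}, "total": {"type": "number"}},
            "required": ["vendor", "total"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6v",  # placeholder model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text", "text": "Read this invoice and record the vendor and total."},
        ],
    }],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```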
  • 6
    GLM-4.1V Reviews
    GLM-4.1V is an advanced vision-language model offering robust, streamlined multimodal reasoning and understanding across images, text, and documents. The 9-billion-parameter version, GLM-4.1V-9B-Thinking, builds on GLM-4-9B and is improved through a training approach that uses Reinforcement Learning with Curriculum Sampling (RLCS). The model supports a 64K-token context window and high-resolution inputs, handling images up to 4K resolution at any aspect ratio. This lets it tackle intricate tasks such as optical character recognition, image captioning, chart and document parsing, video analysis, scene comprehension, and GUI-agent workflows, including interpreting screenshots and recognizing UI elements. In benchmarks at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved the highest performance on 23 of 28 evaluated tasks, marking a substantial step forward in integrating visual and textual data.
  • 7
    GLM-4.5V-Flash Reviews
    GLM-4.5V-Flash is a vision-language model that is open source and specifically crafted to integrate robust multimodal functionalities into a compact and easily deployable framework. It accommodates various types of inputs including images, videos, documents, and graphical user interfaces, facilitating a range of tasks such as understanding scenes, parsing charts and documents, reading screens, and analyzing multiple images. In contrast to its larger counterparts, GLM-4.5V-Flash maintains a smaller footprint while still embodying essential visual language model features such as visual reasoning, video comprehension, handling GUI tasks, and parsing complex documents. This model can be utilized within “GUI agent” workflows, allowing it to interpret screenshots or desktop captures, identify icons or UI components, and assist with both automated desktop and web tasks. While it may not achieve the performance enhancements seen in the largest models, GLM-4.5V-Flash is highly adaptable for practical multimodal applications where efficiency, reduced resource requirements, and extensive modality support are key considerations. Its design ensures that users can harness powerful functionalities without sacrificing speed or accessibility.
  • 8
    GLM-4.5V Reviews
    GLM-4.5V is an evolution of the GLM-4.5-Air model, built on a Mixture-of-Experts (MoE) architecture with 106 billion total parameters, of which 12 billion are active during inference. It delivers top-tier performance among open-source vision-language models (VLMs) of comparable scale, with strong results across 42 public benchmarks spanning images, videos, documents, and GUI interactions. Its multimodal capabilities cover image reasoning tasks such as scene understanding, spatial recognition, and multi-image analysis, as well as video comprehension tasks including segmentation and event recognition. It also parses complex charts and lengthy documents, supports GUI-agent workflows such as screen reading and desktop automation, and provides accurate visual grounding by locating objects and generating bounding boxes. A "Thinking Mode" switch lets users choose between rapid responses and more deliberate reasoning depending on the situation, making GLM-4.5V both versatile and adaptable to different needs.
  • 9
    SAM Audio Reviews
    SAM Audio represents a cutting-edge advancement in AI technology aimed at precise audio segmentation and editing. This innovative tool empowers users to separate individual sounds from intricate audio compositions by utilizing intuitive prompts that reflect natural thought processes regarding sound. Users can easily input descriptive phrases like “eliminate dog barking” or “retain only the vocals,” interact with objects in a video to extract their corresponding audio, or highlight specific time intervals where desired sounds are present, all within a cohesive platform. Accessible through Meta’s Segment Anything Playground, SAM Audio allows users to upload their own audio or video files to immediately explore its features. Additionally, it can be downloaded for implementation in personalized audio projects and research endeavors. Unlike conventional audio editing tools that are limited to specific tasks, SAM Audio excels in accommodating a variety of prompts and accurately handling diverse real-world soundscapes, making it a versatile choice for audio manipulation. This level of flexibility and user-friendliness sets it apart from traditional solutions in the industry.
  • 10
    MiniMax-M2.1 Reviews
    MiniMax-M2.1 is a state-of-the-art open-source AI model built specifically for agent-based development and real-world automation. It focuses on delivering strong performance in coding, tool calling, and long-term task execution. Unlike closed models, MiniMax-M2.1 is fully transparent and can be deployed locally or integrated through APIs. The model excels in multilingual software engineering tasks and complex workflow automation. It demonstrates strong generalization across different agent frameworks and development environments. MiniMax-M2.1 supports advanced use cases such as autonomous coding, application building, and office task automation. Benchmarks show significant improvements over previous MiniMax versions. The model balances high reasoning ability with stability and control. Developers can fine-tune or extend it for specialized agent workflows. MiniMax-M2.1 empowers teams to build reliable AI agents without vendor lock-in.
  • 11
    MiMo-V2-Flash Reviews

    MiMo-V2-Flash

    Xiaomi Technology

    Free
    MiMo-V2-Flash is a large language model from Xiaomi built on a Mixture-of-Experts (MoE) architecture, combining strong performance with efficient inference. Of its 309 billion total parameters, only 15 billion are activated per inference step, balancing reasoning quality against computational cost. The model handles long contexts well, making it suitable for long-document comprehension, code generation, and multi-step workflows. Its hybrid attention mechanism interleaves sliding-window and global attention layers, reducing memory consumption while preserving long-range dependencies. A Multi-Token Prediction (MTP) design further speeds up inference by predicting several tokens per step rather than one at a time. MiMo-V2-Flash reaches generation rates of up to roughly 150 tokens per second and is optimized for applications that demand continuous reasoning and multi-turn interactions.
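    A small, purely conceptual sketch of the hybrid attention idea mentioned above: some layers attend only within a causal sliding window while others attend globally. The window size and layer pattern are illustrative, not MiMo-V2-Flash's actual configuration.

```python
# Conceptual hybrid attention masks: windowed layers interleaved with global layers.
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # True where query position i may attend to key position j (causal + windowed).
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

def global_causal_mask(seq_len: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return j <= i

# Illustrative pattern: one global layer followed by three windowed layers, repeated.
masks = [global_causal_mask(1024) if layer % 4 == 0 else sliding_window_mask(1024, 128)
         for layer in range(32)]
```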
  • 12
    Xiaomi MiMo Reviews

    Xiaomi MiMo

    Xiaomi Technology

    Free
    The Xiaomi MiMo API open platform serves as a developer-centric interface that allows for the integration and access of Xiaomi’s MiMo AI model family, which includes various reasoning and language models like MiMo-V2-Flash, enabling the creation of applications and services via standardized APIs and cloud endpoints. This platform empowers developers to incorporate AI-driven functionalities such as conversational agents, reasoning processes, code assistance, and search-enhanced tasks without the need to handle the complexities of model infrastructure. It features RESTful API access complete with authentication, request signing, and well-structured responses, allowing software to send user queries and receive generated text or processed results in a programmatic manner. The platform also supports essential operations including text generation, prompt management, and model inference, facilitating seamless interactions with MiMo models. Furthermore, it provides comprehensive documentation and onboarding resources, enabling teams to effectively integrate the latest open-source large language models from Xiaomi, which utilize innovative Mixture-of-Experts (MoE) architectures to enhance performance and efficiency. Overall, this open platform significantly lowers the barriers for developers looking to harness advanced AI capabilities in their projects.
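    A minimal sketch of what a programmatic request against such a platform might look like; the endpoint URL, payload shape, and header scheme below are assumptions for illustration, and the platform's own documentation defines the real contract.

```python
# Sketch: REST call to a MiMo-style chat endpoint with bearer authentication.
# URL, model id, and payload fields are placeholders, not the documented API.
import requests

API_KEY = "YOUR_API_KEY"
url = "https://api.example-mimo-platform.com/v1/chat/completions"  # placeholder URL

payload = {
    "model": "mimo-v2-flash",  # placeholder model id
    "messages": [{"role": "user", "content": "Summarize these meeting notes in three bullets."}],
}
headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

resp = requests.post(url, json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json())
```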
  • 13
    HunyuanWorld Reviews
    HunyuanWorld-1.0 is an open-source AI framework and generative model from Tencent Hunyuan for creating immersive, interactive 3D environments from text or images, merging the strengths of 2D and 3D generation into a single process. At its core is a semantically layered 3D mesh representation that uses 360° panoramic world proxies to decompose and reconstruct scenes with geometric fidelity and semantic understanding, producing varied, coherent spaces that users can navigate and engage with. Where conventional 3D generation techniques often struggle with limited diversity or ineffective data representations, HunyuanWorld-1.0 combines panoramic proxy creation, hierarchical 3D reconstruction, and semantic layering to deliver high visual quality and structural soundness, along with exportable meshes that fit into standard graphics workflows.
  • 14
    Hailuo 2.3 Reviews
    Hailuo 2.3 represents a state-of-the-art AI video creation model accessible via the Hailuo AI platform, enabling users to effortlessly produce short videos from text descriptions or still images, featuring seamless motion, authentic expressions, and a polished cinematic finish. This model facilitates multi-modal workflows, allowing users to either narrate a scene in straightforward language or upload a reference image, subsequently generating vibrant and fluid video content within seconds. It adeptly handles intricate movements like dynamic dance routines and realistic facial micro-expressions, showcasing enhanced visual consistency compared to previous iterations. Furthermore, Hailuo 2.3 improves stylistic reliability for both anime and artistic visuals, elevating realism in movement and facial expressions while ensuring consistent lighting and motion throughout each clip. A Fast mode variant is also available, designed for quicker processing and reduced costs without compromising on quality, making it particularly well-suited for addressing typical challenges encountered in ecommerce and marketing materials. This advancement opens up new possibilities for creative expression and efficiency in video production.
  • 15
    TranslateGemma Reviews
    TranslateGemma is a collection of open machine translation models from Google built on the Gemma 3 architecture, providing high-quality AI translation across 55 languages while remaining efficient and widely deployable. Offered at 4B, 12B, and 27B parameters, TranslateGemma packs sophisticated multilingual capability into compact models that can run on mobile devices, consumer laptops, local systems, or cloud infrastructure without sacrificing accuracy; evaluations indicate the 12B variant can exceed larger baseline models while requiring less compute. The models were developed with a two-phase fine-tuning approach that combines high-quality human and synthetic translation data with reinforcement learning to improve translation accuracy across a variety of language families, giving users broad language coverage with fast, reliable translations.
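    A minimal local-inference sketch via Hugging Face transformers; the repository id and the prompt wording are assumptions, since the official model card specifies the exact checkpoint names and any required translation prompt format.

```python
# Sketch: local translation with a TranslateGemma checkpoint through transformers.
# The repo id is assumed; the prompt format here is generic, not the documented one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/translategemma-4b-it"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user",
             "content": "Translate to German: The meeting has been moved to Friday morning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```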
  • 16
    GLM-4.7-Flash Reviews
    GLM-4.7 Flash serves as a streamlined version of Z.ai's premier large language model, GLM-4.7, which excels in advanced coding, logical reasoning, and executing multi-step tasks with exceptional agentic capabilities and an extensive context window. This model, rooted in a mixture of experts (MoE) architecture, is fine-tuned for efficient inference, striking a balance between high performance and optimized resource utilization, thus making it suitable for deployment on local systems that require only moderate memory while still showcasing advanced reasoning, programming, and agent-like task handling. Building upon the advancements of its predecessor, GLM-4.7 brings forth enhanced capabilities in programming, reliable multi-step reasoning, context retention throughout interactions, and superior workflows for tool usage, while also accommodating lengthy context inputs, with support for up to approximately 200,000 tokens. The Flash variant successfully maintains many of these features within a more compact design, achieving competitive results on benchmarks for coding and reasoning tasks among similarly-sized models. Ultimately, this makes GLM-4.7 Flash an appealing choice for users seeking powerful language processing capabilities without the need for extensive computational resources.
  • 17
    LFM2.5 Reviews

    LFM2.5

    Liquid AI

    Free
    Liquid AI's LFM2.5 represents an advanced iteration of on-device AI foundation models, engineered to provide high-efficiency and performance for AI inference on edge devices like smartphones, laptops, vehicles, IoT systems, and embedded hardware without the need for cloud computing resources. This new version builds upon the earlier LFM2 framework by greatly enhancing the scale of pretraining and the stages of reinforcement learning, resulting in a suite of hybrid models that boast around 1.2 billion parameters while effectively balancing instruction adherence, reasoning skills, and multimodal functionalities for practical applications. The LFM2.5 series comprises various models including Base (for fine-tuning and personalization), Instruct (designed for general-purpose instruction), Japanese-optimized, Vision-Language, and Audio-Language variants, all meticulously crafted for rapid on-device inference even with stringent memory limitations. These models are also made available as open-weight options, facilitating deployment through platforms such as llama.cpp, MLX, vLLM, and ONNX, thus ensuring versatility for developers. With these enhancements, LFM2.5 positions itself as a robust solution for diverse AI-driven tasks in real-world environments.
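    Since the entry lists llama.cpp among the supported runtimes, here is a minimal on-device inference sketch using llama-cpp-python; the GGUF file name is a placeholder, and Liquid AI's release notes list the actual open-weight artifacts.

```python
# Sketch: on-device chat inference with an LFM2.5 GGUF build via llama-cpp-python.
# The model_path file name is a placeholder; thread count should match the device.
from llama_cpp import Llama

llm = Llama(
    model_path="./lfm2.5-1.2b-instruct-q4_k_m.gguf",  # placeholder file name
    n_ctx=4096,    # context window for this session
    n_threads=4,   # tune to the edge device's CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List three checks before deploying a model on-device."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```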
  • 18
    Qwen3-TTS Reviews
    Qwen3-TTS represents an innovative collection of advanced text-to-speech models created by the Qwen team at Alibaba Cloud, released under the Apache-2.0 license, which delivers stable, expressive, and real-time speech output with functionalities like voice cloning, voice design, and precise control over prosody and acoustic features. This suite supports ten prominent languages—Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian—along with various dialect-specific voice profiles, enabling adaptive management of tone, speech rate, and emotional delivery tailored to text semantics and user instructions. The architecture of Qwen3-TTS incorporates efficient tokenization and a dual-track design, facilitating ultra-low-latency streaming synthesis, with the first audio packet generated in approximately 97 milliseconds, making it ideal for interactive and real-time applications. Additionally, the range of models available offers diverse capabilities, such as rapid three-second voice cloning, customization of voice timbres, and voice design based on given instructions, ensuring versatility for users in many different scenarios. This flexibility in design and performance highlights the model's potential for a wide array of applications in both commercial and personal contexts.
  • 19
    Composer 1 Reviews

    Composer 1

    Cursor

    $20 per month
    Composer is an AI model built by Cursor, tailored for software-engineering tasks and offering fast, interactive coding support inside the Cursor IDE, a VS Code-based editor augmented with intelligent automation. The model uses a mixture-of-experts architecture and reinforcement learning (RL) to handle real-world coding challenges in large codebases, delivering swift, context-aware responses that span code modifications, planning, and insights into project structure, tooling, and conventions; in performance assessments it generates roughly four times faster than comparable models. Designed around real development workflows, Composer combines long-context comprehension, semantic search, and scoped tool access (such as file editing and terminal commands) to resolve complex engineering questions with practical, efficient solutions, adapting to different programming environments and providing assistance suited to each project's specific needs.
  • 20
    Ray3.14 Reviews

    Ray3.14

    Luma AI

    $7.99 per month
    Ray3.14 represents the pinnacle of Luma AI’s generative video technology, engineered to produce high-caliber, ready-for-broadcast video at a native resolution of 1080p, while also enhancing speed, efficiency, and reliability. This model is capable of generating video content up to four times faster than its predecessor and does so at approximately one-third of the cost, ensuring superior alignment with user prompts and enhanced motion consistency throughout frames. It inherently accommodates 1080p resolution in essential processes like text-to-video, image-to-video, and video-to-video, removing the necessity for post-production upscaling, thereby making the outputs immediately viable for broadcast, streaming, and digital platforms. Furthermore, Ray3.14 significantly boosts temporal motion accuracy and visual stability, particularly beneficial for animations and intricate scenes, as it effectively resolves issues such as flickering and drift, thus allowing creative teams to quickly adapt and iterate within tight production schedules. In essence, it builds upon the reasoning-driven video generation capabilities introduced by the earlier Ray3 model, pushing the boundaries of what generative video can achieve. This advancement in technology not only streamlines the creative process but also paves the way for innovative storytelling techniques in the digital landscape.
  • 21
    Z-Image Reviews
    Z-Image is a family of open-source image generation foundation models created by Alibaba's Tongyi-MAI team, utilizing a Scalable Single-Stream Diffusion Transformer architecture to produce both photorealistic and imaginative images from textual descriptions with only 6 billion parameters, which enhances its efficiency compared to many larger models while maintaining competitive quality and responsiveness to instructions. This model family comprises several variants, including Z-Image-Turbo, a distilled version designed for rapid inference that achieves results with as few as eight function evaluations and sub-second generation times on compatible GPUs; Z-Image, the comprehensive foundation model tailored for high-fidelity creative outputs and fine-tuning processes; Z-Image-Omni-Base, a flexible base checkpoint aimed at fostering community-driven advancements; and Z-Image-Edit, specifically optimized for image-to-image editing tasks while demonstrating strong adherence to instructions. Each variant of Z-Image serves distinct purposes, catering to a wide range of user needs within the realm of image generation.
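    A minimal text-to-image sketch for the Z-Image-Turbo variant, assuming the checkpoint is published in a diffusers-compatible layout; the repository id is an assumption, and the low step count mirrors the "as few as eight function evaluations" claim above.

```python
# Sketch: few-step text-to-image generation with Z-Image-Turbo via diffusers.
# The repo id is assumed; settings are illustrative, not official defaults.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",  # assumed repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="A photorealistic street market at dusk, warm lantern light",
    num_inference_steps=8,  # Turbo is distilled for very few denoising steps
).images[0]
image.save("market.png")
```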
  • 22
    Step 3.5 Flash Reviews
    Step 3.5 Flash is a cutting-edge open-source foundational language model designed for advanced reasoning and agent-like capabilities, optimized for efficiency. It uses a sparse Mixture-of-Experts (MoE) architecture that activates only about 11 billion of its nearly 196 billion parameters per token, delivering high-density intelligence with quick responsiveness. A 3-way Multi-Token Prediction (MTP-3) mechanism lets it generate hundreds of tokens per second, supporting complex multi-step reasoning and task execution, while a hybrid sliding-window attention method keeps long contexts manageable and reduces computational cost over large documents or codebases. Its performance on reasoning, coding, and agentic tasks is strong, often matching or surpassing much larger proprietary models, and a scalable reinforcement-learning system enables continuous self-improvement.
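    A conceptual sketch of how multi-token prediction can speed up decoding: the model proposes several tokens per step and a verification pass keeps the agreeing prefix. The functions are illustrative stand-ins, not Step 3.5 Flash's actual implementation or API.

```python
# Conceptual MTP-style decoding loop: draft k tokens, verify, accept the agreeing prefix.
# `model.propose_next` and `model.verify` are hypothetical interfaces for illustration.
def mtp_decode(model, prompt_ids, k=3, max_new_tokens=256):
    tokens = list(prompt_ids)
    while len(tokens) - len(prompt_ids) < max_new_tokens:
        draft = model.propose_next(tokens, k)       # k tokens from the MTP heads
        verified = model.verify(tokens, draft)      # main-model predictions over the draft positions
        accepted = common_prefix(draft, verified)   # keep only the tokens both agree on
        tokens.extend(accepted or verified[:1])     # always make at least one token of progress
    return tokens

def common_prefix(a, b):
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out
```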
  • 23
    Qwen3-Coder-Next Reviews
    Qwen3-Coder-Next is an open-weight language model built for coding agents and local development. It excels at advanced coding reasoning, tool use, and long-horizon programming tasks, and its mixture-of-experts architecture balances strong capability with efficient resource use. The model supports software developers, AI system architects, and automated coding pipelines in generating, debugging, and understanding code with deep contextual awareness, and it recovers well from execution errors, making it well suited to autonomous coding agents and development-focused applications. Qwen3-Coder-Next reaches performance comparable to models with far more parameters while activating fewer of its own, enabling economical deployment for complex, evolving programming tasks in both research and production settings.
  • 24
    GLM-OCR Reviews
    GLM-OCR is an advanced multimodal optical character recognition system and an open-source framework that excels in delivering precise, efficient, and thorough document comprehension by integrating textual and visual elements within a cohesive encoder-decoder design inspired by the GLM-V series. This model features a visual encoder that has been pre-trained on extensive image-text datasets alongside a streamlined cross-modal connector that channels information into a GLM-0.5B language decoder. It offers capabilities for layout detection, simultaneous recognition of various regions, and structured outputs for diverse content types, including text, tables, formulas, and intricate real-world document formats. Furthermore, it employs Multi-Token Prediction (MTP) loss and robust full-task reinforcement learning techniques to enhance training efficiency, boost recognition accuracy, and improve generalization across various tasks, leading to remarkable performance on significant document understanding challenges. This innovative approach not only sets new benchmarks but also opens up possibilities for further advancements in the field of document analysis.
  • 25
    RoBERTa Reviews
    RoBERTa builds on the language-masking approach established by BERT, in which the model learns to predict deliberately hidden spans of text within unannotated language samples. Implemented in PyTorch, RoBERTa changes several of BERT's key training choices: it removes the next-sentence prediction objective and trains with larger mini-batches and higher learning rates. These modifications let RoBERTa handle the masked language modeling task more effectively than BERT, which translates into stronger results on a range of downstream applications. RoBERTa was also trained on substantially more data for longer than BERT, drawing on existing unannotated NLP datasets as well as CC-News, a corpus of publicly available news articles, which contributes to its more robust and nuanced language understanding.
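    The masked-language-modeling objective described above can be demonstrated directly with the publicly available roberta-base checkpoint and the transformers fill-mask pipeline; note that RoBERTa's mask token is "<mask>" rather than BERT's "[MASK]".

```python
# Quick fill-mask demonstration with roberta-base via the transformers pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

# Print the top three predictions for the masked position.
for pred in fill("The capital of France is <mask>.")[:3]:
    print(f"{pred['token_str'].strip():>10}  score={pred['score']:.3f}")
```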