Releases: JohnSnowLabs/spark-nlp

6.1.2

20 Aug 13:05

📢 Spark NLP 6.1.2: AutoGGUFReranker and AutoGGUF improvements

We are excited to announce Spark NLP 6.1.2, which enhances AutoGGUF model support and introduces a brand-new reranking annotator based on llama.cpp LLMs. This release also brings fixes for the AutoGGUFVisionModel and improves CUDA compatibility for AutoGGUF models.

🔥 Highlights

New AutoGGUFReranker annotator for advanced LLM-based reranking in information retrieval and retrieval-augmented generation (RAG) pipelines.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • AutoGGUFReranker
    A new annotator for reranking candidate results using AutoGGUF-based LLM embeddings. This enables more accurate ranking in retrieval pipelines, benefiting applications such as search, RAG, and question answering. (Link to notebook)
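A minimal pipeline sketch of how the new annotator could be wired in. The `pretrained()` call, input/output column names, and setter names are assumptions based on the usual Spark NLP annotator API; see the linked notebook for the exact interface:

```python
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import AutoGGUFReranker

# Turn raw candidate texts into Spark NLP documents
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Assumed: the default pretrained reranker scores each candidate document
reranker = AutoGGUFReranker.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("reranked")

pipeline = Pipeline(stages=[document_assembler, reranker])
```

In a RAG setting, the resulting scores would then be used to reorder retrieved passages before prompting a generator model.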

🐛 Bug Fixes

  • Fixed Python initialization errors in AutoGGUFVisionModel.
  • Saving AutoGGUF models now supports more file protocols.
  • Ensured better GPU support for AutoGGUF annotators on a broader range of CUDA devices.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

Installation

Python

pip install spark-nlp==6.1.2

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.1...6.1.2

6.1.1

05 Aug 12:24

📢 Spark NLP 6.1.1: Enhanced LLM Performance and Expanded Data Ingestion Capabilities

We are thrilled to announce Spark NLP 6.1.1, a focused release that delivers significant performance improvements and enhanced functionality for large language models and universal data ingestion. This release continues our commitment to providing state-of-the-art AI capabilities within the native Spark ecosystem, with optimized inference performance and expanded multimodal support.

🔥 Highlights

  • Performance Boost for llama.cpp models: Inference optimizations in AutoGGUFModel and AutoGGUFEmbeddings speed up large language model workflows on GPU.
  • Multimodal Vision Models Restored: The AutoGGUFVisionModel annotator is back with full functionality and the latest SOTA VLMs, enabling sophisticated vision-language processing capabilities.
  • Enhanced Table Processing: New Reader2Table annotator streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
  • Upgraded OpenVINO backend: We upgraded our OpenVINO backend to 2025.02 and added hyperthreading configuration options to maximize performance on multi-core systems.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • Optimized AutoGGUFModel Performance: We improved the inference of llama.cpp models and achieved a 10% performance increase for AutoGGUFModel on GPU.
  • Restored AutoGGUFVisionModel: The multimodal vision model annotator is fully operational again, enabling powerful vision-language processing capabilities. Users can now process images alongside text for comprehensive multimodal AI applications while using the latest SOTA vision-language models.
  • Enhanced Model Compatibility: AutoGGUFModel can now seamlessly load the language model components from pretrained AutoGGUFVisionModel instances, providing greater flexibility in model deployment and usage. (Link to notebook)
  • Robust Model Loading: Pretrained AutoGGUF-based annotators now load despite the inclusion of deprecated parameters, ensuring broader compatibility.
  • Updated Default Models: All AutoGGUF annotators now use more recent and capable pretrained models:
    Annotator            Default pretrained model
    AutoGGUFModel        Phi_4_mini_instruct_Q4_K_M_gguf
    AutoGGUFEmbeddings   Qwen3_Embedding_0.6B_Q8_0_gguf
    AutoGGUFVisionModel  Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf
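With these defaults, calling `pretrained()` without arguments resolves to the models listed above; a sketch (the column names are illustrative):

```python
from sparknlp.annotator import AutoGGUFModel

# With no arguments, pretrained() resolves to the new default model
# (Phi_4_mini_instruct_Q4_K_M_gguf per the table above)
model = AutoGGUFModel.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("completions")
```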

Document Ingestion

  • Reader2Table Annotator: This powerful new annotator provides a streamlined interface for extracting and processing tabular data from various document formats (Link to notebook). It offers:
    • Unified API for interacting with Spark NLP readers
    • Enhanced flexibility through reader-specific configurations
    • Improved maintainability and scalability for data loading workflows
    • Support for multiple formats including HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), Markdown (.md), and CSV (.csv)
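A usage sketch, mirroring the Reader2Doc interface introduced in 6.1.0. The import path and setter names are assumptions; see the linked notebook for the exact interface:

```python
from pyspark.ml import Pipeline
from sparknlp.reader import Reader2Table  # assumed import path

# Extract tables from a directory of Excel files into a DataFrame column
reader2table = Reader2Table() \
    .setContentType("application/vnd.ms-excel") \
    .setContentPath("./excel-files") \
    .setOutputCol("tables")

pipeline = Pipeline(stages=[reader2table])
```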

Performance Optimizations

  • OpenVINO Upgrade: We upgraded the backend to 2025.02 and added comprehensive hyperthreading configuration options for the OpenVINO backend, enabling users to optimize performance on multi-core systems by fine-tuning thread allocation and CPU utilization.

🐛 Bug Fixes

None

❤️ Community Support

  • Slack: For live discussion with the Spark NLP community and the team.
  • GitHub: Bug reports, feature requests, and contributions.
  • Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles.
  • JohnSnowLabs official Medium
  • YouTube: Spark NLP video tutorials.

Installation

Python

pip install spark-nlp==6.1.1

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.0...6.1.1

6.1.0

23 Jul 16:10

📢 Spark NLP 6.1.0: State-of-the-art LLM Capabilities and Advancing Universal Ingestion

We are excited to announce Spark NLP 6.1.0, another milestone for building scalable, distributed AI pipelines! This major release significantly enhances our capabilities for state-of-the-art multimodal and large language models and universal data ingestion. Upgrade Spark NLP to 6.1.0 to improve both usability and performance across ingestion, inference, and multimodal processing pipelines, all within the native Spark ecosystem.

🔥 Highlights

  • Upgraded llama.cpp Integration: We've updated our llama.cpp backend to tag b5932, which supports inference with the latest generation of LLMs.
  • Unified Document Ingestion with Reader2Doc: Introducing a new annotator that streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
  • Support for Phi-4: Spark NLP now natively supports the Phi-4 model, allowing users to leverage its advanced reasoning capabilities.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • llama.cpp Upgrade: Our llama.cpp backend has been upgraded to version b5932. This update enables native inference for the newest LLMs, such as Gemma 3 and Phi-4, ensuring broader model compatibility and improved performance.
    • NOTE: We are still in the process of upgrading our multimodal AutoGGUFVisionModel annotator to the latest backend. This means that this annotator will not be available in this version. As a workaround, please use version 6.0.5 of Spark NLP.
  • Phi-4 Model Support: Spark NLP now integrates the Phi-4 model, an advanced open model trained on a blend of synthetic data, filtered public domain content, and academic Q&A datasets. This integration enables sophisticated reasoning capabilities directly within Spark NLP. (Link to notebook)

Document Ingestion

  • Reader2Doc Annotator: This new annotator provides a simplified, unified interface for integrating various Spark NLP readers. It supports a wide range of formats, including PDFs, plain text, HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), email files (.eml, .msg), and Markdown (.md).
  • Using this annotator, you can read all these different formats into Spark NLP documents, making them directly accessible in all your Spark NLP pipelines. This significantly reduces boilerplate code and enhances flexibility in data loading workflows, making it easier to scale and switch between data sources.

Let's use a code example to see how easy it is to use:

reader2doc = Reader2Doc() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("document")

# other NLP stages in `nlp_stages`

pipeline = Pipeline(stages=[reader2doc] + nlp_stages)

# Reader2Doc reads directly from disk, so fitting and transforming
# on an empty DataFrame is sufficient
model = pipeline.fit(empty_df)
result_df = model.transform(empty_df)

Check out our full example notebook to see it in action.

🐛 Bug Fixes

  • HuggingFace OpenVINO Notebook for Qwen2VL: Addressed and fixed issues in the notebook related to the OpenVINO conversion of the Qwen2VL model, ensuring smoother functionality.

❤️ Community Support

  • Slack: For live discussion with the Spark NLP community and the team.
  • GitHub: Bug reports, feature requests, and contributions.
  • Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles.
  • JohnSnowLabs official Medium
  • YouTube: Spark NLP video tutorials.

Installation

Python

pip install spark-nlp==6.1.0

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.5...6.1.0

6.0.5

10 Jul 07:58

📢 Spark NLP 6.0.5: Enhanced Microsoft Fabric Integration & Markdown Processing

We're thrilled to announce the release of Spark NLP 6.0.5! This version introduces a new Markdown Reader, enabling direct processing of Markdown files into structured Spark DataFrames for more diverse NLP workflows. We have also enhanced Microsoft Fabric integration, allowing for seamless model downloads from Lakehouse containers.

🔥 Highlights

  • New Markdown Reader: Introducing the MarkdownReader for effortlessly parsing Markdown files into structured Spark DataFrames, paving the way for advanced content analysis and NLP on Markdown content.
  • Enhanced Microsoft Fabric Support: Download models directly from Microsoft Fabric Lakehouse containers, streamlining your NLP deployments in the Fabric environment.

🚀 New Features & Enhancements

  • New MarkdownReader Annotator: Introducing the MarkdownReader, a powerful new feature that allows you to read and parse Markdown files directly into a structured Spark DataFrame. This enables efficient processing and analysis of Markdown content for various NLP applications. This reader can also be invoked automatically through our Partition annotator. (Link to notebook)

    partitioner = Partition(content_type="text/markdown").partition(md_directory)
  • Microsoft Fabric Integration: Spark NLP now supports downloading models from Microsoft Fabric Lakehouse containers, providing a more integrated and efficient workflow for users leveraging Microsoft Fabric. This enhancement ensures smoother model access and deployment within the Fabric ecosystem. For example, you can define the path to our pretrained models in Spark like so:

    from pyspark import SparkConf
    
    conf = SparkConf()
    conf.set("spark.jsl.settings.pretrained.cache_folder", "abfss://my_workspace@onelake.dfs.fabric.microsoft.com/lakehouse_folder.Lakehouse/Files/my_models")

🐛 Bug Fixes

We performed crucial maintenance updates to all of our example notebooks, ensuring that they are reproducible and properly displayed in GitHub.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.5

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.5

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.5

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.5

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.5

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.4...6.0.5

6.0.4

30 Jun 10:50

📢 Spark NLP 6.0.4: MiniLMEmbeddings, DataFrame Optimization, and Enhanced PDF Processing

We are excited to announce the release of Spark NLP 6.0.4! This version brings advancements in text embeddings with the introduction of the MiniLM family, Spark DataFrame optimizations, and enhanced PDF document parsing. Upgrade to 6.0.4 to leverage these cutting-edge features and expand your NLP capabilities at scale.

Stay updated with our latest examples and tutorials by visiting our Medium - Spark NLP blog!

🔥 Highlights

  • Introducing MiniLMEmbeddings: Support for the efficient and powerful MiniLMEmbeddings models, providing state-of-the-art text representations.
  • New DataFrameOptimizer: A new DataFrameOptimizer transformer to streamline and optimize Spark DataFrame operations, offering configurable repartitioning, caching, and persistence options.
  • Advanced PDF Reader Features: Enhancements to the PDF Reader with extractCoordinates for spatial metadata, normalizeLigatures for improved text consistency, and a new exception column for enhanced fault tolerance.

🚀 New Features & Enhancements

Advanced Text Embeddings

This release introduces a new family of efficient text embedding models:

  • MiniLMEmbeddings: Support for the MiniLMEmbeddings annotator, enabling the use of MiniLM models for generating highly efficient and effective sentence embeddings. These models are designed to provide strong performance while being significantly smaller and faster than larger alternatives, making them ideal for a wide range of NLP tasks requiring compact and powerful text representations. (Link to notebook)
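The annotator follows the standard Spark NLP embeddings API; a sketch (the default model resolved by `pretrained()` and the column names are illustrative):

```python
from sparknlp.annotator import MiniLMEmbeddings

# Load the default pretrained MiniLM model and embed documents
embeddings = MiniLMEmbeddings.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("minilm_embeddings")
```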

Spark DataFrame Optimization

  • DataFrameOptimizer: Introducing the new DataFrameOptimizer transformer, designed to enhance the performance and manageability of Spark DataFrames within your NLP pipelines. (Link to notebook)
    • Configurable Repartitioning: Allows for automatic repartitioning of DataFrames, ensuring optimal data distribution for downstream processing.
    • Optional Caching: Supports DataFrame caching (doCache) to significantly speed up iterative computations.
    • Persistent Output: Adds robust support for persisting DataFrames to disk in various formats (csv, json, parquet) with custom writer options via outputOptions.
    • Schema Preservation: Efficiently preserves the original DataFrame schema, making it a seamless utility for complex Spark NLP pipelines.
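A configuration sketch; the import path and setter names below are assumptions derived from the parameter names mentioned above (`doCache`, `outputOptions`), so consult the linked notebook for the exact API:

```python
from sparknlp.base import DataFrameOptimizer  # assumed import path

# Repartition, cache, and persist the optimized DataFrame to Parquet
optimizer = DataFrameOptimizer() \
    .setNumPartitions(8) \
    .setDoCache(True) \
    .setOutputOptions({"format": "parquet", "path": "/tmp/optimized_df"})
```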

Enhanced PDF Document Processing

The PDF Reader and PdfToText transformer have been significantly improved for more comprehensive and fault-tolerant document parsing. (Link to notebook)

  • Spatial Metadata Extraction (extractCoordinates): A new configurable parameter extractCoordinates in PdfToText and the PDF Reader. When enabled, this outputs detailed spatial metadata (text position and dimensions) for each character in the PDF.
  • Ligature Normalization (normalizeLigatures): When extractCoordinates is enabled, the normalizeLigatures option ensures that ligature characters (e.g., ﬁ, ﬂ, œ) are automatically normalized to their decomposed forms (fi, fl, oe).
  • Fault Tolerance with Exception Column: A new exception output column has been introduced to capture and log any processing errors encountered while handling individual PDF documents.
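A sketch of enabling the new options; the import path and setter names are assumptions derived from the parameter names above (`extractCoordinates`, `normalizeLigatures`):

```python
from sparknlp.reader.pdf_to_text import PdfToText  # assumed import path

# Emit per-character position metadata and decompose ligatures
pdf_to_text = PdfToText() \
    .setExtractCoordinates(True) \
    .setNormalizeLigatures(True)
```

Any per-document parsing errors would then appear in the new exception column instead of failing the whole job.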

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.4

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.4

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.4

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.4

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.4

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.3...6.0.4

6.0.3

11 Jun 15:23

📢 Spark NLP 6.0.3: Multimodal E5-V Embeddings and Enhanced Document Partitioning

We are excited to announce the release of Spark NLP 6.0.3! This version introduces significant advancements in multimodal capabilities and further refines document processing workflows. Upgrade to 6.0.3 to leverage these cutting-edge features and expand your NLP and vision task capabilities at scale.

🔥 Highlights

  • Introducing E5-V Universal Multimodal Embeddings: Support for E5VEmbeddings, enabling universal multimodal embeddings with Multimodal Large Language Models (MLLMs). It can express semantic similarity between texts, images, or a combination of both.
  • Enhanced Document Partitioning: Improvements to the Partition and PartitionTransformer annotators with new character and title-based chunking strategies.
  • New XML Reader: Added sparknlp.read().xml() and integrated XML support into the Partition annotator for streamlined XML document processing.

🚀 New Features & Enhancements

E5-V Multimodal Embeddings

This release further boosts Spark NLP's multimodal processing power with the integration of E5-V.

  • E5VEmbeddings is designed to adapt MLLMs for achieving universal multimodal embeddings. It leverages MLLMs with prompts to effectively bridge the modality gap between different types of inputs, demonstrating strong performance in multimodal embeddings even without fine-tuning. (Link to notebook)

Enhanced Unstructured Document Processing

The Partition and PartitionTransformer components now include additional chunking strategies and enhancements, which divide content into meaningful units based on the document's structure or the number of characters.

  • New Chunking Strategies (Link to notebook)
    • Character Number Strategy (maxCharacters): Split documents by number of characters.
    • Title-Based Chunking Strategy (byTitle): Split documents by the titles they contain. Additional settings:
      • Soft Chunking Limit (newAfterNChars): Allows for early section breaks before reaching the maxCharacters threshold.
      • Contextual Overlap (overlapAll): Adds trailing context from the previous chunk to the next, improving semantic continuity.
  • Enhancements
    • Page Boundary Splitting: Respects pageNumber metadata and starts a new section when a page changes.
    • Title Inclusion Behavior: Ensures titles are embedded within the following content rather than forming isolated chunks.
    • New XML Reader: This release introduces a new feature that enables reading and parsing XML files into a structured Spark DataFrame. (Link to notebook)
      • Added sparknlp.read().xml(): This method accepts file paths of XML content.
      • Use in Partition: XML content can now be processed using the Partition annotator by setting content_type = "application/xml".
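Both entry points from the list above, sketched. `sparknlp.read().xml()` and the `content_type` value come from this release; the Partition import path is an assumption:

```python
import sparknlp
from sparknlp.partition.partition import Partition  # assumed import path

spark = sparknlp.start()

# Parse XML files into a structured DataFrame
xml_df = sparknlp.read().xml("./xml-files")

# Or route XML through the Partition annotator
result = Partition(content_type="application/xml").partition("./xml-files")
```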

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.3

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.3

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.3

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.2...6.0.3

6.0.2

28 May 15:19

📢 Spark NLP 6.0.2: Advancing Multimodal Capabilities and Streamlining Document Processing

We are thrilled to announce the release of Spark NLP 6.0.2! This version introduces powerful new multimodal models and significantly enhances document processing workflows. Upgrade to 6.0.2 to leverage these cutting-edge features and expand your NLP and vision task capabilities at scale.

Stay updated with our latest examples and tutorials by visiting our Medium - Spark NLP blog!

🔥 Highlights

  • Introducing InternVL: Support for the state-of-the-art InternVLForMultiModal model, enabling advanced visual question answering with InternVL 2, 2.5, and 3 series models.
  • Introducing Florence-2: Integration of Florence-2 in Florance2Transformer, a sophisticated vision foundation model for diverse prompt-based vision and vision-language tasks like captioning, object detection, and segmentation.
  • New Document Partitioning Feature: Added the Partition and PartitionTransformer annotator for a unified and configurable interface with Spark NLP readers, simplifying unstructured data loading.

🚀 New Features & Enhancements

Advanced Multimodal Model Integrations

This release significantly boosts Spark NLP's multimodal processing power with the integration of two new visual language models:

  • InternVL: InternVLForMultiModal is a powerful multimodal large language model specifically designed for visual question answering. This annotator is versatile, supporting the InternVL 2, 2.5, and 3 families of models, allowing users to tackle complex visual-linguistic tasks. (Link to notebook)
  • Florence-2: Introducing Florance2Transformer, an advanced vision foundation model. Florence-2 utilizes a prompt-based approach, enabling it to perform a wide array of vision and vision-language tasks. Users can leverage simple text prompts to execute tasks such as image captioning, object detection, and image segmentation with high accuracy. (Link to notebook)

Enhanced Unstructured Document Processing

  • Partitioning Documents: This release introduces the new Partition and PartitionTransformer annotator.
    • Partition provides a unified interface for extracting structured content from various document formats into Spark DataFrames. It supports input from files, URLs, in-memory strings, or byte arrays and handles formats such as text, HTML, Word, Excel, PowerPoint, emails, and PDFs. It automatically selects the appropriate reader based on file extension or MIME type and allows customization via parameters. (Link to notebook)
    • The PartitionTransformer annotator allows you to use the Partition feature more smoothly within existing Spark NLP workflows, enabling seamless reuse of your pipelines. PartitionTransformer can be used for extracting structured content from various document types using Spark NLP readers. It supports reading from files, URLs, in-memory strings, or byte arrays, and returns parsed output as a structured Spark DataFrame. (Link to notebook)
  • Key Improvements:
    • Simplifies integration with Spark NLP readers through a unified interface.
    • Adds flexibility by enabling more reader-specific configurations.
    • Enhances the maintainability and scalability of data loading workflows.
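A pipeline sketch showing how PartitionTransformer slots into an existing workflow. The import path and setter names are assumptions modeled on the Partition API described above; see the linked notebook for the exact interface:

```python
from pyspark.ml import Pipeline
from sparknlp.partition.partition_transformer import PartitionTransformer  # assumed path

# Extract structured content from HTML inside a regular Spark NLP pipeline
partitioner = PartitionTransformer() \
    .setInputCols(["text"]) \
    .setContentType("text/html") \
    .setOutputCol("partition")

pipeline = Pipeline(stages=[partitioner])
```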

🐛 Bug Fixes

  • Adjusted python type annotations for the AutoGGUFModel (#14576)

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.2

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.2

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.2

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.2

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.2</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.2</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.2</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.2</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.1...6.0.2

Spark NLP 6.0.1: SmolVLM, PaliGemma 2, Gemma 3, PDF Reader enhancements

14 May 19:43
6.0.1

📢 Spark NLP 6.0.1: Introducing New State-of-the-Art Vision-Language Models and Enhanced Document Processing

We are pleased to announce the release of Spark NLP 6.0.1, bringing exciting new vision features and continued enhancements. Expand your NLP capabilities at scale for a wide range of tasks by upgrading to 6.0.1 and leverage these powerful new additions and improvements!

We have also been publishing blog posts covering examples of our newest features. Check them out at Medium - Spark NLP!

🔥 Highlights

  • Added support for several new state-of-the-art vision-language models (VLMs), including Gemma 3, PaliGemma, PaliGemma 2, and SmolVLM.
  • Introduced new parameter options for the PDF Reader for enhanced document ingestion control.

🚀 New Features & Enhancements

New VLM Implementations

This release adds support for several cutting-edge VLMs, significantly expanding the range of tasks you can tackle with Spark NLP:

  • Gemma 3: The latest version of Google's lightweight, state-of-the-art open models. (link to notebook)
  • PaliGemma and PaliGemma 2: Integration of the original PaliGemma vision-language model by Google. This annotator can also read PaliGemma 2 models. (link to notebook)
  • SmolVLM: a small, fast, memory-efficient, and fully open-source 2B-parameter VLM. (link to notebook)

PDF Reader Enhancements

The PDF Reader now includes additional parameters and options, providing users with more flexible and controlled ingestion of PDF documents, improving handling of various PDF structures. (link to notebook)

You can now:

  • Use the splitPage parameter to control whether the extracted text is split by page
  • Use the onlyPageNum parameter to return only the number of pages in the document
  • Use the textStripper parameter to control output layout and formatting
  • Use the sort parameter to enable or disable sorting of extracted lines

🐛 Bug Fixes

This release also includes fixes for several issues:

  • Fixed a Python error in RoBertaForMultipleChoice that prevented these annotators from being loaded in Python
  • Fixed various typos and issues in our Jupyter notebook examples

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI

pip install spark-nlp==6.0.1

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.1

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.1

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.1

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.1</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.1</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.1</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.1</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.0...6.0.1

Spark NLP 6.0.0: PDF Reader, Excel Reader, PowerPoint Reader, Vision Language Models, Native Multimodal in GGUF, and many more!

28 Apr 19:13
3fce83f

📢 Spark NLP 6.0.0: A New Era for Universal Ingestion and Multimodal LLM Processing at Scale

From raw documents to multimodal insights at enterprise scale

With Spark NLP 6.0.0, we are setting a new standard for building scalable, distributed AI pipelines. This release transforms Spark NLP from a pure NLP library into the de facto platform for distributed LLM ingestion and multimodal batch processing.

This release introduces native ingestion for enterprise file types including PDFs, Excel spreadsheets, PowerPoint decks, and raw text logs, with automatic structure extraction, semantic segmentation, and metadata preservation — all in scalable, zero-code Spark pipelines.

At the same time, Spark NLP now natively supports Vision-Language Models (VLMs), loading quantized multimodal models like LLAVA, Phi Vision, DeepSeek Janus, and Llama 3.2 Vision directly via Llama.cpp, ONNX, and OpenVINO runtimes, with no external inference servers and no API bottlenecks.

With 6.0.0, Spark NLP offers a complete, distributed architecture for universal data ingestion, multimodal understanding, and LLM batch inference at scale — enabling retrieval-augmented generation (RAG), document understanding, compliance audits, enterprise search, and multimodal analytics — all within the native Spark ecosystem.

One unified framework. Text, vision, documents — at Spark scale. Zero boilerplate. Maximum performance.


🌟 Spotlight Feature: AutoGGUFVisionModel — Native Multimodal Inference with Llama.cpp

Spark NLP 6.0.0 introduces the new AutoGGUFVisionModel, enabling native multimodal inference for quantized GGUF models directly within Spark pipelines. Powered by Llama.cpp, this annotator makes it effortless to run Vision-Language Models (VLMs) like LLAVA-1.5-7B Q4_0, Qwen2 VL, and others fully on-premises, at scale, with no external servers or APIs required.

With Spark NLP 6.0.0, Llama.cpp vision models are now first-class citizens inside DataFrames, delivering multimodal inference at scale with native Spark performance.

Why it matters

For the first time, Spark NLP supports pure vision-text workflows, allowing you to pass raw images and captions directly into LLMs that can describe, summarize, or reason over visual inputs.
This unlocks batch multimodal processing across massive datasets with Spark’s native scalability — perfect for product catalogs, compliance audits, document analysis, and more.

How it works

  • Accepts raw image bytes (not Spark's OpenCV format) for true end-to-end multimodal inference.
  • Provides a convenient helper function ImageAssembler.loadImagesAsBytes to prepare image datasets effortlessly.
  • Supports all Llama.cpp runtime parameters like context length (nCtx), top-k/top-p sampling, temperature, and repeat penalties, allowing fine control over completions.

Example usage

from sparknlp.base import DocumentAssembler, ImageAssembler
from sparknlp.annotator import AutoGGUFVisionModel
from pyspark.ml import Pipeline
from pyspark.sql.functions import lit

documentAssembler = DocumentAssembler() \
    .setInputCol("caption") \
    .setOutputCol("caption_document")

imageAssembler = ImageAssembler() \
    .setInputCol("image") \
    .setOutputCol("image_assembler")

data = ImageAssembler \
    .loadImagesAsBytes(spark, "src/test/resources/image/") \
    .withColumn("caption", lit("Caption this image."))

model = AutoGGUFVisionModel.pretrained() \
    .setInputCols(["caption_document", "image_assembler"]) \
    .setOutputCol("completions") \
    .setBatchSize(4) \
    .setNPredict(40) \
    .setTopK(40) \
    .setTopP(0.95) \
    .setTemperature(0.05)

pipeline = Pipeline().setStages([documentAssembler, imageAssembler, model])
results = pipeline.fit(data).transform(data)
results.selectExpr("reverse(split(image.origin, '/'))[0] as image_name", "completions.result").show(truncate=False)

📚 A full notebook walkthrough is available here.


🔥 New Features & Enhancements

PDF Reader

Font-aware PDF ingestion is now available with automatic page segmentation, encrypted file support, and token-level coordinate extraction, ideal for legal discovery and document Q&A.

Excel Reader

Spark NLP can now ingest .xls and .xlsx files directly into Spark DataFrames with automatic schema detection, multiple sheet support, and rich-text extraction for LLM pipelines.

PowerPoint Reader

Spark NLP introduces a native reader for .ppt and .pptx files. Capture slides, speaker notes, themes, and alt text at the document level for downstream summarization and retrieval.

Extractor and Cleaner Annotators

New Extractor and Cleaner annotators allow you to pull structured data (emails, IP addresses, dates) from text or clean noisy text artifacts like bullets, dashes, and non-ASCII characters at scale.
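
The kind of extraction and cleanup these annotators perform can be illustrated with plain regular expressions. This is a conceptual pure-Python sketch with simplified patterns, not the annotators' actual API, which runs distributed over Spark DataFrames:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract(text: str) -> dict:
    # Pull structured fields out of free text, as the Extractor does.
    return {
        "emails": EMAIL.findall(text),
        "ips": IPV4.findall(text),
        "dates": DATE.findall(text),
    }

def clean(text: str) -> str:
    # Strip leading bullet/dash characters and non-ASCII noise,
    # as the Cleaner does.
    text = re.sub(r"^[\u2022\-\*]\s*", "", text, flags=re.MULTILINE)
    return text.encode("ascii", "ignore").decode()

sample = "\u2022 Contact ops@example.com from 10.0.0.1 on 2025-01-31"
print(extract(sample))
print(clean(sample))
```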

Text Reader

A high-performance TextReader is now available to load .txt, .csv, .log and similar files. It automatically detects encoding and line endings for massive ingestion jobs.

AutoGGUFVisionModel for Multimodal Llama.cpp Inference

Spark NLP now supports vision-language models in GGUF format using the new AutoGGUFVisionModel annotator. Run models like LLAVA-1.5-7B Q4_0 or Qwen2 VL entirely within Spark using Llama.cpp, enabling native multimodal batch inference without servers.

DeepSeek Janus Multimodal Model

The DeepSeek Janus model, tuned for instruction-following across text and images, is now fully integrated and available via a simple pretrained call.

Qwen-2 Vision-Language Model Catalog

Support for Alibaba’s Qwen-2 VL series (0.5B to 7B parameters) is now available. Use Qwen-2 checkpoints for OCR, product search, and multimodal retrieval tasks with unified APIs.

Native Multimodal Support with Phi-3.5 Vision

The new Phi3Vision annotator brings Microsoft’s Phi-3.5 multimodal model into Spark NLP. Process images and prompts together to generate grounded captions or visual Q&A results, all with a model footprint of less than 1 GB.

LLAVA 1.5 Vision-Language Transformer

Spark NLP now supports LLAVA 1.5 (7B) natively for screenshot Q&A, chart reading, and UI testing tasks. Build fully distributed multimodal inference pipelines without external services or dependencies.

Native Cohere Command-R Models

Cohere’s multilingual Command-R models (up to 35B parameters) are now fully integrated. Perform reasoning, RAG, and summarization tasks with no REST API latency and no token limits.

OLMo Family Support

Spark NLP now supports the full OLMo suite of open-weight language models (7B, 1.7B, and more) directly in Scala and Python. OLMo models come with full training transparency, Dolma-sized vocabularies, and reproducible experiment logs, making them ideal for academic research and benchmarking.

Multiple-Choice Heads for LLMs

New lightweight multiple-choice heads are now available for ALBERT, DistilBERT, RoBERTa, and XLM-RoBERTa models. These are perfect for building auto-grading systems, educational quizzes, and choice ranking pipelines.

  • AlbertForMultipleChoice
  • DistilBertForMultipleChoice
  • RoBertaForMultipleChoice
  • XlmRoBertaForMultipleChoice
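
Under the hood, a multiple-choice head assigns one logit per (context, choice) pair and selects the argmax. That selection step can be sketched in plain Python; the logits below are made up for illustration, not produced by a real model:

```python
import math

def pick_choice(logits: list[float], choices: list[str]) -> str:
    """Softmax over per-choice logits, then return the argmax choice."""
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(choices)), key=probs.__getitem__)
    return choices[best]

choices = ["Paris", "Rome", "Berlin"]
logits = [3.1, 0.4, 1.2]  # hypothetical per-choice scores from the head
print(pick_choice(logits, choices))  # highest logit wins: Paris
```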

VisionEncoderDecoder Improvements

The Scala API for VisionEncoderDecoder has been fully refactored to expose .generate() parameters like batch size and maximum tokens, aligning it one-to-one with the Python API.


🐛 Bug Fixes

Better GGUF Error Reporting

When a GGUF file is missing tensors or uses unsupported quantization, Spark NLP now provides clear and actionable error messages, including guidance on how to fix or convert the model.

Fixed MXBAI Typo

A small typo related to the MXBAI integration was corrected to ensure consistency across annotator names and pretrained model references.

VisionEncoderDecoder Alignment

The Scala VisionEncoderDecoder wrapper has been updated to fully match the Python API. It now exposes parameters like batch size and maximum tokens, fixing discrepancies that could occur in cross-language pipelines.

Minor Naming Improvements

Variable naming inconsistencies have been cleaned up throughout the codebase to ensure a more uniform and predictable developer experience.


📝 Models

We have added more than 110,000 new models and pipelines. The complete list of all 88,000+ models & pipelines in 230+ languages is available on our Models Hub.


❤️ Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas,
    and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==6.0.0

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_...

Spark NLP 5.5.3: Enhanced Embeddings, Fixed Attention Masks, Performance Optimizations, and 100K Free Models

30 Jan 16:15
7d2bed7

📢 Spark NLP: Enhanced Embeddings, Fixed Attention Masks, and Performance Optimizations

Introduction

We’re excited to introduce the latest release of Spark NLP 5.5.3, featuring critical enhancements and bug fixes for several of our Text Embeddings annotators. These improvements ensure even more reliable and efficient performance for your NLP workflows.

But that’s not all—we’re also celebrating a major milestone: crossing 100,000 truly free and open models on our Models Hub! This achievement underscores our commitment to making state-of-the-art NLP accessible to everyone, forever.

Upgrade today to take advantage of these enhancements, and thank you for being part of the Spark NLP community. Your support and contributions continue to drive innovation forward!

🔥 Highlights

  • Enhanced BGE Embeddings with configurable pooling strategies
  • Fixed attention mask padding across multiple embedding models
  • Major performance optimizations for transformer models
  • Improved model default configurations and traits

🚀 New Features

Enhanced BGE Embeddings

Previously, BGE embeddings used a fixed pooling strategy that didn't match all model variants, resulting in suboptimal performance for some models (cosine similarity around 0.97 compared to the original implementation). Different BGE models are trained with different pooling strategies - some use CLS token pooling while others use attention-based average pooling.

  • Added new useCLSToken parameter to control embedding pooling strategy
  • Changed default pretrained model from "bge_base" to "bge_small_en_v1.5"

val embeddings = BGEEmbeddings.pretrained("bge_small_en_v1.5")
  .setUseCLSToken(true)  // Use CLS token pooling (default)
  .setInputCols("document")
  .setOutputCol("embeddings")
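
The two pooling strategies that useCLSToken switches between can be illustrated with toy vectors in plain Python (a conceptual sketch, not Spark NLP code):

```python
def cls_pool(token_embs, attention_mask):
    # CLS pooling: the embedding of the first ([CLS]) token.
    return token_embs[0]

def mean_pool(token_embs, attention_mask):
    # Attention-masked average: padded positions (mask == 0) are excluded,
    # which is what the attention-mask fixes in this release ensure.
    dim = len(token_embs[0])
    total = [0.0] * dim
    count = 0
    for emb, m in zip(token_embs, attention_mask):
        if m:
            total = [t + e for t, e in zip(total, emb)]
            count += 1
    return [t / count for t in total]

tokens = [[1.0, 0.0], [0.0, 1.0], [9.9, 9.9]]  # last position is padding
mask = [1, 1, 0]
print(cls_pool(tokens, mask))   # [1.0, 0.0]
print(mean_pool(tokens, mask))  # [0.5, 0.5] -- padding is ignored
```

With the wrong strategy, the padding vector [9.9, 9.9] would leak into the average, which is exactly the kind of mismatch that caused the ~0.97 cosine similarity noted above.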

🛠 Improvements & Bug Fixes

Attention Mask Fixes

Fixed incorrect padding in attention mask calculations for multiple models:

  • MPNet
  • BGE
  • E5
  • Mxbai
  • Nomic
  • SnowFlake
  • UAE

This fix ensures consistent results between native implementations and ONNX versions.

Other Fixes

  • Fixed Llama3 download issues in Python
  • Optimized OpenVINO and ONNX inference paths
  • Enhanced code cleanup and standardization

🔄 Breaking Changes

BGE Embeddings Updates

  1. Default Model Change:

    • Old default: "bge_base"
    • New default: "bge_small_en_v1.5"
    • Action required: Explicitly specify "bge_base" if needed
  2. Pooling Strategy:

    • New useCLSToken parameter defaults to True
    • May affect embedding calculations
    • Action required: Verify existing implementations and set parameter explicitly if needed

💡 Usage Examples

Specifying BGE Model Version

// Using new default
val embeddingsNew = BGEEmbeddings.pretrained()

// Using previous default explicitly
val embeddingsOld = BGEEmbeddings.pretrained("bge_base")

Configuring Pooling Strategy

// Using CLS token pooling
val embeddingsCLS = BGEEmbeddings.pretrained()
  .setUseCLSToken(true)

// Using attention-based average pooling
val embeddingsAvg = BGEEmbeddings.pretrained()
  .setUseCLSToken(false)

❤️ Community support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas,
    and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

Installation

Python

#PyPI

pip install spark-nlp==5.5.3

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:5.5.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:5.5.3

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:5.5.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:5.5.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:5.5.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:5.5.3

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>5.5.3</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>5.5.3</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>5.5.3</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>5.5.3</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 5.5.2...5.5.3