---
tags:
  - Data-Science
  - Machine-Learning
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:167112
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      We can actually think of finding the best discriminant function as the
      task of finding the best distance function.
    sentences:
      - This is another example of bias/variance dilemma.
      - This process is repeated until only C K remains.
      - >-
        A central issue here is to estimate an m-dimensional random vector θ,
        using optimal sequential selection of observations, which are based on
        feedback from preceding observations; see Fig.
  - source_sentence: This remains true even if we introduce discounting.
    sentences:
      - >-
        Since the diffusion model is stochastic by nature, it's possible to
        generate multiple images that are conditioned on the same caption.
      - The investor wants to maximize the expected value of the sale.
      - >-
        Thus, we can define the return, in general, according to (3.2), using
        the convention of omitting episode numbers when they are not needed, and
        including the possibility that γ = 1 if the sum remains defined (e.g.,
        because all episodes terminate).
  - source_sentence: >-
      With DCGAN, Radford and his collaborators introduced techniques and
      optimizations that allowed ConvNets to scale up to the full GAN framework
      without the need to modify the underlying GAN architecture and without
      reducing GAN to a subroutine of a more complex model framework, like
      LAPGAN.
    sentences:
      - >-
        We can define a discriminant function as


        We can write the discriminant as the product of the likelihood ratio and
        the ratio of priors:


        If the priors are equal, the discriminant is the likelihood ratio.
      - >-
        For the generator, the attribute c can be appended to the latent vector
        z.
      - >-
        Let's take a closer look at what batch normalization is and how it
        works.
  - source_sentence: Suppose ||u|| = 1, so that u is a unit vector.
    sentences:
      - >-
        Importantly, for semi-supervised learning to work, the labeled and
        unlabeled data must come from the same underlying distribution.
      - >-
        For example, in the context 'strings_of_ch', one might predict the next
        nine symbols to be 'aracters_' with a probability of 0.99 each.
      - >-
        We can think of any other vector v as consisting of two components: (a)
        a component in the direction of u and (b) a component that's
        perpendicular to u.
  - source_sentence: |-
      In doing this, there are three decisions we must make:

      1.
    sentences:
      - >-
        g(•) defines the hypothesis class H , and a particular value of θ
        instantiates one hypothesis h ∈ H .
      - >-
        Although replacing traces (Section 7.8) are known to have advantages in
        tabular methods, replacing traces do not directly extend to the use of
        function approximation.
      - >-
        Samples are also used in tests of various sorts (e.g., pricing, web
        treatments).
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
model-index:
  - name: SentenceTransformer
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: val
          type: val
        metrics:
          - type: pearson_cosine
            value: null
            name: Pearson Cosine
          - type: spearman_cosine
            value: null
            name: Spearman Cosine
license: apache-2.0
language:
  - en
base_model:
  - sentence-transformers/all-mpnet-base-v2
datasets:
  - DigitalAsocial/ds-tb-17-g
---

SentenceTransformer

This is a sentence-transformers model fine-tuned from sentence-transformers/all-mpnet-base-v2 on the DigitalAsocial/ds-tb-17-g dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base Model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: DigitalAsocial/ds-tb-17-g
  • Language: English
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
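
For readers who want to see what these three modules do, here is a minimal sketch that reproduces the pipeline with plain transformers: the MPNet encoder, mean pooling over the token embeddings, and L2 normalization. The repository id is an assumption taken from the citation section below; the SentenceTransformer class in the Usage section handles all of this automatically.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Assumed repository id (taken from the citation section of this card).
model_id = "DigitalAsocial/all-mpnet-base-v2-ds-rag-17g"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(["Suppose ||u|| = 1, so that u is a unit vector."],
                  padding=True, truncation=True, max_length=384, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state            # (0) Transformer
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)   # (1) mean Pooling
embedding = F.normalize(embedding, p=2, dim=1)                       # (2) Normalize
print(embedding.shape)  # torch.Size([1, 768])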

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17g")
# Run inference
sentences = [
    'In doing this, there are three decisions we must make:\n\n1.',
    'g(•) defines the hypothesis class H , and a particular value of θ instantiates one hypothesis h ∈ H .',
    'Although replacing traces (Section 7.8) are known to have advantages in tabular methods, replacing traces do not directly extend to the use of function approximation.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3814, 0.1328],
#         [0.3814, 1.0000, 0.1478],
#         [0.1328, 0.1478, 1.0000]])
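
Because the model is intended for retrieval-style use, a small semantic search sketch may also help. It uses sentence_transformers.util.semantic_search over a toy corpus; the corpus sentences are illustrative only (borrowed from the widget examples above), and the repository id is again taken from the citation section.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17g")

# Toy corpus borrowed from the widget examples above.
corpus = [
    "The investor wants to maximize the expected value of the sale.",
    "Let's take a closer look at what batch normalization is and how it works.",
    "We can think of any other vector v as consisting of two components.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("How does batch normalization work?", convert_to_tensor=True)

# Retrieve the two most similar corpus sentences for the query.
for hit in util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")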

Evaluation

Metrics

Semantic Similarity

| Metric          | Value |
|-----------------|-------|
| pearson_cosine  | nan   |
| spearman_cosine | nan   |
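
These metric names match the output of Sentence Transformers' EmbeddingSimilarityEvaluator with the evaluator name "val". As a sketch only, this is roughly how such an evaluation is set up; the sentence pairs and gold scores below are placeholders, not the actual validation split.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Placeholder validation data: two sentence lists and gold similarity scores in [0, 1].
sentences1 = ["This remains true even if we introduce discounting.",
              "Suppose ||u|| = 1, so that u is a unit vector."]
sentences2 = ["The investor wants to maximize the expected value of the sale.",
              "We can think of any other vector v as consisting of two components."]
gold_scores = [0.7, 0.8]  # illustrative values only

model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17g")
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="val")
print(evaluator(model))  # e.g. {'val_pearson_cosine': ..., 'val_spearman_cosine': ...}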

Training Details

Training Dataset

Training Data

The model was fine-tuned on sentences extracted from 17 reference books in Data Science and Machine Learning, listed below. All source books were preprocessed with GROBID, an open-source tool for extracting and structuring text from PDF documents: the raw PDF files were converted into structured text, segmented into sentences, and cleaned before being used for training. This ensured consistent formatting and reliable sentence boundaries across the dataset.

  1. Aßenmacher, Matthias. Multimodal Deep Learning. Self-published, 2023.
  2. Bertsekas, Dimitri P. A Course in Reinforcement Learning. Arizona State University.
  3. Boykis, Vicki. What are Embeddings. Self-published, 2023.
  4. Bruce, Peter, and Andrew Bruce. Practical Statistics for Data Scientists: 50 Essential Concepts. O’Reilly Media, 2017.
  5. Daumé III, Hal. A Course in Machine Learning. Self-published.
  6. Deisenroth, Marc Peter, A. Aldo Faisal, and Cheng Soon Ong. Mathematics for Machine Learning. Cambridge University Press, 2020.
  7. Devlin, Hannah, Guo Kunin, Xiang Tian. Seeing Theory. Self-published.
  8. Gutmann, Michael U. Pen & Paper: Exercises in Machine Learning. Self-published.
  9. Jung, Alexander. Machine Learning: The Basics. Springer, 2022.
  10. Langr, Jakub, and Vladimir Bok. Deep Learning with Generative Adversarial Networks. Manning Publications, 2019.
  11. MacKay, David J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
  12. Montgomery, Douglas C., Cheryl L. Jennings, and Murat Kulahci. Introduction to Time Series Analysis and Forecasting. 2nd Edition, Wiley, 2015.
  13. Nilsson, Nils J. Introduction to Machine Learning: An Early Draft of a Proposed Textbook. Stanford University, 1996.
  14. Prince, Simon J.D. Understanding Deep Learning. Draft Edition, 2024.
  15. Shashua, Amnon. Introduction to Machine Learning. The Hebrew University of Jerusalem, 2008.
  16. Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. 2nd Edition, MIT Press, 2018.
  17. Alpaydin, Ethem. Introduction to Machine Learning. 3rd Edition, MIT Press, 2014.

⚠️ Note: Due to copyright restrictions, the full text of these books is not included in this repository. Only the fine-tuned model weights are shared.
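
As a rough illustration of the preprocessing described above, the sketch below pulls paragraph text out of a GROBID TEI file and splits it into sentences. The exact cleaning and segmentation rules used for this dataset are not published, so the file path and the regex-based splitter are purely illustrative.

import re
from xml.etree import ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def tei_to_sentences(tei_path: str) -> list[str]:
    """Extract <p> paragraphs from a GROBID TEI file and split them into sentences."""
    root = ET.parse(tei_path).getroot()
    sentences = []
    for p in root.findall(".//tei:p", TEI_NS):
        paragraph = " ".join("".join(p.itertext()).split())
        # Naive segmentation on ., ! or ? followed by whitespace; the real pipeline
        # may use a proper sentence splitter and additional cleaning.
        sentences.extend(s for s in re.split(r"(?<=[.!?])\s+", paragraph) if s)
    return sentences

# Hypothetical usage on a GROBID-converted book:
# print(tei_to_sentences("grobid_output/alpaydin_2014.tei.xml")[:5])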

DigitalAsocial/ds-tb-17-g

  • Size: 167,112 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:

    |         | sentence_0                                         | sentence_1                                         |
    |---------|----------------------------------------------------|----------------------------------------------------|
    | type    | string                                             | string                                             |
    | details | min: 7 tokens, mean: 31.74 tokens, max: 384 tokens | min: 8 tokens, mean: 33.03 tokens, max: 384 tokens |
  • Samples:
    sentence_0 sentence_1
    The weights w are not given but they can be estimated using the training set of X which we can divide as [X, r]. As we see in equation 14.14, what we are effectively doing is estimating the posterior p(w|X, r) and then integrating over it.
    These methodologies are now mature and provide † A common description is that "the machine learns sequentially how to make decisions that maximize a reward signal, based on the feedback received from the environment." At the same time, RL and machine learning have ushered opportunities for the application of DP techniques in new domains, such as machine translation, image recognition, knowledge representation, database organization, large language models, and automated planning, where they can have a significant practical impact.
    Using Lagrange multipliers (Section 7.2), we will derive the dual optimization problem of the SVM in Section 12.3. We subtract the value of ξ n from the margin, constraining ξ n to be non-negative.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
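
A minimal sketch of constructing this loss with the parameters listed above (the base checkpoint is the one named in the card metadata):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
# scale and similarity_fct match the parameters above; gather_across_devices is left at False.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)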
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 6
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
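
The sketch below shows how these non-default hyperparameters map onto SentenceTransformerTrainingArguments. The split name, column layout, and output directory are assumptions; the loss matches the earlier sketch.

from datasets import load_dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments, losses)
from sentence_transformers.training_args import MultiDatasetBatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
# Assumes a "train" split with sentence_0/sentence_1 columns; adjust to the actual layout.
train_dataset = load_dataset("DigitalAsocial/ds-tb-17-g", split="train")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="all-mpnet-base-v2-ds-rag-17g",   # assumed output directory
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=6,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()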

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch  | Step  | Training Loss | val_spearman_cosine |
|--------|-------|---------------|---------------------|
| 0.0479 | 500   | 1.5242        | -                   |
| 0.0957 | 1000  | 1.3208        | -                   |
| 0.1436 | 1500  | 1.2051        | -                   |
| 0.1915 | 2000  | 1.1532        | -                   |
| 0.2393 | 2500  | 1.0887        | -                   |
| 0.2872 | 3000  | 1.0238        | -                   |
| 0.3351 | 3500  | 0.9987        | -                   |
| 0.3830 | 4000  | 0.9498        | -                   |
| 0.4308 | 4500  | 0.9354        | -                   |
| 0.4787 | 5000  | 0.887         | -                   |
| 0.5266 | 5500  | 0.8547        | -                   |
| 0.5744 | 6000  | 0.8418        | -                   |
| 0.6223 | 6500  | 0.7828        | -                   |
| 0.6702 | 7000  | 0.7804        | -                   |
| 0.7180 | 7500  | 0.7495        | -                   |
| 0.7659 | 8000  | 0.7238        | -                   |
| 0.8138 | 8500  | 0.6807        | -                   |
| 0.8617 | 9000  | 0.6566        | -                   |
| 0.9095 | 9500  | 0.6528        | -                   |
| 0.9574 | 10000 | 0.6258        | -                   |
| 1.0    | 10445 | -             | nan                 |

Framework Versions

  • Python: 3.11.7
  • Sentence Transformers: 5.1.1
  • Transformers: 4.57.0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.10.1
  • Datasets: 4.2.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

If you use this model, please cite:

@misc{aghakhani2025synergsticrag,
  author       = {Danial Aghakhani Zadeh},
  title        = {Fine-tuned all-mpnet-base-v2 for Data Science RAG},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/DigitalAsocial/all-mpnet-base-v2-ds-rag-17g}}
}

Contact

For questions, feedback, or collaboration requests regarding this dataset/model, please contact the author, DigitalAsocial, via the Hugging Face Hub.