# SecuTable: A Dataset for Semantic Table Interpretation in Security Domain

## Dataset Overview
Security datasets are scattered across the Internet (CVE, CAPEC, CWE, etc.) and provided in CSV, JSON or XML formats. This makes it difficult to get a holistic view of how information is interconnected across the different data sources. In addition, many datasets focus on specific attack vectors or limited environments, which limits generalisability, and detailed annotations are often lacking, making it difficult to train supervised learning models.
To address these limitations, security data can be extracted from diverse data sources, organised in a tabular format and linked to existing knowledge graphs (KGs). This process is called Semantic Table Interpretation (STI). The KG schema helps align different terminologies and understand the relationships between concepts.
Although humans can manually annotate tabular data, understanding the semantics of tables and annotating large volumes of data remains complex, resource-intensive and time-consuming. This has motivated scientific challenges such as the Tabular Data to Knowledge Graph Challenge (SemTab, https://www.cs.ox.ac.uk/isg/challenges/sem-tab/).
We provide in this repository the SecuTable dataset. This dataset aims to provide a holistic view of security data extracted from security data sources and organised in tables. It is constructed using the pipeline presented in the following figure:
## Dataset
The current version of the dataset consists of three releases:
- The first release (here) contains the first version of the dataset, composed of 1135 tables.
- The second release (here) consists of 1554 tables. This release is being used to evaluate the capabilities of open-source LLMs on semantic table interpretation tasks during the SemTab challenge (https://sem-tab-challenge.github.io/2025/) hosted by the 24th International Semantic Web Conference (ISWC) 2025. It is composed of two folders. The first folder contains the ground truth, composed of 76 tables corresponding to 8922 entities; this subset shows users of the SecuTable dataset how the annotation should be done.
## Dataset evaluation
The evaluation was conducted by running several experiments with open-source LLMs (Mistral, Falcon) and a closed-source LLM (GPT-4o mini) on the ground truth of 76 tables, considering the three main tasks of semantic table interpretation:
- Cell Entity Annotation (CEA)
- Column Type Annotation (CTA)
- Column Property Annotation (CPA).
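As a toy illustration of what the three tasks produce, consider a small CWE table. The target URIs below are taken from the example answers in the prompts later in this card; the dictionary layout itself is only an illustrative convention, not the dataset's actual annotation format.

```python
# Toy illustration of the three STI tasks on a small CWE table.
# URI targets are taken from the prompt examples in this card;
# the dict layout is illustrative, not the dataset's annotation format.
table = {
    "header": ["CWE-ID", "Name"],
    "rows": [
        ["20", "Improper Input Validation"],
        ["94", "Improper Control of Generation of Code ('Code Injection')"],
    ],
}

# CEA: link one cell (row 0, column 1) to an entity, here in Wikidata
cea = {(0, 1): "http://www.wikidata.org/entity/Q6007765"}

# CTA: link column 0 to its type in the SEPSES KG
cta = {0: "http://w3id.org/sepses/vocab/ref/cwe#CWE"}

# CPA: link the relation between columns 0 and 1 to a SEPSES property
cpa = {(0, 1): "http://w3id.org/sepses/vocab/ref/cwe#name"}
```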
## Prompts
For our experiments, we designed a set of prompts to solve the STI tasks presented above.
- CPA prompts

```python
## gpt4, mistral and Falcon prompt
# `prompt` holds the task-specific question built from the table
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Your domain is to provide the uri of the data property or object property in the sepses knowledge graph for CWE entities"
        "please do not include any other text in your response, just give uri of the entity if you know it."
        "If you don't know the uri of the entity, please return 'I don't know' as the value."
        "I don't want any other text in your response, just the value(uri or I don't know)."
        "Here are few examples of your tasks: "
        "Question: Please, which SEPSes URI property has Name as value?"
        "http://w3id.org/sepses/vocab/ref/cwe#name"
        "Question: Please, which SEPSes URI property has abstraction as value?"
        "http://w3id.org/sepses/vocab/ref/cwe#abstraction"
        "Question: Please, which SEPSes URI property has Related Weaknesses as value?"
        "http://w3id.org/sepses/vocab/ref/cwe#hasRelatedWeakness"
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]
```

- CTA prompts
```python
## mistral and falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Your domain is to provide the uri of the entity in the sepses knowledge graph for CWE entities."
        "please do not include any other text in your response, just give uri of the entity if you know it."
        "If you don't know the uri of the entity, please return 'I don't know'; if you are unable to find the entity in the knowledge graph, please return NIL as the value."
        "I don't want any other text in your response, just the value(uri or NIL or I don't know)."
        "don't include explanation or any other text, Note that the answer should be only in the three cases above."
        "Here are few examples of your tasks: "
        "Question: Please what is sepses uri of the entity type of these entities: ['94', '59', 'CWE-ID', '200']"
        "http://w3id.org/sepses/vocab/ref/cwe#CWE"
        "Question: Please what is sepses uri of the entity type of these entities: ['94', '59', '787', '200']"
        "http://w3id.org/sepses/vocab/ref/cwe#CWE"
        "Question: Please what is sepses uri of the entity type of these entities: ['Alternate Terms']"
        "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction"
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]

# GPT prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Respond with a JSON object containing the key 'response'."
        "please do not include any other text in your response, just give uri of the entity if you know it."
        "If you don't know the uri of the entity, please return 'I don't know' as the value."
        "Provide your answer without Justification, notes, etc. Only the answer is required."
        "Here are few examples of your tasks: "
        "Question: Please what is sepses uri of the entity type of these entities: ['94', '59', 'CWE-ID', '200']"
        "http://w3id.org/sepses/vocab/ref/cwe#CWE"
        "Question: Please what is sepses uri of the entity type of these entities: ['94', '59', '787', '200']"
        "http://w3id.org/sepses/vocab/ref/cwe#CWE"
        "Question: Please what is sepses uri of the entity type of these entities: ['Alternate Terms']"
        "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction"
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]
```

- CEA prompts
CEA wikidata prompt

```python
## gpt-4o-mini, mistral and falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Your domain is to provide the uri of the entity in the wikidata knowledge graph for CWE entities."
        "please do not include any other text in your response, just give uri of the entity if you know it."
        "If you don't know the uri of the entity, please return 'I don't know' as the value."
        "I don't want any other text in your response, just the value(uri or I don't know)."
        "don't include explanation or any other text, Note that the answer should be only in the three cases above."
        "Here are few examples of your tasks: "
        "Question: Please what is wikidata uri of Improper Input Validation entity?"
        "http://www.wikidata.org/entity/Q6007765"
        "Question: Please what is wikidata uri of Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') entity?"
        "http://www.wikidata.org/entity/Q442856"
    },
    {"role": "user", "content": f"{prompt}"},
]
```

CEA Sepses prompts
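Whichever task is targeted, a messages list like those above can be assembled programmatically before being sent to a chat model. Below is a minimal sketch for the CTA task; `build_cta_messages` is a hypothetical helper, and its system text abbreviates the full instructions and omits the few-shot examples shown above.

```python
def build_cta_messages(column_values):
    """Assemble a CTA chat prompt for the values of one column.

    Hypothetical helper: the system text below abbreviates the full
    instructions used in the experiments and omits the few-shot examples.
    """
    question = (
        "Please what is sepses uri of the entity type of these entities: "
        f"{column_values}"
    )
    return [
        {
            "role": "system",
            "content": "You are a helpful assistant on semantic table "
            "interpretation in Cybersecurity domain. Reply with only the "
            "uri, or 'I don't know'.",
        },
        {"role": "user", "content": f"\n{question}"},
    ]


messages = build_cta_messages(["94", "59", "787", "200"])
```

The resulting list can then be passed as the `messages` argument of a chat-completion call for any of the three models.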
In the first set of experiments, we only consider whether the LLMs can reply to the question, without selective prediction, as presented in this picture:

In the second set of experiments, we take into account the fact that the LLMs can say "I don't know", as seen in this picture:
## Results
This section presents the performance of three LLMs (Mistral, Falcon3 and GPT-4o mini) on the three STI tasks (CEA, CPA, CTA) within the cybersecurity domain, using both Wikidata and SEPSES as knowledge graphs. Note that only the CEA task was performed with Wikidata.
### CPA task results
This task consists of linking the relationship between two entities in the table to its corresponding property in the SEPSES knowledge graph. The following table summarizes the baseline results obtained for this task.
| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.403 | 0.400 | 0.402 |
| GPT-4o mini | 0.505 | 0.502 | 0.504 |
| Falcon3-7b-instruct | 0.436 | 0.433 | 0.435 |
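For reference, the F1-scores reported in these tables are consistent with the harmonic mean of the reported precision and recall, up to rounding of the inputs. A quick check on Mistral's row, as a sketch:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Mistral's CPA precision and recall from the table above
f1 = f1_score(0.403, 0.400)
# f1 is ~0.4015, i.e. the reported 0.402 up to rounding of the inputs
```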
### CTA task results
This task consists of linking the entity type of a column to its corresponding type in the SEPSES knowledge graph. The baseline results obtained for this task are presented in the following table.
| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.119 | 0.119 | 0.119 |
| GPT-4o mini | 0.143 | 0.143 | 0.143 |
| Falcon3-7b-instruct | 0.133 | 0.133 | 0.133 |
### CEA task results
This task consists of linking each cell entity in the table to its corresponding entity in the knowledge graph. For this task, we used both the Wikidata and SEPSES KGs.
#### Results with wikidata KG
This table shows the performance of the LLMs on the CEA task using Wikidata as the KG.
| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.011 | 0.011 | 0.011 |
| GPT-4o mini | 0.014 | 0.014 | 0.014 |
| Falcon3-7b-instruct | 0.013 | 0.013 | 0.013 |
#### Results with Sepses KG
The results are divided into two parts: the first part presents the results without selective prediction, and the second part presents the results with selective prediction.
**Results without selective prediction.** The following table shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.
| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.109 | 0.109 | 0.109 |
| GPT-4o mini | 0.219 | 0.219 | 0.219 |
| Falcon3-7b-instruct | 0.319 | 0.319 | 0.319 |
**Results with selective prediction.** The following table shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.
| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.0019 | 0.0019 | 0.0019 |
| GPT-4o mini | 0.0154 | 0.0154 | 0.0154 |
| Falcon3-7b-instruct | 0.0087 | 0.0087 | 0.0087 |

The next table shows the coverage obtained when the LLMs are allowed to say "I don't know" whenever they do not know the answer.
| Model | Coverage |
|---|---|
| Mistral | 0.252 |
| GPT-4o mini | 0.456 |
| Falcon3-7b-instruct | 0.270 |
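Coverage here is the fraction of queries on which a model commits to an answer instead of abstaining. A minimal sketch of how such a score can be computed, on toy predictions (the abstention marker follows the "I don't know" convention of the prompts; these are not the actual model outputs):

```python
def coverage(predictions, abstain_marker="I don't know"):
    """Fraction of queries the model actually answers (does not abstain on)."""
    answered = [p for p in predictions if p != abstain_marker]
    return len(answered) / len(predictions)


# Toy predictions: two answers and two abstentions -> coverage 0.5
preds = [
    "http://w3id.org/sepses/vocab/ref/cwe#CWE",
    "I don't know",
    "http://w3id.org/sepses/vocab/ref/cwe#name",
    "I don't know",
]
print(coverage(preds))  # 0.5
```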
## Artifacts
The code for reproducibility is available in the SecuTable repository.
## Citations