This dataset is a detokenized version of the ECMT dataset, produced with the kiwipiepy library.
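The linked script below contains the exact conversion logic; the snippet here is only a minimal sketch of the core idea, using `Kiwi.join()` to reconstruct surface text from morpheme/tag pairs. The example tokens are made up, and it assumes the dataset's `xpos` tags are compatible with Kiwi's tag set.

```python
from kiwipiepy import Kiwi

kiwi = Kiwi()

# Hypothetical tokens in the dataset's {"index", "text", "xpos"} form.
tokens = [
    {"index": 0, "text": "๋Ÿฌ์‹œ์•„", "xpos": "NNP"},
    {"index": 1, "text": "์˜", "xpos": "JKG"},
    {"index": 2, "text": "์ž‘๊ฐ€", "xpos": "NNG"},
    {"index": 3, "text": "์ด", "xpos": "VCP"},
    {"index": 4, "text": "๋‹ค", "xpos": "EF"},
    {"index": 5, "text": ".", "xpos": "SF"},
]

# Kiwi.join() reconstructs the surface string from (form, tag) pairs,
# inserting spaces where Korean orthography requires them.
detokenized = kiwi.join([(t["text"], t["xpos"]) for t in tokens])
print(detokenized)  # the reconstructed sentence text
```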

The script used to convert the dataset is here: https://gist.github.com/ianporada/a246ebf59696c6e16e1bc1873bc182a4

The library versions used are kiwipiepy==0.20.3 and kiwipiepy_model==0.20.0.

The dataset schema is as follows:

```python
{
  # the original document filename
  "doc_id": str,

  # a list of sentences in the document
  "sentences": [
    {
      "index": int,  # the index of the sentence within the document
      "detokenized_text": str,  # the sentence text as a single string (detokenized using kiwipiepy)

      # a list of (start, end) character offsets, one per token:
      # the token at index i covers detokenized_text[start:end]
      "token_positions": [(int, int), ...],

      # the original values of each token from the source dataset
      "tokens": [{"index": int, "text": str, "xpos": str}, ...],
    },
    ...
  ],

  # a list of coreference chains; each chain is a list of mentions
  # each mention is a list of the form [sentence_index, start_token_index, end_token_index],
  # where token indices are inclusive and relative to the given sentence
  "coref_chains": [[[int, int, int], ...], ...],
}
```
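As a quick illustration of how the fields fit together, the following sketch (a hypothetical helper, not part of the dataset tooling) resolves each coreference mention to its surface string, given a document dict `doc` that matches the schema above.

```python
def mention_text(doc, mention):
    """Resolve a [sentence_index, start_token_index, end_token_index] mention
    (inclusive token indices) to its surface string."""
    sent_idx, start_tok, end_tok = mention
    sentence = doc["sentences"][sent_idx]
    char_start = sentence["token_positions"][start_tok][0]
    char_end = sentence["token_positions"][end_tok][1]
    return sentence["detokenized_text"][char_start:char_end]

# Print the mentions of every coreference chain in a document `doc`.
for chain in doc["coref_chains"]:
    print([mention_text(doc, mention) for mention in chain])
```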