Results 1961 - 1970 of 2023
  • C-005006: EUROPARL Corpus Parallel Corpora: Portuguese-English
    The EUROPARL Corpus (Portuguese-English subpart of the parallel corpora) was extracted from the proceedings of the European Parliament. It contains transcriptions of sessions dating from 1996 to 2011, with a total of approximately 58,324,562 tokens of European Portuguese (L1) and 49,216,896 tokens of English (translation).

    The EUROPARL Corpus is composed of one text file for the English corpus and two files for the Portuguese version: a text file and an annotated file. The text version contains plain text and no further annotation. The Portuguese annotated file is a four-column file with one token per line, followed by a PoS tag and a lemma. The corpus was automatically PoS-tagged with the MBT tagger (http://ilk.uvt.nl/mbt/) and lemmatized with MBLEM (http://ilk.uvt.nl/mbma/), following the annotation scheme of the Corpus of Reference of Contemporary Portuguese.
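    For illustration, a file in this one-token-per-line layout can be loaded with a few lines of Python. This is a hypothetical sketch: the whitespace-separated column order (token first, followed by the PoS tag and lemma) and blank-line sentence separation are assumptions, not taken from the corpus documentation.

```python
def read_annotated(lines):
    """Group one-token-per-line annotation into sentences.

    Assumes blank lines separate sentences and that each non-blank
    line holds whitespace-separated columns with the token, PoS tag
    and lemma first (the exact column layout is an assumption).
    """
    sentences, current = [], []
    for line in lines:
        if not line.strip():
            # Blank line: close the current sentence, if any.
            if current:
                sentences.append(current)
                current = []
            continue
        cols = line.split()
        current.append((cols[0], cols[1], cols[2]))
    if current:
        sentences.append(current)
    return sentences
```

    The reader takes any iterable of lines (an open file works directly), so it does not commit to a particular file encoding or path layout.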
  • C-005007: 2010 NIST Speaker Recognition Evaluation Test Set
    *Introduction*

    2010 NIST Speaker Recognition Evaluation Test Set was developed by the Linguistic Data Consortium (LDC) and NIST (National Institute of Standards and Technology). It contains 2,255 hours of American English telephone speech and speech recorded over a microphone channel involving an interview scenario used as test data in the NIST-sponsored 2010 Speaker Recognition Evaluation (SRE).

    The ongoing series of yearly SRE evaluations conducted by NIST is intended to be of interest to researchers working on the general problem of text-independent speaker recognition. To this end, the evaluations are designed to be simple, to focus on core technology issues, to be fully supported and to be accessible to those wishing to participate.

    The 2010 evaluation was similar to the 2008 evaluation in that the training and test conditions for the core test included not only conversational telephone speech (CTS) recorded over ordinary telephone channels, but also CTS and conversational interview speech recorded over a room microphone channel. Unlike in prior evaluations, some of the conversational telephone-style speech was collected in a manner designed to elicit particularly high, or particularly low, vocal effort from the speaker of interest.

    *Data*

    The speech recordings in this release were collected in 2009 and 2010 by LDC at its Human Subjects Collection facility in Philadelphia. This collection was part of the Mixer 6 project, which was designed to support the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones.

    The telephone speech segments include two-channel excerpts of approximately 10 seconds and 5 minutes. There are also summed-channel excerpts in the range of 5 minutes. The microphone excerpts are 3-15 minutes in duration. As in prior evaluations, intervals of silence were not removed. The audio in this release is 8-bit µ-law with a sample rate of 8,000 Hz.
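    Tools that cannot read 8-bit µ-law directly can expand each code to a 16-bit linear PCM sample with the standard G.711 µ-law expansion. A minimal sketch (this covers only the sample encoding, not any container format the corpus files may use):

```python
def ulaw_to_linear(code):
    """Expand one 8-bit G.711 mu-law code to a 16-bit linear PCM sample."""
    u = ~code & 0xFF                 # mu-law codes are stored bit-inverted
    sign = u & 0x80                  # top bit: sample sign
    exponent = (u >> 4) & 0x07       # 3-bit segment number
    mantissa = u & 0x0F              # 4-bit position within the segment
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample
```

    The expansion maps the 256 codes onto roughly ±32,124, the full G.711 dynamic range, with codes 0xFF and 0x7F both decoding to zero.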

    In addition to evaluation data, this package also includes answer keys, trial and train files, development data and evaluation documentation.
  • C-005008: CHiME2 Grid
    *Introduction*

    CHiME2 Grid was developed as part of The 2nd CHiME Speech Separation and Recognition Challenge and contains approximately 120 hours of English speech from a noisy living room environment. The CHiME Challenges focus on distant-microphone automatic speech recognition (ASR) in real-world environments.

    CHiME2 Grid reflects the small vocabulary track of the CHiME2 Challenge. The target utterances were taken from the Grid corpus and consist of 34 speakers reading simple 6-word sequences.

    *Data*

    Data is divided into training, development and test sets. All data is provided as 16-bit WAV files sampled at 16 kHz. The noisy utterances are provided both in isolated form and in embedded form. The latter either include five seconds of background noise before and after each utterance (training set) or are embedded in continuous five-minute background noise recordings (development and test sets). Seven hours of background noise not part of the training set are also included. The data is accompanied by one annotation file per speaker that includes additional technical information.
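    The stated WAV parameters can be checked with Python's standard wave module; a small sketch that accepts a filename or any seekable file object:

```python
import wave

def wav_params(source):
    """Return (channels, sample width in bytes, frame rate in Hz)
    for a WAV file. Per the corpus description, CHiME2 Grid files
    should report a 2-byte (16-bit) width and a 16,000 Hz rate."""
    with wave.open(source, "rb") as w:
        return w.getnchannels(), w.getsampwidth(), w.getframerate()
```

    A loader can use this to reject files whose parameters do not match the corpus description before feeding them to a recogniser.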

    Also included is a baseline Hidden Markov Model (HMM)-based speech recogniser and a scoring tool designed for the 2nd CHiME Challenge to allow users to obtain keyword recognition scores from formatted result files, perform recognition and score the challenge data, and estimate parameters of speaker dependent HMMs.
  • C-005009: NPChunks
    NPChunks is a training corpus containing approximately 1,000 sentences, with a total of 24,243 tokens, selected randomly from the written part of the CINTIL corpus. For more information on the CINTIL corpus, see ELRA-W0050, ISLRN: 176-775-844-396-0.

    The corpus is PoS-annotated at token level, including punctuation. Noun Phrases were recognized and annotated with specific tags. It was automatically PoS-tagged with the MBT tagger (http://ilk.uvt.nl/mbt/) and lemmatized with MBLEM (http://ilk.uvt.nl/mbma/), following the annotation scheme of the Corpus of Reference of Contemporary Portuguese. YamCha software (http://chasen.org/~taku/software/yamcha/) was used to recognize chunks that consist of Noun Phrases and to identify the elements appearing at the beginning, in the middle and at the end of a noun phrase.
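    Begin/middle/end chunk labels of this kind can be regrouped into noun-phrase spans with a single pass over the tagged tokens. This is a hypothetical sketch: the label names B-NP, I-NP, E-NP and O are assumptions, not the corpus's documented tagset.

```python
def extract_np_chunks(tagged):
    """Group (token, chunk_tag) pairs into noun-phrase spans.

    Assumes begin/inside/end labels "B-NP", "I-NP", "E-NP" and "O"
    for tokens outside any chunk (the label inventory is an
    assumption, not taken from the corpus documentation).
    """
    chunks, current = [], []
    for token, tag in tagged:
        if tag == "B-NP":
            # A new chunk starts; flush any unterminated one.
            if current:
                chunks.append(current)
            current = [token]
        elif tag in ("I-NP", "E-NP"):
            if current:
                current.append(token)
            if tag == "E-NP" and current:
                chunks.append(current)
                current = []
        else:  # "O": outside any noun phrase
            if current:
                chunks.append(current)
                current = []
    if current:
        chunks.append(current)
    return chunks
```

    The pass is tolerant of single-token chunks (a B-NP followed directly by an O), which a strict B/E pairing would otherwise drop.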
  • C-005010: ROMBAC - Romanian balanced corpus
    ROMBAC is a Romanian corpus containing equal shares of texts from 5 different genres: journalism, legalese, fiction, medicine and biographical data for Romanian literary personalities. For each genre, around 7,000,000 words of text were selected, so that the entire corpus comprises around 41,000,000 tokens, punctuation included.

    The corpus is annotated at paragraph, sentence, constituent group and word levels. It provides morpho-syntactic information (MSD) assigned automatically with the high-accuracy TTL tagger (at least 98% accuracy), which implements the tiered tagging methodology. About 20% of the MSDs have been manually checked, validated and, where necessary, corrected.
    MSDs follow the Multext-East specifications. For Romanian there are 614 different MSDs. They have been slightly modified (new tags for named entities have been added).
  • C-005012: ROCO Romanian journalistic corpus
    ROCO is a Romanian journalistic corpus containing approximately 7.1 million tokens, the number of types being 231,626. It is rich in proper names, numerals and named entities.

    The corpus contains morphosyntactic information (MSD annotations) assigned automatically with the high-accuracy (estimated 98%) TTL tagger, which implements the tiered tagging methodology. About 20% of the MSD annotations have been manually checked, validated and, where necessary, corrected. MSDs follow the Multext-East specifications. For Romanian there are 614 different MSDs. They have been slightly modified (new tags for named entities have been added).

    The corpus was first segmented, then PoS-annotated and lemmatized with the TTL processing chain. It is XML-encoded, and each file includes metadata (cesHeader).
  • C-005013: Arboretum treebank
    The Arboretum treebank is a morphologically and syntactically annotated repository of Danish sentences, taken from Korpus 90 and Korpus 2000, both compiled by the Society for Danish Language and Literature (http://ordnet.dk/korpusdk/fakta), and containing samples of written Danish from the 1990s and from around the year 2000, respectively. The treebank consists of about 425,000 tokens. There are ca. 22,260 sentences/utterances containing 3 or more tokens.

    In a first pass, all material was tokenized and tagged with the DanGram parser, using hand-written Constraint Grammar rules. In a second stage, the parser's dependency-grammar and constituent conversion was applied to produce full syntactic tree structures. The automatic annotation was then revised at both the morphosyntactic and structural levels, with iterative improvements made to the parser at the same time.

    Arboretum provides named entity categories for all proper nouns. It also contains subclass categorisation for the pronoun and adverb word classes, facilitating conversion to different descriptive traditions. In addition, the dependency version contains structural markers concerning coordination and clause boundaries, as well as some morphological information concerning compounding.

    The final version of the treebank consists of two independent versions, constituent trees and dependency trees, and is distributed in the following versions:
    1. Native dependency format (Constraint Grammar format)
    2. Dependency annotation converted to MALT xml format
    3. Native constituent tree format (Cross-language VISL standard)
    4. Constituent format converted to TIGER xml
  • C-005016: BOLT Chinese Discussion Forum Parallel Training Data
    *Introduction*

    BOLT Chinese Discussion Forum Parallel Training Data was developed by the Linguistic Data Consortium (LDC) and consists of 1,876,799 tokens of Chinese discussion forum data collected for the DARPA BOLT program along with their corresponding English translations.

    The BOLT (Broad Operational Language Translation) program developed machine translation and information retrieval for less formal genres, focusing particularly on user-generated content. LDC supported the BOLT program by collecting informal data sources -- discussion forums, text messaging and chat -- in Chinese, Egyptian Arabic and English. The collected data was translated and annotated for various tasks including word alignment, treebanking, propbanking and co-reference.

    *Data*

    The source data in this release consists of discussion forum threads harvested from the Internet by LDC using a combination of manual and automatic processes. The full source data collection is released as BOLT Chinese Discussion Forums (LDC2016T05). Word-aligned and tagged data is released as BOLT Chinese-English Word Alignment and Tagging - Discussion Forum Training (LDC2016T19).

    Data was manually selected for translation according to several criteria, including linguistic features and topic features. The files were then segmented into sentence units, formatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's BOLT translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

    All data are presented as UTF-8. The following table shows the data volume of this package:
    Source Language | Genre            | Files | Source Tokens | Target Tokens
    Chinese         | Discussion Forum | 1,541 | 1,876,779     | 1,557,873

    *Acknowledgement*

    This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-11-C-0145. The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
  • C-005017: BOLT Egyptian Arabic SMS/Chat and Transliteration
    *Introduction*

    BOLT Egyptian Arabic SMS/Chat and Transliteration was developed by the Linguistic Data Consortium (LDC) and consists of naturally-occurring Short Message Service (SMS) and Chat (CHT) data collected through data donations and live collection involving native speakers of Egyptian Arabic. The corpus contains 5,691 conversations totaling 1,029,248 words across 262,026 messages. Messages were natively written in either Arabic orthography or romanized Arabizi. A total of 1,856 Arabizi conversations (287,022 words) were transliterated from the original romanized Arabizi script into standard Arabic orthography.

    The BOLT (Broad Operational Language Translation) program developed machine translation and information retrieval for less formal genres, focusing particularly on user-generated content. LDC supported the BOLT program by collecting informal data sources -- discussion forums, text messaging and chat -- in Chinese, Egyptian Arabic and English. The collected data was translated and annotated for various tasks including word alignment, treebanking, propbanking and co-reference.

    *Data*

    The data in this release was collected using two methods: new collection via LDC's collection platform, and donation of SMS or chat archives from BOLT collection participants. All data collected were reviewed manually to exclude any messages/conversations that were not in the target language or that had sensitive content, such as personal identifying information (PII).

    A portion of the source conversations containing Arabizi tokens was automatically transliterated into Arabic script. LDC annotators then reviewed, corrected and normalized the transliteration according to "Conventional Orthography for Dialectal Arabic" (CODA). All data is presented in XML.

    *Acknowledgement*

    This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-11-C-0145. The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
  • C-005018: GALE English-Chinese Parallel Aligned Treebank -- Training
    *Introduction*

    GALE English-Chinese Parallel Aligned Treebank -- Training was developed by the Linguistic Data Consortium (LDC) and contains 196,123 tokens of word aligned English and Chinese parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

    Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures aligned at the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance with enhanced syntactic parsers, better rules and knowledge about language pairs and reduced word error rate.

    The English source data was translated into Chinese. Chinese and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this release corresponds to portions of the treebanked data in OntoNotes 3.0 (LDC2009T24) and OntoNotes 4.0 (LDC2011T03).

    *Data*

    This release consists of English source broadcast programming (CNN, NBC/MSNBC) and web data collected by LDC in 2005 and 2006. The distribution by genre, words, character tokens, treebank tokens and segments appears below:
    Genre | Files | Words   | CharTokens | CTBTokens | Segments
    bc    |     6 |  60,061 |     90,092 |    62,438 |    3,763
    wb    |    15 |  70,687 |    106,031 |    69,309 |    3,238
    Total |    21 | 130,748 |    196,123 |   131,747 |    7,001

    Note that all token counts are based on the Chinese data only. One token is equivalent to one character and one word is equivalent to 1.5 characters.

    The word alignment task consisted of the following components:

    Identifying, aligning, and tagging eight different types of links
    Identifying, attaching, and tagging local-level unmatched words
    Identifying and tagging sentence/discourse-level unmatched words
    Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link

    This release contains nine types of files - English raw source files, Chinese raw translation files, Chinese character tokenized files, Chinese CTB tokenized files, English tokenized files, Chinese treebank files, English treebank files, character-based word alignment files, and CTB-based word alignment files.