Add project report

Oystein Kristoffer Tveit 2024-04-26 00:49:01 +02:00
parent e44148aa03
commit 4e030a510b
Signed by: oysteikt
GPG Key ID: 9F2F7D8250F35146
23 changed files with 696 additions and 0 deletions


@@ -0,0 +1,42 @@
import math
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os

# Assumes 0 <= x <= max_x
def sigmoid(x, slope=0.1, offset=0, max_x=1, flip=False) -> float:
    assert x <= max_x
    x = x - (max_x / 2) - (offset * max_x / 2)
    s = 1 / (1 + math.exp(-x / slope))
    return max_x - s if flip else s

curve_dir = os.path.dirname(__file__)

# Plot each word difficulty curve over [0, 1] and save it next to this script.
for name, f in [
    ('common', lambda x: sigmoid(x, slope=0.05, offset=-0.6, flip=True)),
    ('dialect', lambda x: sigmoid(x, slope=0.08, offset=-0.2)),
    ('kanji', lambda x: x ** 5),
    ('katakana', lambda x: 0 if x > 0.5 else 1),
    ('nhk', lambda x: sigmoid(x, slope=0.03, offset=-0.6, flip=True)),
    ('wordsum', lambda x: x),
]:
    plt.rc('font', size=33)
    plt.xlim(-0.05, 1.05)
    plt.ylim(-0.05, 1.05)
    plt.locator_params(nbins=2)
    space = np.linspace(0, 1, 1000)
    p = [f(n) for n in space]
    plt.plot(space, p, linewidth=5)
    plt.savefig(f"{curve_dir}/{name}.png")
    plt.clf()

# The sentence length curve spans 0-24 words.
plt.rc('font', size=33)
plt.xlim(-0.05, 24.05)
plt.ylim(-0.05, 1.05)
plt.locator_params(nbins=3)
space = np.linspace(0, 24, 1000)
p = [sigmoid(n, slope=1.4, max_x=24) for n in space]
plt.plot(space, p, linewidth=5)
plt.savefig(f"{curve_dir}/sentence_length.png")


project_report/main.tex (Normal file, 244 lines)

@@ -0,0 +1,244 @@
\documentclass[a4paper, 12pt]{article}
\usepackage{ntnu-report}
\usepackage{amsmath}
\usepackage{xeCJK}
\setCJKmainfont{Noto Sans CJK JP}
\usepackage{booktabs}
\usepackage{array}
\usepackage{ruby}
\date{April 2023}
\title{TDT4130 - Text Analysis Project}
\addbibresource{references.bib}
\begin{document}
\include{./titlepage.tex}
\newpage
This article aims to explore the use of natural language processing to order Japanese sentences by their linguistic complexity.
In this paper, we provide an overview of the Japanese language and related work in the field, followed by a description of the architecture of our system.
We detail the datasets used, the methodology employed, and the evaluation of our system's performance.
\section{Introduction}
The problem we address in this article arose while developing a mobile dictionary app called Jisho-Study-Tool \citep{jst}. We faced a challenge when we needed to link example sentences to words in the dictionary and order those sentences by difficulty. To overcome this challenge, we have utilized techniques and algorithms from natural language processing. In this article, we present our approach to solving this problem.
\section{Background}
\subsection{Japanese Language}
Japanese is a language that is very different from English. It employs three writing systems: hiragana, katakana, and kanji. Hiragana and katakana are two scripts that represent the same set of syllables, while kanji is a logographic system heavily influenced by Chinese characters. Hiragana is generally used for native Japanese words, grammatical particles, and verb endings, whereas katakana is used for loanwords from foreign languages, technical terms, and onomatopoeia. The common term for these two scripts is \textit{kana}. Kanji, on the other hand, tends to be used for words of Chinese origin such as nouns, adjectives, and verbs. Although each word typically has a canonical way of being written, the language permits alternative uses of the writing systems, sometimes for practical purposes and sometimes for certain nuances or exceptional cases.
Kanji can have multiple pronunciations, which are usually classified into onyomi and kunyomi. While the difference between these is irrelevant for this article, the fact that a single word can have several pronunciations brings some challenges. Additionally, the language has many homonyms, which are disambiguated through context when speaking and by using kanji when writing. These homonyms present both advantages and disadvantages for this project. On one hand, they make disambiguation harder, since many words share the same pronunciation. On the other hand, they provide some dimensions lacking from English that we can use to disambiguate words. For example, some datasets include kana written above the kanji, called \textit{furigana}, which aid in reading the kanji. We can use this in combination with a dictionary to further narrow down which sense of a word is being used.
\subsection{Word sense disambiguation}
Word sense disambiguation refers to the process of determining which specific meaning or usage of a word is being employed in a given context. This provides important semantic information that is useful in various natural language processing applications. In our case, it helps us gather statistics on the frequency of different word senses and identify common words. There are several algorithms for word sense disambiguation, but in this article, we will utilize a more traditional approach.
\subsection{TF-IDF}
Term frequency-inverse document frequency, commonly known as TF-IDF, is a popular text vectorization technique that converts raw text into a usable vector. This method combines two concepts, term frequency (TF) and document frequency (DF), to produce a comprehensive representation of the text data.
Term frequency refers to the number of times a specific term appears in a document, which helps to determine the importance of that term within the document. By considering the term frequency of every word in a corpus, we can represent the text data as a matrix with one row per document and one column per distinct term found across all documents. Document frequency, on the other hand, measures how many documents contain a specific term, providing insight into how common a particular word is across the entire corpus. Finally, the inverse document frequency (IDF) is a weight assigned to each term, which aims to reduce the importance of a term if its occurrences are spread across all documents. IDF is calculated from the total number of documents and the number of documents containing a particular term.
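To make the terms concrete, let $N$ be the total number of documents and $\mathrm{df}(t)$ the number of documents containing term $t$. One common, illustrative formulation of the weights described above is then:
\begin{align*}
\mathrm{tf}(t, d) &= \frac{\text{Occurrences of } t \text{ in } d}{\text{Number of terms in } d} \\[2ex]
\mathrm{idf}(t) &= \log \frac{N}{\mathrm{df}(t)} \\[2ex]
\text{TF-IDF}(t, d) &= \mathrm{tf}(t, d) \cdot \mathrm{idf}(t)
\end{align*}
A term that occurs in every document gets $\mathrm{idf}(t) = \log 1 = 0$, so its weight vanishes no matter how frequent it is.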
\section{Related work}
\subsection{Automatic Text Difficulty Classifier}
This article describes a system designed to assess the complexity of Portuguese texts, intended to provide language learners with texts that correspond to their skill level. To accomplish this, the system extracts 52 features grouped into seven categories: parts-of-speech (POS), syllables, words, chunks and phrases, averages and frequencies, and additional features. The system combines these features to calculate a value that represents the text's level of difficulty. This approach of combining several features of different kinds is similar to the one we take in this project. \citep{portuguese}
\subsection{Jisho.org}
Jisho is an online Japanese-English dictionary that offers a wide range of features for searching words, kanji, and example sentences. To accomplish this, Jisho integrates various data sources, including the Japanese-Multilingual Dictionary (JMDict) and the Tanaka Corpus, both of which are explained further below. One of the useful features of Jisho is its ability to provide example sentences that illustrate how a word is used in context. To achieve this, Jisho employs a data aggregation approach similar, although not identical, to ours. Although the source of their product is closed, some of the tools used in the process are publicly available. During the development of this project, their kana-romaji transliterator \citep{ve} has proven to be a valuable tool. Unfortunately, Jisho usually only provides one or two sentences per sense, if any, so it is not as useful as a point of comparison.
\subsection{Surrounding Word Sense Model for Japanese All-words Word Sense Disambiguation}
This paper proposes a surrounding word sense model (SWSM) that uses the distribution of word senses appearing near ambiguous words for unsupervised all-words word sense disambiguation in Japanese. It is based on the idea that words with the same senses will often appear with the same surrounding words. By utilizing dictionary data in addition to WORDNET-WALK, the authors have created an engine that is more accurate than existing supervised models. This could be used in combination with this project to make it more accurate in the future. \citep{swsm}
\section{Architecture}
\subsection{Datasets}
\subsubsection{JMDict}
JMDict is a publicly available Japanese-multilingual dictionary developed by Jim Breen and his associates at the Electronic Dictionary Research and Development Group (EDRDG). The dictionary contains various types of information such as kanji, readings, word senses, and more. It also includes rarer information, such as newspaper frequency indices for the different word senses and the origins of loanwords. This resource is valuable to us since it provides a predetermined word list that we can use to link our example sentences. Additionally, JMDict can be used as a query tool to examine relationships between words and senses. \citep{jmdict}
\subsubsection{Tanaka corpus}
The Tanaka Corpus is a compendium of Japanese sentences, most of which come with an English translation. It was compiled by Yasuhito Tanaka, a professor at Hyogo University. Originally, the corpus was created by assigning his students the task of collecting 300 sentence pairs each, and after several years, 212,000 sentence pairs had been collected. In 2002, the EDRDG started working on creating links from the sentences to entries in JMDict, and in 2006, maintainership of the corpus was incorporated into the Tatoeba project. The current version of the corpus released by the EDRDG comes preprocessed with lemmatizations, furigana, and other supplementary data.
\citep{tanaka-corpus}
\subsubsection{NHK Easy News}
JMDict contains a wealth of information on the frequency of words in the Japanese language. However, some of these statistics are derived from Japanese newspapers, which are renowned for being challenging even for advanced learners.
Fortunately, Japan's public broadcaster, NHK, publishes a simplified news service designed for learners. This is a valuable resource, since we can be fairly certain that every word in this corpus is suitable for learners. Therefore, we use this corpus to construct a new index that we can use to determine whether a word is suitable for learners.
\subsection{Methodology}
\subsubsection{Data ingestion}
The first task was to ingest and preprocess the data from the different sources.
For this, we chose to use an SQL database, because it provides us with an easy way of storing temporary results and quickly retrieving entries through complex queries.
By reading the document type definition \citep{xmldtd} of the JMDict XML file, we were able to construct most of the database schema. Some parts of the schema were never used, so a small amount of data is lost in this process.
NHK News publishes an official index of the articles from the past year at \url{http://www3.nhk.or.jp/news/easy/news-list.json}. We used this index to download the articles and then scraped them for content with an HTML parser. The results were also stored in the SQL database.
The sentences from the Tanaka Corpus were ingested in a similar manner.
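As a minimal sketch of this step, the storage can be set up roughly as follows. The use of SQLite and the table layout shown here are illustrative simplifications, not the full schema derived from the DTD:
\begin{verbatim}
import sqlite3

conn = sqlite3.connect("ingest.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS entry (
    id      INTEGER PRIMARY KEY,  -- JMDict sequence number
    kanji   TEXT,                 -- canonical written form, if any
    reading TEXT NOT NULL         -- kana reading
);
CREATE TABLE IF NOT EXISTS sentence (
    id     INTEGER PRIMARY KEY,
    source TEXT NOT NULL,         -- 'tanaka' or 'nhk'
    text   TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS sentence_entry (
    sentence_id INTEGER REFERENCES sentence(id),
    entry_id    INTEGER REFERENCES entry(id)
);
""")
conn.commit()
\end{verbatim}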
\subsubsection{Word Sense Disambiguation}
Both corpora contain elements that can facilitate the disambiguation process.
The Tatoeba sentences are already partially annotated with lemmatizations, furigana (which give the reading of the kanji), and at times even the JMDict identifier. However, this only disambiguates a word down to a specific entry. Here, we could have used SWSM in an attempt to further disambiguate the word to one of the senses listed in the dictionary.
The NHK Easy News corpus does not have these kinds of annotations. To solve this problem, we use a combination of the furigana from the corpus, MeCab to analyze the words and obtain POS tags, and a prioritized list of ways to search for the correct meaning of the word. We created a mapping from the MeCab part-of-speech tags to the JMDict tags. The first entry that fits, judged by its existing commonality data and by being the most likely match, is chosen. If no matches are found, the word is not added to the list of connected entries.
This approach has a possible limitation in that it could make some frequently used words appear even more frequent than they actually are. As a result, some words that are commonly used, but not as commonly as similar counterparts, could be wrongly classified as very rare because they do not appear to occur in the NHK corpus.
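The prioritized matching can be sketched roughly as follows. The \texttt{lookup\_candidates} function and the small POS mapping are hypothetical stand-ins for the actual database queries and the full MeCab-to-JMDict tag table:
\begin{verbatim}
# Illustrative sketch of the prioritized matching. lookup_candidates()
# stands in for a JMDict database query and is assumed to return dicts
# with 'reading', 'pos_tags' and 'commonness' fields.

MECAB_TO_JMDICT_POS = {   # simplified excerpt of the tag mapping
    "名詞": "n",          # noun
    "動詞": "v",          # verb
    "形容詞": "adj-i",    # i-adjective
}

def disambiguate(surface, furigana, mecab_pos, lookup_candidates):
    jmdict_pos = MECAB_TO_JMDICT_POS.get(mecab_pos)
    candidates = [
        c for c in lookup_candidates(surface)
        # The furigana from the corpus must match the entry's reading,
        # and the mapped POS tag must be listed on the entry.
        if c["reading"] == furigana
        and (jmdict_pos is None or jmdict_pos in c["pos_tags"])
    ]
    if not candidates:
        return None  # no match: the word is left unlinked
    # Prefer the candidate with the most commonality data attached.
    return max(candidates, key=lambda c: c["commonness"])
\end{verbatim}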
\subsubsection{TF-IDF}
TF-IDF is often used as a tool to estimate how meaningful a word is to one document in a corpus.
However, here we want the opposite measure. We are not looking for the words that give a document most of its meaning, but rather for the words that are common across several documents.
If a token has a high frequency in only one of the documents, then there is a high chance that the word is specific to the domain of that document.
Because of this, we change the formula to give us the averaged term frequency multiplied by the document frequency.
\begin{align*}
AVG(TF) &= \frac{AVG \left(\text{Occurrences of term in document} \right) }{\text{Number of terms in document}} \\[2ex]
DF &= \frac{\text{Count of documents where term exists}}{\text{Document count}} \\[2ex]
\text{TF-DF} &= AVG(TF) \cdot DF \\
\end{align*}
We then went over the NHK Easy News corpus and collected these ``TF-DF'' values. They were then normalized into $[0, 1]$.
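A minimal sketch of this computation, over a corpus represented as a list of token lists (standing in for the parsed NHK articles), could look as follows:
\begin{verbatim}
from collections import Counter

# documents: a list of token lists, e.g. the tokenized NHK articles.
def tf_df(documents):
    n_docs = len(documents)
    sum_tf = Counter()  # summed per-document term frequencies
    df = Counter()      # number of documents each term occurs in
    for tokens in documents:
        counts = Counter(tokens)
        for term, count in counts.items():
            sum_tf[term] += count / len(tokens)
            df[term] += 1
    # Averaged term frequency times document frequency, as in the formula.
    scores = {t: (sum_tf[t] / n_docs) * (df[t] / n_docs) for t in df}
    top = max(scores.values())
    return {t: s / top for t, s in scores.items()}  # normalize into [0, 1]
\end{verbatim}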
\subsubsection{Determining word and sentence difficulty}
At this point, there are many potential factors available to work with. To order sentences properly, we need to determine how hard the words and sentences are to understand by aggregating some of these factors. We have picked a few factors that we believe are useful for determining the difficulty values, but the chosen curves and weights are based only on trial and error and educated guesses.
Figure \ref{fig:wordfactors} shows how the different factors contribute to a word's difficulty.
The sentence factors are listed in Figure \ref{fig:sentencefactors}. A small sketch of the resulting aggregation is shown after the figures.
\newcommand{\curveDiagramWidth}{0.15\linewidth}
\begin{figure}[H]
\begin{tabular}{ m{3cm} m{1cm} l m{7cm} }
\toprule
Factor & \% & Curve & Notes and reasoning \\
\midrule
$\frac{\sum \text{difficulty}(\text{word})}{\text{length of sentence}}$
& 50\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/wordsum.png}
& This is the aggregated value based on the calculation in Figure \ref{fig:wordfactors}. As the values should be decently curved already, they are left unaffected. We also believe that this should have a lot more effect on the sentence than the other two factors. \\
\midrule
$\text{max}(\text{difficulty}(\text{word}))$
& 20\%
&
& The hardest word in the sentence can be the word that makes the whole sentence useless for a learner. Because of that, we make the hardest word in the sentence its own factor. \\
\midrule
Sentence length
& 30\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/sentence_length.png}
& Until a sentence reaches around 12 words, it should be regarded as quite easy. But once it surpasses that, it becomes more difficult. \\
\bottomrule
\end{tabular}
\caption{Contributing factors to a sentence's difficulty}
\label{fig:sentencefactors}
\end{figure}
\begin{figure}[H]
\begin{tabular}{ m{3cm} m{1cm} c m{7cm} }
\toprule
Factor & \% & Curve & Notes and reasoning \\
\midrule
Common ratings
& 25\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/common.png}
& The different existing ratings of the word are summed together and linearly scaled into $[0, 1]$. If the entry is included in more than one or two indices, it can be assumed that it is quite a common word, and it should be marked as very easy. \\
\midrule
Dialects
& 10\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/dialect.png}
& This is the sum of all readings which are marked as dialectal. If a word has more than roughly 30\% dialect readings, we assume that it is a very dialect-specific word. This should increase its difficulty. \\
\midrule
Most difficult kanji
& 25\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/kanji.png}
& The input here is the elementary school grade in which the kanji is taught, where grade 7 represents the rest of the \ruby{常用}{jouyou}\ kanji \citep{jouyou}, and grade 8 represents everything else. Usually, grades 1--6 mean that the word is easy, grade 7 is intermediate to difficult, and grade 8 is extremely difficult. There is an edge case where a word has a set of really difficult kanji which are usually not used; these come pre-tagged as such and are removed from the calculation. \\
\midrule
Katakana word
& 15\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/katakana.png}
& If a word only contains katakana, there is a good chance that it is a loanword from English. This is usually a clear-cut case, but some words have alternative kanji spellings that are rarely used. Examples include \href{https://jisho.org/word/\%E9\%A0\%81}{\ruby{頁}{ページ} (page)} and \href{https://jisho.org/word/\%E3\%82\%B3\%E3\%83\%BC\%E3\%83\%92\%E3\%83\%BC}{\ruby{珈琲}{コーヒー} (coffee)}. Because of this, we use a hard cutoff at 50\% for how many of the readings are katakana only. \\
\midrule
NHK Easy News Frequency Rating
& 25\%
& \includegraphics[width=\curveDiagramWidth]{graphics/curves/nhk.png}
& In order to get rid of the words that are document specific, we make the S-curve mark the lower valued words as difficult, but quickly reduce the difficulty for words that appear more often. \\
\bottomrule
\end{tabular}
\caption{Contributing factors to a word's difficulty}
\label{fig:wordfactors}
\end{figure}
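As a rough sketch, the aggregation amounts to a weighted sum of the curved factors. The \texttt{curves} argument stands for the curve functions plotted above, and the weights are the percentages from the two tables:
\begin{verbatim}
# Illustrative aggregation of the factors above. Each curve maps a raw
# factor value to a difficulty contribution in [0, 1].

def word_difficulty(common, dialect, kanji_grade, katakana_ratio,
                    nhk_freq, curves):
    return (0.25 * curves["common"](common)
            + 0.10 * curves["dialect"](dialect)
            + 0.25 * curves["kanji"](kanji_grade / 8)  # grades 1-8 scaled
            + 0.15 * curves["katakana"](katakana_ratio)
            + 0.25 * curves["nhk"](nhk_freq))

def sentence_difficulty(word_difficulties, length_curve):
    avg = sum(word_difficulties) / len(word_difficulties)
    return (0.50 * avg
            + 0.20 * max(word_difficulties)
            + 0.30 * length_curve(len(word_difficulties)))
\end{verbatim}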
\newpage
\section{Evaluation and conclusion}
\subsection{Evaluation}
Although we were unable to measure the accuracy of the results quantitatively, the first impression was quite good.
Here is an example of the sentences connected to the word テスト (test):
\begin{figure}[H]
\begin{minipage}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/examples/test1.png}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/examples/test2.png}
\end{minipage}
\caption{Example sentences for the word ``test'', with the easiest and hardest difficulty levels}
\end{figure}
From this example, the system seems to work quite well, with one big exception: the particles. These are small suffixes which only exist to indicate the grammatical role of the word before them. Although they are probably among the most common elements in the Japanese language, they have been marked as very difficult. Apart from that, the easier words have been marked green, while the harder ones have been given an orange color.
Here is another example for \ruby{本}{ほん} (book), which has several senses. We have turned on debug information to see the contributing factors.
\begin{figure}[H]
\begin{minipage}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/examples/book1.png}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\includegraphics[width=\linewidth]{graphics/examples/book2.png}
\end{minipage}
\caption{Example sentences for the word ``book'', with the easiest and hardest difficulty levels}
\end{figure}
Here we can see the internal details of why the particles have been rated as so difficult. They appear to be marked as the most difficult on the kanji scale. This is a bug, since the kanji factor is supposed to filter out anything that is not a kanji. Unfortunately, although we spent quite a lot of time trying to fix this, we could not figure out why it behaves as it does before the project deadline.
\subsection{Conclusion}
While the system performs well on our random samples, there are still some issues to be researched further, and some bugs left to be fixed.
There are many other factors that we haven't explored yet which could be useful. For example, many sentences in the Tatoeba Corpus are already labeled with tags, some of which could indicate whether a sentence is difficult or not. We could also look to the automatic text difficulty classifier project for additional ideas on which factors to consider.
We also think more research is necessary to establish the correct weighting for different factors and which curves should be used. This requires examining which factors of a word are the most important for determining its level of difficulty. This is crucial for ensuring that the sorting system works correctly. Additionally, we need to investigate how to handle sentences with unfamiliar words to ensure they are sorted in a reasonable way.
\nocite{*}
\customphantomsection{Bibliography}
\printbibliography{}
\end{document}


@@ -0,0 +1,196 @@
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{ntnu-report}[]
\RequirePackage[dvipsnames]{xcolor}
%%%%%%%%%%%%%%%%%%%%%%%
% IMAGES AND GRAPHICS %
%%%%%%%%%%%%%%%%%%%%%%%
\RequirePackage{graphicx} % Handle images
\RequirePackage{wrapfig} % Wrap text around images
\RequirePackage{float} % Force image location using "H"
%%%%%%%%%%%%%%%%%%
% LANGUAGE/BABEL %
%%%%%%%%%%%%%%%%%%
\RequirePackage[utf8]{inputenc}
\renewcommand*\contentsname{Table of Contents} % Rename table of contents
\renewcommand{\listfigurename}{List of Figures} % Rename list of figures
\renewcommand{\listtablename}{List of Tables} % Rename list of tables
%%%%%%%%%%%%
% HYPERREF %
%%%%%%%%%%%%
\RequirePackage{hyperref} % Hyper-references, possible to change color
\hypersetup{ % Color of hyper-references
colorlinks,
citecolor=black,
filecolor=blue,
linkcolor=black,
urlcolor=blue
}
\RequirePackage{refcount}
\RequirePackage{url}
\RequirePackage{caption}
\RequirePackage{subcaption}
\RequirePackage[nottoc]{tocbibind} % Includes Bibliography, Index, list of Listing etc. to table of contents
\newcommand{\source}[1]{\vspace{-4pt} \caption*{\hfill \footnotesize{Source: {#1}} } } % Easily insert sources in images
\def\equationautorefname{Equation} % Autoref-name for equations
\def\figureautorefname{Figure} % Autoref-name for figures
\def\tableautorefname{Table} % Autoref-name for tables
\def\subsectionautorefname{\sectionautorefname} % Autoref-name for subsections
\def\subsubsectionautorefname{\sectionautorefname} % Autoref-name for subsubsections
%%%%%%%%%%%
% LENGTHS %
%%%%%%%%%%%
\RequirePackage[a4paper, total={150mm, 245mm}, footskip=14mm]{geometry}
\RequirePackage{parskip}
\setlength{\parskip}{5mm}
\setlength{\parindent}{0mm}
\renewcommand{\baselinestretch}{1.5}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% TITLE, HEADER/FOOTER AND FRONTPAGE %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\RequirePackage{titling}
\RequirePackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\rhead{TDT4130 - Text Analysis Project}
\lhead{\thetitle}
\rfoot{Page \thepage}
\fancypagestyle{frontpage}{
\fancyhf{}
\rhead{}
\lhead{}
\rfoot{}
\renewcommand{\headrulewidth}{0pt}
}
%%%%%%%%%%%%%%%%%%%%
% SUBSUBSUBSECTION %
%%%%%%%%%%%%%%%%%%%%
% https://tex.stackexchange.com/questions/60209/how-to-add-an-extra-level-of-sections-with-headings-below-subsubsection
\RequirePackage{titlesec}
\titleclass{\subsubsubsection}{straight}[\subsection]
\newcounter{subsubsubsection}[subsubsection]
\renewcommand\thesubsubsubsection{\thesubsubsection.\arabic{subsubsubsection}}
\renewcommand\theparagraph{\thesubsubsubsection.\arabic{paragraph}} % optional; useful if paragraphs are to be numbered
\titleformat{\subsubsubsection}
{\normalfont\normalsize\bfseries}{\thesubsubsubsection}{1em}{}
\titlespacing*{\subsubsubsection}
{0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex}
\makeatletter
\renewcommand\paragraph{\@startsection{paragraph}{5}{\z@}%
{3.25ex \@plus1ex \@minus.2ex}%
{-1em}%
{\normalfont\normalsize\bfseries}}
\renewcommand\subparagraph{\@startsection{subparagraph}{6}{\parindent}%
{3.25ex \@plus1ex \@minus .2ex}%
{-1em}%
{\normalfont\normalsize\bfseries}}
\def\toclevel@subsubsubsection{4}
\def\toclevel@paragraph{5}
\def\toclevel@subparagraph{6}
\def\l@subsubsubsection{\@dottedtocline{4}{7em}{4em}}
\def\l@paragraph{\@dottedtocline{5}{10em}{5em}}
\def\l@subparagraph{\@dottedtocline{6}{14em}{6em}}
\makeatother
\setcounter{secnumdepth}{4}
\setcounter{tocdepth}{4}
%%%%%%%%%%%%%%%%
% BIBLIOGRAPHY %
%%%%%%%%%%%%%%%%
\RequirePackage{csquotes}
\RequirePackage[
style=bath,
natbib=true,
backend=biber,
isbn=true,
url=true,
doi=true,
]{biblatex}
%\bibliographystyle{agsm}
% Removes date from online entries
\DeclareLabeldate[online]{%
\field{date}
\field{year}
\field{eventdate}
\field{origdate}
\field{urldate}
}
%\newcommand{\urlcite}[2]{\textit{#1} \citep{#2}}
\newcommand{\urlcite}[2]{\hyperlink{#2}{#1}}
% \newcommand{\appendixref}[2]{}
%%%%%%%%%%%%%%%%%%%%%%%%
% MISC CUSTOM COMMANDS %
%%%%%%%%%%%%%%%%%%%%%%%%
% \newcommand{\todo}[1]{}
\newcommand{\todo}[1]{ {\color{red}[TODO: #1]} }
\newcommand{\todocite}[1]{ {\color{Green}[CITE: #1]} }
\newcommand{\pageno}{ {\color{DarkOrchid}s.X} }
\newcommand{\?}{ {\color{red}(?)} }
% Needed to add appendices and stuff
\newcommand{\customphantomsection}[1]{
\cleardoublepage
\phantomsection
\setcounter{subsection}{0}
\setcounter{subsubsection}{0}
\addtocounter{section}{1}
\addcontentsline{toc}{section}{\protect\numberline{\thesection} #1}
}
\newcommand{\customphantomsubsection}[1]{
\cleardoublepage
\phantomsection
\addtocounter{subsection}{1}
\setcounter{subsubsection}{0}
\addcontentsline{toc}{subsection}{\protect\numberline{\thesubsection} #1}
}
%%%%%%%%%%
% TABLES %
%%%%%%%%%%
\RequirePackage{longtable}
\RequirePackage{booktabs}
\RequirePackage{tabu}
%%%%%%%%%%%%%%%%%
% MISCELLANEOUS %
%%%%%%%%%%%%%%%%%
\RequirePackage{import}
\RequirePackage{enumitem}
\RequirePackage{tikz}
\RequirePackage{csvsimple}
\RequirePackage{subcaption}
\RequirePackage[export]{adjustbox}
\RequirePackage[final]{pdfpages}


@@ -0,0 +1,90 @@
@online{jouyou,
title={常用漢字表の音訓索引},
author={{Agency for Cultural Affairs, Government of Japan}},
url={https://www.bunka.go.jp/kokugo_nihongo/sisaku/joho/joho/kijun/naikaku/kanji/joyokanjisakuin/index.html},
urldate={2023-04-17}
}
@online{jst,
title={Jisho Study Tool},
author={h7x4},
url={https://github.com/h7x4/Jisho-Study-Tool},
urldate={2023-04-15}
}
@inproceedings{jmdict,
title={JMdict: a Japanese-Multilingual Dictionary},
author={Jim Breen},
year={2004},
url={https://www.edrdg.org/jmdict/jmdictart.html}
}
@inproceedings{tanaka-corpus,
title={Compilation of a multilingual parallel corpus},
author={Yasuhito Tanaka},
year={2001},
url={https://www.edrdg.org/projects/tanaka/tanaka.pdf}
}
@inproceedings{portuguese,
author = {Curto, Pedro and Mamede, Nuno and Baptista, Jorge},
year = {2015},
month = {01},
pages = {36--44},
title = {Automatic Text Difficulty Classifier - Assisting the Selection Of Adequate Reading Materials For European Portuguese Teaching},
doi = {10.5220/0005428300360044}
}
@online{ve,
url={https://github.com/Kimtaro/ve/blob/master/lib/providers/japanese_transliterators.rb},
title={japanese\_transliterators.rb},
author={Kim Ahlstrom},
urldate={2023-04-19}
}
@online{xmldtd,
title = {Prolog and Document Type Declaration},
author = {{World Wide Web Consortium}},
url = {https://www.w3.org/TR/xml11/#sec-prolog-dtd},
urldate = {2023-04-22}
}
@article{swsm,
author={Komiya, Kanako and Sasaki, Yuto and Morita, Hajime and Sasaki, Minoru and Shinnou, Hiroyuki and Kotani, Yoshiyuki},
title={Surrounding Word Sense Model for Japanese All-words Word Sense Disambiguation},
journal={Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation},
year={2015},
pages={35--43},
url={https://cir.nii.ac.jp/crid/1050282677488198784}
}
@book{jurafsky-23,
author = {Jurafsky, Daniel and Martin, James H.},
title = {Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition},
year = {2000},
isbn = {0130950696},
publisher = {Prentice Hall PTR},
address = {USA},
edition = {1st},
abstract = {From the Publisher:This book takes an empirical approach to language processing, based on applying statistical and other machine-learning algorithms to large corpora. Methodology boxes are included in each chapter. Each chapter is built around one or more worked examples to demonstrate the main idea of the chapter. Covers the fundamental algorithms of various fields, whether originally proposed for spoken or written language to demonstrate how the same algorithm can be used for speech recognition and word-sense disambiguation. Emphasis on web and other practical applications. Emphasis on scientific evaluation. Useful as a reference for professionals in any of the areas of speech and language processing.}
}
@inproceedings{mccann-2020-fugashi,
title = {fugashi, a Tool for Tokenizing {J}apanese in Python},
author = {McCann, Paul},
booktitle = {Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)},
month = nov,
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/2020.nlposs-1.7},
pages = {44--51},
abstract = {Recent years have seen an increase in the number of large-scale multilingual NLP projects. However, even in such projects, languages with special processing requirements are often excluded. One such language is Japanese. Japanese is written without spaces, tokenization is non-trivial, and while high quality open source tokenizers exist they can be hard to use and lack English documentation. This paper introduces fugashi, a MeCab wrapper for Python, and gives an introduction to tokenizing Japanese.},
}
@online{jisho,
url={https://jisho.org/about},
urldate={2023-04-20},
title={Jisho.org},
author={Kim Ahlstrom}
}


@@ -0,0 +1,124 @@
\definecolor{ntnublue}{RGB}{0,80,158}
\newcommand\titlepagedecoration{%
\begin{tikzpicture}[remember picture,overlay,shorten >= -10pt]
\coordinate (aux1) at ([yshift=-15pt]current page.north east);
\coordinate (aux2) at ([yshift=-410pt]current page.north east);
\coordinate (aux3) at ([xshift=-4.5cm]current page.north east);
\coordinate (aux4) at ([yshift=-150pt]current page.north east);
\begin{scope}[ntnublue!50,line width=12pt,rounded corners=12pt]
\draw
(aux1) -- coordinate (a)
++(225:5) --
++(-45:5.1) coordinate (b);
\draw[shorten <= -10pt]
(aux3) --
(a) --
(aux1);
\draw[opacity=0.6,ntnublue!80,shorten <= -10pt]
(b) --
++(225:2.2) --
++(-45:2.2);
\end{scope}
\draw[ntnublue!90,line width=8pt,rounded corners=8pt,shorten <= -10pt]
(aux4) --
++(225:0.8) --
++(-45:0.8);
\begin{scope}[ntnublue!70,line width=6pt,rounded corners=8pt]
\draw[shorten <= -10pt]
(aux2) --
++(225:3) coordinate[pos=0.45] (c) --
++(-45:3.1);
\draw
(aux2) --
(c) --
++(135:2.5) --
++(45:2.5) --
++(-45:2.5) coordinate[pos=0.3] (d);
\draw
(d) -- +(45:1);
\end{scope}
\end{tikzpicture}%
}
\begin{titlepage}
\newgeometry{margin=0.7in, top=1in, left=1in}
\pagecolor{ntnublue}
\color{white}
\resizebox{0.85\linewidth}{!}{ \Huge Ordering Japanese sentences by difficulty } \\
\makebox[0pt][l]{\rule{1.3\textwidth}{1pt}}
\vspace*{4mm}
\makeatletter
\textsc{\Huge \@title }
\makeatother
\begin{center}
\vfill{}
\vspace*{1cm}
{\huge \textbf{oysteikt}}
\vspace*{1cm}
% {\Large \textbf{Word count: \wordcount}}
%
% \vspace*{2cm}
% \includegraphics[width=0.5\linewidth]{ntnu_uten_slagord_hvit}
\includegraphics[width=0.5\linewidth]{graphics/ntnu_uten_slagord_hvit.pdf}
\end{center}
% {\hfill{} \includepdf[width=0.5\linewidth]{ntnu_uten_slagord_hvit.pdf} }
\titlepagedecoration{}
% \noindent
% \includegraphics[width=2cm]{wikilogo.png}\\[-1em]
% \par
% \noindent
% \textbf{\textsf{UniversitätsKlinikum}} \textcolor{namecolor}{\textsf{Heidelberg}}
% \vfill
% \noindent
% {\huge \textsf{Handbuch 1.3}}
% \vskip\baselineskip
% \noindent
% \textsf{August 2008}
\end{titlepage}
\restoregeometry
\nopagecolor% Use this to restore the color pages to white
% ----------------------------------------------------------------
%\begin{centering}
%
% \ntnuTitle
%
% \vspace{3em}
%
% \vspace{1em}
%
%\end{centering}
%
%\vspace{5mm}
%
%\vspace{1.8cm}
\thispagestyle{frontpage}
\newpage{}
\tableofcontents
\newpage{}