73 Commits

Author SHA1 Message Date
e67d906b3c WIP: add kanjivg data
All checks were successful
Build and test / evals (push) Successful in 37m9s
2026-03-03 13:47:59 +09:00
0f7854a4fc migrations: add version tables for all data sources
All checks were successful
Build and test / evals (push) Successful in 11m34s
2026-03-03 12:59:58 +09:00
a86f857553 util/romaji_transliteration: add functions to generate transliteration spans
All checks were successful
Build and test / evals (push) Successful in 18m58s
2026-03-02 18:23:36 +09:00
d14e3909d4 search/filter_kanji: keep order when deduplicating
All checks were successful
Build and test / evals (push) Successful in 13m33s
2026-03-02 17:37:45 +09:00
bb44bf786a tests: move const_data tests to test/const_data
All checks were successful
Build and test / evals (push) Successful in 11m38s
2026-03-02 17:16:14 +09:00
ad3343a01e README: add link to coverage
All checks were successful
Build and test / evals (push) Successful in 13m25s
2026-03-02 15:02:36 +09:00
16d72e94ba WIP: .gitea/workflows: generate coverage
All checks were successful
Build and test / evals (push) Successful in 13m17s
2026-03-02 14:34:08 +09:00
b070a1fd31 .gitea/workflows: merge build and test pipeline 2026-03-02 14:31:59 +09:00
dcf5c8ebe7 lemmatizer: implement equality for AllomorphPattern/LemmatizationRule 2026-03-02 12:01:13 +09:00
1f8bc8bac5 lemmatizer: let LemmatizationRule.validChildClasses be a set 2026-03-02 12:01:13 +09:00
ab28b5788b search/word_search: fix english queries without pageSize/offset 2026-03-02 12:01:13 +09:00
dd7b2917dc flake.nix: add lcov to devshell 2026-03-02 12:01:13 +09:00
74798c77b5 flake.nix: add libsqlite to LD_LIBRARY_PATH in devshell 2026-03-02 12:01:12 +09:00
63a4caa626 lemmatizer/rules/ichidan: add informal conditionals 2026-03-02 12:01:12 +09:00
374be5ca6b lemmatizer: add some basic tests 2026-03-02 12:01:12 +09:00
4a6fd41f31 lemmatizer: misc small improvements 2026-03-02 12:01:12 +09:00
c06fff9e5a lemmatizer/rules: name all rules as separate static variables 2026-03-02 12:01:12 +09:00
1d9928ade1 search/kanji: split queries into separate functions 2026-03-02 12:01:11 +09:00
1a3b04be00 word_search_result: add romanization getters 2026-03-02 12:01:11 +09:00
c0c6f97a01 search/word_search: fix casing of SearchMode variants 2026-03-02 12:01:11 +09:00
a954188d5d Fix a few lints 2026-03-02 12:01:11 +09:00
5b86d6eb67 README: add textual overview of the word search procedure 2026-03-02 12:01:11 +09:00
72f31e974b dart format 2026-03-02 12:01:10 +09:00
e824dc0a22 search/word_search: split data queries into functions 2026-03-02 12:01:10 +09:00
f5bca61839 flake.lock: bump
Some checks failed
Build database / evals (push) Successful in 10m44s
Run tests / evals (push) Failing after 43m13s
2026-02-25 16:28:18 +09:00
056aaaa0ce tests/search_match_inference: add more cases
Some checks failed
Build database / evals (push) Has been cancelled
Run tests / evals (push) Has been cancelled
2026-02-25 12:42:38 +09:00
a696ed9733 Generate matchspans for word search results
Some checks failed
Run tests / evals (push) Failing after 12m29s
Build database / evals (push) Successful in 12m36s
2026-02-24 21:27:12 +09:00
00b963bfed .gitea/workflows/test: init
Some checks failed
Build database / evals (push) Successful in 10m43s
Run tests / evals (push) Failing after 12m27s
2026-02-24 20:43:07 +09:00
4376012f18 pubspec.lock: update deps
All checks were successful
Build database / evals (push) Successful in 10m40s
2026-02-24 18:44:20 +09:00
8ae1d882a0 Add TODO for word matching
All checks were successful
Build database / evals (push) Successful in 12m32s
2026-02-24 15:21:03 +09:00
81db60ccf7 Add some docstrings
Some checks failed
Build database / evals (push) Has been cancelled
2026-02-24 15:13:33 +09:00
f57cc68ef3 search/radicals: deduplicate input radicals before search 2026-02-24 15:08:19 +09:00
48f50628a1 Create empty() factory for word search results
All checks were successful
Build database / evals (push) Successful in 35m56s
2026-02-23 13:01:57 +09:00
1783338b2a nix/database_tool: fix building
All checks were successful
Build database / evals (push) Successful in 10m47s
2026-02-21 00:49:53 +09:00
e92e99922b {flake.lock,pubspec.*}: bump 2026-02-21 00:49:24 +09:00
05b56466e7 tanos-jlpt: fix breaking changes for csv parser 2026-02-21 00:46:24 +09:00
33016ca751 flake.nix: comment out sqlint, currently broken due to dep build failure
All checks were successful
Build database / evals (push) Successful in 12m29s
2026-02-09 14:45:19 +09:00
98d92d370d {flake.lock,pubspec.lock}: bump, source libsqlite via hooks 2026-02-09 14:44:14 +09:00
5252936bdc flake.nix: filter more files from src 2026-02-09 14:40:53 +09:00
ac0cb14bbe flake.lock: bump, pubspec.lock: update inputs
All checks were successful
Build database / evals (push) Successful in 41m44s
2025-12-19 08:34:58 +09:00
49a86f60ea .gitea/workflows: upload db as artifact
Some checks failed
Build database / evals (push) Has been cancelled
2025-12-19 08:27:46 +09:00
9472156feb .gitea/workflows: update actions/checkout: v3 -> v6
All checks were successful
Build database / evals (push) Successful in 12m32s
2025-12-08 18:51:18 +09:00
4fbdba604e .gitea/workflows: run on debian-latest 2025-12-08 18:51:18 +09:00
0cdfa2015e .gitea/workflows: add workflow for building database
All checks were successful
Build database / evals (push) Successful in 15m4s
2025-11-13 16:35:25 +09:00
a9ca9b08a5 flake.lock: bump, pubspec.lock: update inputs 2025-11-13 16:13:51 +09:00
45e8181041 search/kanji: don't transliterate onyomi to katakana 2025-07-30 01:37:26 +02:00
0d3ebc97f5 flake.lock: bump 2025-07-17 00:24:35 +02:00
bb68319527 treewide: add and apply a bunch of lints 2025-07-17 00:24:35 +02:00
2803db9c12 bin/query-word: fix default pagination 2025-07-16 18:32:47 +02:00
93b76ed660 word_search: include data for cross references 2025-07-16 18:32:28 +02:00
29a3a6aafb treewide: dart format 2025-07-16 15:23:04 +02:00
3a2adf0367 pubspec.{yaml,lock}: update deps 2025-07-15 21:32:42 +02:00
eae6e881a7 flake.lock: bump 2025-07-15 21:32:35 +02:00
0a3387e77a search: add function for fetching multiple kanji at once 2025-07-15 00:58:16 +02:00
f30465a33c search: add function for fetching multiple word entries by id at once 2025-07-15 00:52:25 +02:00
d9006a0767 word_search: fix count query 2025-07-13 20:34:39 +02:00
1e1761ab4d pubspec.{yaml,lock}: update deps 2025-07-13 20:15:13 +02:00
37d29fc6ad cli/query_word: add flags for pagination 2025-07-13 20:12:22 +02:00
60898fe9a2 word_search: fix pagination 2025-07-13 20:12:10 +02:00
5049157b02 cli/query_word: add --json flag 2025-07-13 16:27:11 +02:00
1868c6fb41 word_search: don't throw error on empty results 2025-07-09 14:57:19 +02:00
4ee21d98e2 flake.lock: bump 2025-07-08 20:37:16 +02:00
7247af19cb word_search: always order exact matches first 2025-07-07 13:27:50 +02:00
ac7deae608 word_search: remove duplicate results 2025-07-07 12:47:20 +02:00
7978b74f8d lib/{_data_ingestion/search}: store kanjidic onyomi as hiragana 2025-06-25 20:18:28 +02:00
50870f64a0 cli/query_kanji: remove -k flag, use arguments 2025-06-25 20:18:27 +02:00
62d77749e6 cli/query_word: allow querying with jmdict id 2025-06-25 20:18:27 +02:00
80b3610a72 Store type enum as CHAR(1) 2025-06-25 20:18:27 +02:00
54705c3c10 word_search: add TODO 2025-06-24 23:04:47 +02:00
c7134f0d06 flake.nix: filter src 2025-06-24 19:33:10 +02:00
aac9bf69f6 cli/create_db: return an erroneous exit on error 2025-06-24 19:33:09 +02:00
189d4a95cf test/word_search: cover more functionality 2025-06-24 19:33:09 +02:00
c32775ce7a use ids for \{kanji,reading\}Element tables 2025-06-24 19:33:02 +02:00
98 changed files with 4197 additions and 2610 deletions


@@ -0,0 +1,71 @@
name: "Build and test"
on:
  workflow_dispatch:
  pull_request:
  push:
jobs:
  evals:
    runs-on: debian-latest
    steps:
      - uses: actions/checkout@v6
      - name: Install sudo
        run: apt-get update && apt-get -y install sudo
      - name: Install nix
        uses: https://github.com/cachix/install-nix-action@v31
        with:
          extra_nix_config: |
            experimental-features = nix-command flakes
            show-trace = true
            max-jobs = auto
            trusted-users = root
            experimental-features = nix-command flakes
            build-users-group =
      - name: Update database inputs
        run: |
          nix flake update jmdict-src
          nix flake update jmdict-with-examples-src
          nix flake update radkfile-src
          nix flake update kanjidic2-src
      - name: Build database
        run: nix build .#database -L
      - name: Upload database as artifact
        uses: actions/upload-artifact@v3
        with:
          name: jadb-${{ gitea.sha }}.zip
          path: result/jadb.sqlite
          if-no-files-found: error
          retention-days: 15
          # Already compressed
          compression: 0
      - name: Print database statistics
        run: nix develop .# --command sqlite3_analyzer result/jadb.sqlite
      # TODO: Defer failure of tests until after the coverage report is generated and uploaded.
      - name: Run tests
        run: nix develop .# --command dart run test --concurrency=1 --coverage-path=coverage/lcov.info
      - name: Generate coverage report
        run: |
          GENHTML_ARGS=(
            --current-date="$(date)"
            --dark-mode
            --output-directory coverage/report
          )
          nix develop .# --command genhtml "${GENHTML_ARGS[@]}" coverage/lcov.info
      - name: Upload coverage report
        uses: https://git.pvv.ntnu.no/Projects/rsync-action@v2
        with:
          source: ./coverage
          target: jadb/${{ gitea.ref_name }}/
          username: oysteikt
          ssh-key: ${{ secrets.OYSTEIKT_GITEA_WEBDOCS_SSH_KEY }}
          host: microbel.pvv.ntnu.no
          known-hosts: "microbel.pvv.ntnu.no ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEq0yasKP0mH6PI6ypmuzPzMnbHELo9k+YB5yW534aKudKZS65YsHJKQ9vapOtmegrn5MQbCCgrshf+/XwZcjbM="

.gitignore

@@ -8,6 +8,7 @@
# Conventional directory for build output.
/doc/
/build/
/coverage/
main.db
# Nix


@@ -1,7 +1,9 @@
# jadb
[![built with nix](https://builtwithnix.org/badge.svg)](https://builtwithnix.org)
[Latest coverage report](https://www.pvv.ntnu.no/~oysteikt/gitea/jadb/main/coverage/report/)
# jadb
An SQLite database containing open source Japanese dictionary data combined from several sources.
Note that while the license for the code is MIT, the data has various licenses.
@@ -16,3 +18,26 @@ Note that while the license for the code is MIT, the data has various licenses.
| **Tanos JLPT levels:** | https://www.tanos.co.uk/jlpt/ |
| **Kangxi Radicals:** | https://ctext.org/kangxi-zidian |
## Implementation details
### Word search
The word search procedure is currently split into three parts:
1. **Entry ID query**:
Uses a complex query with various scoring factors to obtain a list of
database ids pointing at dictionary entries, sorted by how likely we think each
word is the one the caller is looking for. The output of this step is a `List<int>`.
2. **Data query**:
Takes the entry id list from the previous step and performs all queries needed to
retrieve the dictionary data for those ids. The result is a struct of flattened lists
containing the data for all the dictionary entries. These lists are sorted in the order
the ids were provided.
3. **Regrouping**:
Takes the flattened data and regroups the items into structs with a more "hierarchical" structure.
All data tagged with the same id ends up in the same struct. Returns a list of these structs.
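
The three steps above can be sketched roughly as follows. This is a hypothetical illustration, not the actual jadb API: the names (`entryIdQuery`, `dataQuery`, `regroup`, `WordEntry`) and the hard-coded data are made up, and the real implementation runs scored SQL queries against the SQLite database instead.

```dart
/// Hypothetical hierarchical result struct produced by the regrouping step.
class WordEntry {
  final int entryId;
  final List<String> readings;
  final List<String> glosses;
  WordEntry(this.entryId, this.readings, this.glosses);
}

/// Step 1: rank candidate entry ids for [query] (faked here with fixed data).
List<int> entryIdQuery(String query) => [1001, 1002];

/// Step 2: fetch flattened (entryId, value) rows for the given ids,
/// preserving the ranking order from step 1.
({List<(int, String)> readings, List<(int, String)> glosses}) dataQuery(
  List<int> ids,
) {
  return (
    readings: [(1001, 'ねこ'), (1002, 'ねこま')],
    glosses: [(1001, 'cat'), (1002, 'cat (archaic)')],
  );
}

/// Step 3: regroup the flattened rows into one struct per entry id,
/// keeping the order of [ids].
List<WordEntry> regroup(
  List<int> ids,
  ({List<(int, String)> readings, List<(int, String)> glosses}) data,
) {
  return [
    for (final id in ids)
      WordEntry(
        id,
        [for (final (i, r) in data.readings) if (i == id) r],
        [for (final (i, g) in data.glosses) if (i == id) g],
      ),
  ];
}

void main() {
  final ids = entryIdQuery('cat');
  final entries = regroup(ids, dataQuery(ids));
  print(entries.map((e) => e.entryId).toList()); // [1001, 1002]
}
```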

analysis_options.yaml

@@ -0,0 +1,41 @@
# This file configures the analyzer, which statically analyzes Dart code to
# check for errors, warnings, and lints.
#
# The issues identified by the analyzer are surfaced in the UI of Dart-enabled
# IDEs (https://dart.dev/tools#ides-and-editors). The analyzer can also be
# invoked from the command line by running `dart analyze`.

# The following line activates a set of recommended lints for Dart
# packages designed to encourage good coding practices.
include:
  - package:lints/recommended.yaml

linter:
  # The lint rules applied to this project can be customized in the
  # section below to disable rules from the `package:lints/recommended.yaml`
  # included above or to enable additional rules. A list of all available lints
  # and their documentation is published at https://dart.dev/lints.
  #
  # Instead of disabling a lint rule for the entire project in the
  # section below, it can also be suppressed for a single line of code
  # or a specific dart file by using the `// ignore: name_of_lint` and
  # `// ignore_for_file: name_of_lint` syntax on the line or in the file
  # producing the lint.
  rules:
    always_declare_return_types: true
    annotate_redeclares: true
    avoid_print: false
    avoid_setters_without_getters: true
    avoid_slow_async_io: true
    directives_ordering: true
    eol_at_end_of_file: true
    prefer_const_declarations: true
    prefer_contains: true
    prefer_final_fields: true
    prefer_final_locals: true
    prefer_single_quotes: true
    use_key_in_widget_constructors: true
    use_null_aware_elements: true

# Additional information about this file can be found at
# https://dart.dev/guides/language/analysis-options
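
The suppression syntax mentioned in the comments above can be used like this. A hypothetical snippet for illustration: `prefer_single_quotes` and `prefer_final_locals` are among the rules enabled in this config.

```dart
// Suppress a lint for the whole file:
// ignore_for_file: prefer_single_quotes

String greet(String name) {
  // Suppress a lint for a single line:
  // ignore: prefer_final_locals
  var message = "hello, $name"; // double quotes allowed by the file-level ignore
  return message;
}

void main() {
  print(greet('jadb')); // hello, jadb
}
```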


@@ -9,7 +9,7 @@ import 'package:jadb/cli/commands/query_word.dart';
Future<void> main(List<String> args) async {
final runner = CommandRunner(
'jadb',
"CLI tool to help creating and testing the jadb database",
'CLI tool to help creating and testing the jadb database',
);
runner.addCommand(CreateDb());

flake.lock

@@ -3,7 +3,7 @@
"jmdict-src": {
"flake": false,
"locked": {
"narHash": "sha256-84P7r/fFlBnawy6yChrD9WMHmOWcEGWUmoK70N4rdGQ=",
"narHash": "sha256-lh46uougUzBrRhhwa7cOb32j5Jt9/RjBUhlVjwVzsII=",
"type": "file",
"url": "http://ftp.edrdg.org/pub/Nihongo/JMdict_e.gz"
},
@@ -15,7 +15,7 @@
"jmdict-with-examples-src": {
"flake": false,
"locked": {
"narHash": "sha256-PM0sv7VcsCya2Ek02CI7hVwB3Jawn6bICSI+dsJK0yo=",
"narHash": "sha256-5oS2xDyetbuSM6ax3LUjYA3N60x+D3Hg41HEXGFMqLQ=",
"type": "file",
"url": "http://ftp.edrdg.org/pub/Nihongo/JMdict_e_examp.gz"
},
@@ -27,7 +27,7 @@
"kanjidic2-src": {
"flake": false,
"locked": {
"narHash": "sha256-Lc0wUPpuDKuMDv2t87//w3z20RX8SMJI2iIRtUJ8fn0=",
"narHash": "sha256-orSeQqSxhn9TtX3anYtbiMEm7nFkuomGnIKoVIUR2CM=",
"type": "file",
"url": "https://www.edrdg.org/kanjidic/kanjidic2.xml.gz"
},
@@ -36,13 +36,29 @@
"url": "https://www.edrdg.org/kanjidic/kanjidic2.xml.gz"
}
},
"kanjivg-src": {
"flake": false,
"locked": {
"lastModified": 1772352482,
"narHash": "sha256-8EG3Y1daI2B24NELQwU+eXl/7OmWnW/RXMAQSRVLzWw=",
"ref": "refs/heads/master",
"rev": "0b4309cf6d74799b0e4b72940d8267fbe73f72d0",
"revCount": 2212,
"type": "git",
"url": "https://git.pvv.ntnu.no/mugiten/kanjivg.git"
},
"original": {
"type": "git",
"url": "https://git.pvv.ntnu.no/mugiten/kanjivg.git"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1746904237,
"narHash": "sha256-3e+AVBczosP5dCLQmMoMEogM57gmZ2qrVSrmq9aResQ=",
"lastModified": 1771848320,
"narHash": "sha256-0MAd+0mun3K/Ns8JATeHT1sX28faLII5hVLq0L3BdZU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "d89fc19e405cb2d55ce7cc114356846a0ee5e956",
"rev": "2fc6539b481e1d2569f25f8799236694180c0993",
"type": "github"
},
"original": {
@@ -54,13 +70,13 @@
"radkfile-src": {
"flake": false,
"locked": {
"narHash": "sha256-rO2z5GPt3g6osZOlpyWysmIbRV2Gw4AR4XvngVTHNpk=",
"narHash": "sha256-DHpMUE2Umje8PbzXUCS6pHZeXQ5+WTxbjSkGU3erDHQ=",
"type": "file",
"url": "http://ftp.usf.edu/pub/ftp.monash.edu.au/pub/nihongo/radkfile.gz"
"url": "http://ftp.edrdg.org/pub/Nihongo/radkfile.gz"
},
"original": {
"type": "file",
"url": "http://ftp.usf.edu/pub/ftp.monash.edu.au/pub/nihongo/radkfile.gz"
"url": "http://ftp.edrdg.org/pub/Nihongo/radkfile.gz"
}
},
"root": {
@@ -68,6 +84,7 @@
"jmdict-src": "jmdict-src",
"jmdict-with-examples-src": "jmdict-with-examples-src",
"kanjidic2-src": "kanjidic2-src",
"kanjivg-src": "kanjivg-src",
"nixpkgs": "nixpkgs",
"radkfile-src": "radkfile-src"
}


@@ -16,7 +16,7 @@
};
radkfile-src = {
url = "http://ftp.usf.edu/pub/ftp.monash.edu.au/pub/nihongo/radkfile.gz";
url = "http://ftp.edrdg.org/pub/Nihongo/radkfile.gz";
flake = false;
};
@@ -24,6 +24,11 @@
url = "https://www.edrdg.org/kanjidic/kanjidic2.xml.gz";
flake = false;
};
kanjivg-src = {
url = "git+https://git.pvv.ntnu.no/mugiten/kanjivg.git";
flake = false;
};
};
outputs = {
@@ -32,7 +37,8 @@
jmdict-src,
jmdict-with-examples-src,
radkfile-src,
kanjidic2-src
kanjidic2-src,
kanjivg-src,
}: let
inherit (nixpkgs) lib;
systems = [
@@ -80,15 +86,17 @@
buildInputs = with pkgs; [
dart
gnumake
sqlite-interactive
lcov
sqlite-analyzer
sqlite-interactive
sqlite-web
sqlint
# sqlint
sqlfluff
];
env = {
LIBSQLITE_PATH = "${pkgs.sqlite.out}/lib/libsqlite3.so";
JADB_PATH = "result/jadb.sqlite";
LD_LIBRARY_PATH = lib.makeLibraryPath [ pkgs.sqlite ];
};
};
});
@@ -104,12 +112,30 @@
platforms = lib.platforms.all;
};
src = lib.cleanSource ./.;
src = builtins.filterSource (path: type: let
baseName = baseNameOf (toString path);
in !(lib.any (b: b) [
(!(lib.cleanSourceFilter path type))
(baseName == ".github" && type == "directory")
(baseName == ".gitea" && type == "directory")
(baseName == "nix" && type == "directory")
(baseName == ".envrc" && type == "regular")
(baseName == "flake.lock" && type == "regular")
(baseName == "flake.nix" && type == "regular")
(baseName == ".sqlfluff" && type == "regular")
])) ./.;
in forAllSystems (system: pkgs: {
default = self.packages.${system}.database;
filteredSource = pkgs.runCommandLocal "filtered-source" { } ''
ln -s ${src} $out
'';
jmdict = pkgs.callPackage ./nix/jmdict.nix {
inherit jmdict-src jmdict-with-examples-src edrdgMetadata;
inherit jmdict-src jmdict-with-examples-src edrdgMetadata;
};
radkfile = pkgs.callPackage ./nix/radkfile.nix {


@@ -16,14 +16,15 @@ abstract class Element extends SQLWritable {
this.nf,
});
@override
Map<String, Object?> get sqlValue => {
'reading': reading,
'news': news,
'ichi': ichi,
'spec': spec,
'gai': gai,
'nf': nf,
};
'reading': reading,
'news': news,
'ichi': ichi,
'spec': spec,
'gai': gai,
'nf': nf,
};
}
class KanjiElement extends Element {
@@ -33,26 +34,19 @@ class KanjiElement extends Element {
KanjiElement({
this.info = const [],
required this.orderNum,
required String reading,
int? news,
int? ichi,
int? spec,
int? gai,
int? nf,
}) : super(
reading: reading,
news: news,
ichi: ichi,
spec: spec,
gai: gai,
nf: nf,
);
required super.reading,
super.news,
super.ichi,
super.spec,
super.gai,
super.nf,
});
@override
Map<String, Object?> get sqlValue => {
...super.sqlValue,
'orderNum': orderNum,
};
...super.sqlValue,
'orderNum': orderNum,
};
}
class ReadingElement extends Element {
@@ -66,27 +60,20 @@ class ReadingElement extends Element {
required this.readingDoesNotMatchKanji,
this.info = const [],
this.restrictions = const [],
required String reading,
int? news,
int? ichi,
int? spec,
int? gai,
int? nf,
}) : super(
reading: reading,
news: news,
ichi: ichi,
spec: spec,
gai: gai,
nf: nf,
);
required super.reading,
super.news,
super.ichi,
super.spec,
super.gai,
super.nf,
});
@override
Map<String, Object?> get sqlValue => {
...super.sqlValue,
'orderNum': orderNum,
'readingDoesNotMatchKanji': readingDoesNotMatchKanji,
};
...super.sqlValue,
'orderNum': orderNum,
'readingDoesNotMatchKanji': readingDoesNotMatchKanji,
};
}
class LanguageSource extends SQLWritable {
@@ -104,11 +91,11 @@ class LanguageSource extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'language': language,
'phrase': phrase,
'fullyDescribesSense': fullyDescribesSense,
'constructedFromSmallerWords': constructedFromSmallerWords,
};
'language': language,
'phrase': phrase,
'fullyDescribesSense': fullyDescribesSense,
'constructedFromSmallerWords': constructedFromSmallerWords,
};
}
class Glossary extends SQLWritable {
@@ -116,48 +103,41 @@ class Glossary extends SQLWritable {
final String phrase;
final String? type;
const Glossary({
required this.language,
required this.phrase,
this.type,
});
const Glossary({required this.language, required this.phrase, this.type});
@override
Map<String, Object?> get sqlValue => {
'language': language,
'phrase': phrase,
'type': type,
};
'language': language,
'phrase': phrase,
'type': type,
};
}
final kanaRegex =
RegExp(r'^[\p{Script=Katakana}\p{Script=Hiragana}ー]+$', unicode: true);
final kanaRegex = RegExp(
r'^[\p{Script=Katakana}\p{Script=Hiragana}ー]+$',
unicode: true,
);
class XRefParts {
final String? kanjiRef;
final String? readingRef;
final int? senseOrderNum;
const XRefParts({
this.kanjiRef,
this.readingRef,
this.senseOrderNum,
}) : assert(kanjiRef != null || readingRef != null);
const XRefParts({this.kanjiRef, this.readingRef, this.senseOrderNum})
: assert(kanjiRef != null || readingRef != null);
Map<String, Object?> toJson() => {
'kanjiRef': kanjiRef,
'readingRef': readingRef,
'senseOrderNum': senseOrderNum,
};
'kanjiRef': kanjiRef,
'readingRef': readingRef,
'senseOrderNum': senseOrderNum,
};
}
class XRef {
final String entryId;
final String reading;
const XRef({
required this.entryId,
required this.reading,
});
const XRef({required this.entryId, required this.reading});
}
class Sense extends SQLWritable {
@@ -193,9 +173,9 @@ class Sense extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'senseId': senseId,
'orderNum': orderNum,
};
'senseId': senseId,
'orderNum': orderNum,
};
bool get isEmpty =>
antonyms.isEmpty &&
@@ -224,5 +204,6 @@ class Entry extends SQLWritable {
required this.senses,
});
@override
Map<String, Object?> get sqlValue => {'entryId': entryId};
}


@@ -18,18 +18,20 @@ ResolvedXref resolveXref(
XRefParts xref,
) {
List<Entry> candidateEntries = switch ((xref.kanjiRef, xref.readingRef)) {
(null, null) =>
throw Exception('Xref $xref has no kanji or reading reference'),
(String k, null) => entriesByKanji[k]!.toList(),
(null, String r) => entriesByReading[r]!.toList(),
(String k, String r) =>
(null, null) => throw Exception(
'Xref $xref has no kanji or reading reference',
),
(final String k, null) => entriesByKanji[k]!.toList(),
(null, final String r) => entriesByReading[r]!.toList(),
(final String k, final String r) =>
entriesByKanji[k]!.intersection(entriesByReading[r]!).toList(),
};
// Filter out entries that don't have the number of senses specified in the xref
if (xref.senseOrderNum != null) {
candidateEntries
.retainWhere((entry) => entry.senses.length >= xref.senseOrderNum!);
candidateEntries.retainWhere(
(entry) => entry.senses.length >= xref.senseOrderNum!,
);
}
// If the xref has a reading ref but no kanji ref, and there are multiple
@@ -38,8 +40,9 @@ ResolvedXref resolveXref(
if (xref.kanjiRef == null &&
xref.readingRef != null &&
candidateEntries.length > 1) {
final candidatesWithEmptyKanji =
candidateEntries.where((entry) => entry.kanji.length == 0).toList();
final candidatesWithEmptyKanji = candidateEntries
.where((entry) => entry.kanji.isEmpty)
.toList();
if (candidatesWithEmptyKanji.isNotEmpty) {
candidateEntries = candidatesWithEmptyKanji;
@@ -50,7 +53,7 @@ ResolvedXref resolveXref(
// entry in case there are multiple candidates left.
candidateEntries.sortBy<num>((entry) => entry.senses.length);
if (candidateEntries.length == 0) {
if (candidateEntries.isEmpty) {
throw Exception(
'SKIPPING: Xref $xref has ${candidateEntries.length} entries, '
'kanjiRef: ${xref.kanjiRef}, readingRef: ${xref.readingRef}, '
@@ -72,51 +75,43 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
print(' [JMdict] Batch 1 - Kanji and readings');
Batch b = db.batch();
int elementId = 0;
for (final e in entries) {
b.insert(JMdictTableNames.entry, e.sqlValue);
for (final k in e.kanji) {
elementId++;
b.insert(
JMdictTableNames.kanjiElement,
k.sqlValue..addAll({'entryId': e.entryId}),
k.sqlValue..addAll({'entryId': e.entryId, 'elementId': elementId}),
);
for (final i in k.info) {
b.insert(
JMdictTableNames.kanjiInfo,
{
'entryId': e.entryId,
'reading': k.reading,
'info': i,
},
);
b.insert(JMdictTableNames.kanjiInfo, {
'elementId': elementId,
'info': i,
});
}
}
for (final r in e.readings) {
elementId++;
b.insert(
JMdictTableNames.readingElement,
r.sqlValue..addAll({'entryId': e.entryId}),
r.sqlValue..addAll({'entryId': e.entryId, 'elementId': elementId}),
);
for (final i in r.info) {
b.insert(
JMdictTableNames.readingInfo,
{
'entryId': e.entryId,
'reading': r.reading,
'info': i,
},
);
b.insert(JMdictTableNames.readingInfo, {
'elementId': elementId,
'info': i,
});
}
for (final res in r.restrictions) {
b.insert(
JMdictTableNames.readingRestriction,
{
'entryId': e.entryId,
'reading': r.reading,
'restriction': res,
},
);
b.insert(JMdictTableNames.readingRestriction, {
'elementId': elementId,
'restriction': res,
});
}
}
}
@@ -129,16 +124,20 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
for (final e in entries) {
for (final s in e.senses) {
b.insert(
JMdictTableNames.sense, s.sqlValue..addAll({'entryId': e.entryId}));
JMdictTableNames.sense,
s.sqlValue..addAll({'entryId': e.entryId}),
);
for (final d in s.dialects) {
b.insert(
JMdictTableNames.senseDialect,
{'senseId': s.senseId, 'dialect': d},
);
b.insert(JMdictTableNames.senseDialect, {
'senseId': s.senseId,
'dialect': d,
});
}
for (final f in s.fields) {
b.insert(
JMdictTableNames.senseField, {'senseId': s.senseId, 'field': f});
b.insert(JMdictTableNames.senseField, {
'senseId': s.senseId,
'field': f,
});
}
for (final i in s.info) {
b.insert(JMdictTableNames.senseInfo, {'senseId': s.senseId, 'info': i});
@@ -150,16 +149,18 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
b.insert(JMdictTableNames.sensePOS, {'senseId': s.senseId, 'pos': p});
}
for (final rk in s.restrictedToKanji) {
b.insert(
JMdictTableNames.senseRestrictedToKanji,
{'entryId': e.entryId, 'senseId': s.senseId, 'kanji': rk},
);
b.insert(JMdictTableNames.senseRestrictedToKanji, {
'entryId': e.entryId,
'senseId': s.senseId,
'kanji': rk,
});
}
for (final rr in s.restrictedToReading) {
b.insert(
JMdictTableNames.senseRestrictedToReading,
{'entryId': e.entryId, 'senseId': s.senseId, 'reading': rr},
);
b.insert(JMdictTableNames.senseRestrictedToReading, {
'entryId': e.entryId,
'senseId': s.senseId,
'reading': rr,
});
}
for (final ls in s.languageSource) {
b.insert(
@@ -179,7 +180,7 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
await b.commit(noResult: true);
print(' [JMdict] Building xref trees');
SplayTreeMap<String, Set<Entry>> entriesByKanji = SplayTreeMap();
final SplayTreeMap<String, Set<Entry>> entriesByKanji = SplayTreeMap();
for (final entry in entries) {
for (final kanji in entry.kanji) {
@@ -190,7 +191,7 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
}
}
}
SplayTreeMap<String, Set<Entry>> entriesByReading = SplayTreeMap();
final SplayTreeMap<String, Set<Entry>> entriesByReading = SplayTreeMap();
for (final entry in entries) {
for (final reading in entry.readings) {
if (entriesByReading.containsKey(reading.reading)) {
@@ -213,17 +214,14 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
xref,
);
b.insert(
JMdictTableNames.senseSeeAlso,
{
'senseId': s.senseId,
'xrefEntryId': resolvedEntry.entry.entryId,
'seeAlsoKanji': xref.kanjiRef,
'seeAlsoReading': xref.readingRef,
'seeAlsoSense': xref.senseOrderNum,
'ambiguous': resolvedEntry.ambiguous,
},
);
b.insert(JMdictTableNames.senseSeeAlso, {
'senseId': s.senseId,
'xrefEntryId': resolvedEntry.entry.entryId,
'seeAlsoKanji': xref.kanjiRef,
'seeAlsoReading': xref.readingRef,
'seeAlsoSense': xref.senseOrderNum,
'ambiguous': resolvedEntry.ambiguous,
});
}
for (final ant in s.antonyms) {


@@ -8,15 +8,17 @@ List<int?> getPriorityValues(XmlElement e, String prefix) {
int? news, ichi, spec, gai, nf;
for (final pri in e.findElements('${prefix}_pri')) {
final txt = pri.innerText;
if (txt.startsWith('news'))
if (txt.startsWith('news')) {
news = int.parse(txt.substring(4));
else if (txt.startsWith('ichi'))
} else if (txt.startsWith('ichi')) {
ichi = int.parse(txt.substring(4));
else if (txt.startsWith('spec'))
} else if (txt.startsWith('spec')) {
spec = int.parse(txt.substring(4));
else if (txt.startsWith('gai'))
} else if (txt.startsWith('gai')) {
gai = int.parse(txt.substring(3));
else if (txt.startsWith('nf')) nf = int.parse(txt.substring(2));
} else if (txt.startsWith('nf')) {
nf = int.parse(txt.substring(2));
}
}
return [news, ichi, spec, gai, nf];
}
@@ -46,10 +48,7 @@ XRefParts parseXrefParts(String s) {
);
}
} else {
result = XRefParts(
kanjiRef: parts[0],
readingRef: parts[1],
);
result = XRefParts(kanjiRef: parts[0], readingRef: parts[1]);
}
break;
@@ -81,45 +80,48 @@ List<Entry> parseJMDictData(XmlElement root) {
final List<ReadingElement> readingEls = [];
final List<Sense> senses = [];
for (final (kanjiNum, k_ele) in entry.findElements('k_ele').indexed) {
final ke_pri = getPriorityValues(k_ele, 'ke');
for (final (kanjiNum, kEle) in entry.findElements('k_ele').indexed) {
final kePri = getPriorityValues(kEle, 'ke');
kanjiEls.add(
KanjiElement(
orderNum: kanjiNum + 1,
info: k_ele
info: kEle
.findElements('ke_inf')
.map((e) => e.innerText.substring(1, e.innerText.length - 1))
.toList(),
reading: k_ele.findElements('keb').first.innerText,
news: ke_pri[0],
ichi: ke_pri[1],
spec: ke_pri[2],
gai: ke_pri[3],
nf: ke_pri[4],
reading: kEle.findElements('keb').first.innerText,
news: kePri[0],
ichi: kePri[1],
spec: kePri[2],
gai: kePri[3],
nf: kePri[4],
),
);
}
for (final (orderNum, r_ele) in entry.findElements('r_ele').indexed) {
final re_pri = getPriorityValues(r_ele, 're');
final readingDoesNotMatchKanji =
r_ele.findElements('re_nokanji').isNotEmpty;
for (final (orderNum, rEle) in entry.findElements('r_ele').indexed) {
final rePri = getPriorityValues(rEle, 're');
final readingDoesNotMatchKanji = rEle
.findElements('re_nokanji')
.isNotEmpty;
readingEls.add(
ReadingElement(
orderNum: orderNum + 1,
readingDoesNotMatchKanji: readingDoesNotMatchKanji,
info: r_ele
info: rEle
.findElements('re_inf')
.map((e) => e.innerText.substring(1, e.innerText.length - 1))
.toList(),
restrictions:
r_ele.findElements('re_restr').map((e) => e.innerText).toList(),
reading: r_ele.findElements('reb').first.innerText,
news: re_pri[0],
ichi: re_pri[1],
spec: re_pri[2],
gai: re_pri[3],
nf: re_pri[4],
restrictions: rEle
.findElements('re_restr')
.map((e) => e.innerText)
.toList(),
reading: rEle.findElements('reb').first.innerText,
news: rePri[0],
ichi: rePri[1],
spec: rePri[2],
gai: rePri[3],
nf: rePri[4],
),
);
}
@@ -129,10 +131,14 @@ List<Entry> parseJMDictData(XmlElement root) {
final result = Sense(
senseId: senseId,
orderNum: orderNum + 1,
restrictedToKanji:
sense.findElements('stagk').map((e) => e.innerText).toList(),
restrictedToReading:
sense.findElements('stagr').map((e) => e.innerText).toList(),
restrictedToKanji: sense
.findElements('stagk')
.map((e) => e.innerText)
.toList(),
restrictedToReading: sense
.findElements('stagr')
.map((e) => e.innerText)
.toList(),
pos: sense
.findElements('pos')
.map((e) => e.innerText.substring(1, e.innerText.length - 1))


@@ -13,42 +13,33 @@ class CodePoint extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'type': type,
'codepoint': codepoint,
};
'kanji': kanji,
'type': type,
'codepoint': codepoint,
};
}
class Radical extends SQLWritable {
final String kanji;
final int radicalId;
const Radical({
required this.kanji,
required this.radicalId,
});
const Radical({required this.kanji, required this.radicalId});
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'radicalId': radicalId,
};
Map<String, Object?> get sqlValue => {'kanji': kanji, 'radicalId': radicalId};
}
class StrokeMiscount extends SQLWritable {
final String kanji;
final int strokeCount;
const StrokeMiscount({
required this.kanji,
required this.strokeCount,
});
const StrokeMiscount({required this.kanji, required this.strokeCount});
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'strokeCount': strokeCount,
};
'kanji': kanji,
'strokeCount': strokeCount,
};
}
class Variant extends SQLWritable {
@@ -64,10 +55,10 @@ class Variant extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'type': type,
'variant': variant,
};
'kanji': kanji,
'type': type,
'variant': variant,
};
}
class DictionaryReference extends SQLWritable {
@@ -83,10 +74,10 @@ class DictionaryReference extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'type': type,
'ref': ref,
};
'kanji': kanji,
'type': type,
'ref': ref,
};
}
class DictionaryReferenceMoro extends SQLWritable {
@@ -104,11 +95,11 @@ class DictionaryReferenceMoro extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'ref': ref,
'volume': volume,
'page': page,
};
'kanji': kanji,
'ref': ref,
'volume': volume,
'page': page,
};
}
class QueryCode extends SQLWritable {
@@ -126,11 +117,11 @@ class QueryCode extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'code': code,
'type': type,
'skipMisclassification': skipMisclassification,
};
'kanji': kanji,
'code': code,
'type': type,
'skipMisclassification': skipMisclassification,
};
}
class Reading extends SQLWritable {
@@ -146,10 +137,10 @@ class Reading extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'type': type,
'reading': reading,
};
'kanji': kanji,
'type': type,
'reading': reading,
};
}
class Kunyomi extends SQLWritable {
@@ -165,10 +156,10 @@ class Kunyomi extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'yomi': yomi,
'isJouyou': isJouyou,
};
'kanji': kanji,
'yomi': yomi,
'isJouyou': isJouyou,
};
}
class Onyomi extends SQLWritable {
@@ -186,11 +177,11 @@ class Onyomi extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'yomi': yomi,
'isJouyou': isJouyou,
'type': type,
};
'kanji': kanji,
'yomi': yomi,
'isJouyou': isJouyou,
'type': type,
};
}
class Meaning extends SQLWritable {
@@ -206,10 +197,10 @@ class Meaning extends SQLWritable {
@override
Map<String, Object?> get sqlValue => {
'kanji': kanji,
'language': language,
'meaning': meaning,
};
'kanji': kanji,
'language': language,
'meaning': meaning,
};
}
class Character extends SQLWritable {
@@ -254,11 +245,12 @@ class Character extends SQLWritable {
this.nanori = const [],
});
@override
Map<String, Object?> get sqlValue => {
'literal': literal,
'grade': grade,
'strokeCount': strokeCount,
'frequency': frequency,
'jlpt': jlpt,
};
'literal': literal,
'grade': grade,
'strokeCount': strokeCount,
'frequency': frequency,
'jlpt': jlpt,
};
}


@@ -19,10 +19,7 @@ Future<void> seedKANJIDICData(List<Character> characters, Database db) async {
assert(c.radical != null, 'Radical name without radical');
b.insert(
KANJIDICTableNames.radicalName,
{
'radicalId': c.radical!.radicalId,
'name': n,
},
{'radicalId': c.radical!.radicalId, 'name': n},
conflictAlgorithm: ConflictAlgorithm.ignore,
);
}
@@ -34,13 +31,10 @@ Future<void> seedKANJIDICData(List<Character> characters, Database db) async {
b.insert(KANJIDICTableNames.radical, c.radical!.sqlValue);
}
for (final sm in c.strokeMiscounts) {
b.insert(
KANJIDICTableNames.strokeMiscount,
{
'kanji': c.literal,
'strokeCount': sm,
},
);
b.insert(KANJIDICTableNames.strokeMiscount, {
'kanji': c.literal,
'strokeCount': sm,
});
}
for (final v in c.variants) {
b.insert(KANJIDICTableNames.variant, v.sqlValue);
@@ -64,24 +58,24 @@ Future<void> seedKANJIDICData(List<Character> characters, Database db) async {
}
for (final (i, y) in c.kunyomi.indexed) {
b.insert(
KANJIDICTableNames.kunyomi, y.sqlValue..addAll({'orderNum': i + 1}));
KANJIDICTableNames.kunyomi,
y.sqlValue..addAll({'orderNum': i + 1}),
);
}
for (final (i, y) in c.onyomi.indexed) {
b.insert(
KANJIDICTableNames.onyomi, y.sqlValue..addAll({'orderNum': i + 1}));
KANJIDICTableNames.onyomi,
y.sqlValue..addAll({'orderNum': i + 1}),
);
}
for (final (i, m) in c.meanings.indexed) {
b.insert(
KANJIDICTableNames.meaning, m.sqlValue..addAll({'orderNum': i + 1}));
KANJIDICTableNames.meaning,
m.sqlValue..addAll({'orderNum': i + 1}),
);
}
for (final n in c.nanori) {
b.insert(
KANJIDICTableNames.nanori,
{
'kanji': c.literal,
'nanori': n,
},
);
b.insert(KANJIDICTableNames.nanori, {'kanji': c.literal, 'nanori': n});
}
}
await b.commit(noResult: true);
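The `y.sqlValue..addAll({'orderNum': i + 1})` calls above lean on Dart's cascade operator: the expression mutates the map returned by `sqlValue` and evaluates to that same map, so the ordering column is attached without an intermediate variable. In isolation:

```dart
// The cascade (`..addAll`) evaluates to the receiver itself, not to the
// result of addAll, so the extra column ends up in the map handed to insert.
void main() {
  final row = {'kanji': '日', 'yomi': 'にち'};
  final withOrder = row..addAll({'orderNum': 1});
  assert(identical(row, withOrder)); // same map object, mutated in place
  assert(withOrder['orderNum'] == 1);
}
```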


@@ -1,4 +1,5 @@
import 'package:jadb/_data_ingestion/kanjidic/objects.dart';
import 'package:jadb/util/romaji_transliteration.dart';
import 'package:xml/xml.dart';
List<Character> parseKANJIDICData(XmlElement root) {
@@ -9,27 +10,33 @@ List<Character> parseKANJIDICData(XmlElement root) {
final codepoint = c.findElements('codepoint').firstOrNull;
final radical = c.findElements('radical').firstOrNull;
final misc = c.findElements('misc').first;
final dic_number = c.findElements('dic_number').firstOrNull;
final query_code = c.findElements('query_code').first;
final reading_meaning = c.findElements('reading_meaning').firstOrNull;
final dicNumber = c.findElements('dic_number').firstOrNull;
final queryCode = c.findElements('query_code').first;
final readingMeaning = c.findElements('reading_meaning').firstOrNull;
// TODO: Group readings and meanings by their rmgroup parent node.
result.add(
Character(
literal: kanji,
strokeCount:
int.parse(misc.findElements('stroke_count').first.innerText),
strokeCount: int.parse(
misc.findElements('stroke_count').first.innerText,
),
grade: int.tryParse(
misc.findElements('grade').firstOrNull?.innerText ?? ''),
misc.findElements('grade').firstOrNull?.innerText ?? '',
),
frequency: int.tryParse(
misc.findElements('freq').firstOrNull?.innerText ?? ''),
misc.findElements('freq').firstOrNull?.innerText ?? '',
),
jlpt: int.tryParse(
misc.findElements('jlpt').firstOrNull?.innerText ?? '',
),
radicalName:
misc.findElements('rad_name').map((e) => e.innerText).toList(),
codepoints: codepoint
radicalName: misc
.findElements('rad_name')
.map((e) => e.innerText)
.toList(),
codepoints:
codepoint
?.findElements('cp_value')
.map(
(e) => CodePoint(
@@ -44,10 +51,7 @@ List<Character> parseKANJIDICData(XmlElement root) {
?.findElements('rad_value')
.where((e) => e.getAttribute('rad_type') == 'classical')
.map(
(e) => Radical(
kanji: kanji,
radicalId: int.parse(e.innerText),
),
(e) => Radical(kanji: kanji, radicalId: int.parse(e.innerText)),
)
.firstOrNull,
strokeMiscounts: misc
@@ -65,7 +69,8 @@ List<Character> parseKANJIDICData(XmlElement root) {
),
)
.toList(),
dictionaryReferences: dic_number
dictionaryReferences:
dicNumber
?.findElements('dic_ref')
.where((e) => e.getAttribute('dr_type') != 'moro')
.map(
@@ -77,7 +82,8 @@ List<Character> parseKANJIDICData(XmlElement root) {
)
.toList() ??
[],
dictionaryReferencesMoro: dic_number
dictionaryReferencesMoro:
dicNumber
?.findElements('dic_ref')
.where((e) => e.getAttribute('dr_type') == 'moro')
.map(
@@ -90,7 +96,7 @@ List<Character> parseKANJIDICData(XmlElement root) {
)
.toList() ??
[],
querycodes: query_code
querycodes: queryCode
.findElements('q_code')
.map(
(e) => QueryCode(
@@ -101,7 +107,8 @@ List<Character> parseKANJIDICData(XmlElement root) {
),
)
.toList(),
readings: reading_meaning
readings:
readingMeaning
?.findAllElements('reading')
.where(
(e) =>
@@ -116,7 +123,8 @@ List<Character> parseKANJIDICData(XmlElement root) {
)
.toList() ??
[],
kunyomi: reading_meaning
kunyomi:
readingMeaning
?.findAllElements('reading')
.where((e) => e.getAttribute('r_type') == 'ja_kun')
.map(
@@ -128,19 +136,22 @@ List<Character> parseKANJIDICData(XmlElement root) {
)
.toList() ??
[],
onyomi: reading_meaning
onyomi:
readingMeaning
?.findAllElements('reading')
.where((e) => e.getAttribute('r_type') == 'ja_on')
.map(
(e) => Onyomi(
kanji: kanji,
yomi: e.innerText,
isJouyou: e.getAttribute('r_status') == 'jy',
type: e.getAttribute('on_type')),
kanji: kanji,
yomi: transliterateKatakanaToHiragana(e.innerText),
isJouyou: e.getAttribute('r_status') == 'jy',
type: e.getAttribute('on_type'),
),
)
.toList() ??
[],
meanings: reading_meaning
meanings:
readingMeaning
?.findAllElements('meaning')
.map(
(e) => Meaning(
@@ -151,7 +162,8 @@ List<Character> parseKANJIDICData(XmlElement root) {
)
.toList() ??
[],
nanori: reading_meaning
nanori:
readingMeaning
?.findElements('nanori')
.map((e) => e.innerText)
.toList() ??


@@ -0,0 +1,92 @@
import 'package:jadb/_data_ingestion/sql_writable.dart';
/// Enum of the possible values of the kvg:position attribute on `<g>` elements in the KanjiVG SVG files.
enum KanjiPathGroupPosition {
bottom,
kamae,
kamaec,
left,
middle,
nyo,
nyoc,
right,
tare,
tarec,
top,
}
/// Contents of a `<g>` element in the KanjiVG SVG files.
class KanjiPathGroupTreeNode extends SQLWritable {
final String id;
final List<KanjiPathGroupTreeNode> children;
final String? element;
final String? original;
final KanjiPathGroupPosition? position;
final String? radical;
final int? part;
KanjiPathGroupTreeNode({
required this.id,
this.children = const [],
this.element,
this.original,
this.position,
this.radical,
this.part,
});
@override
Map<String, Object?> get sqlValue => {
'id': id,
'element': element,
'original': original,
'position': position?.name,
'radical': radical,
'part': part,
};
}
/// Contents of a `<text>` element in the stroke numbers group of the KanjiVG SVG files
class KanjiStrokeNumber extends SQLWritable {
final int num;
final double x;
final double y;
KanjiStrokeNumber(this.num, this.x, this.y);
@override
Map<String, Object?> get sqlValue => {'num': num, 'x': x, 'y': y};
}
/// Contents of a `<path>` element in the KanjiVG SVG files
class KanjiVGPath extends SQLWritable {
final String id;
final String type;
final String svgPath;
KanjiVGPath({required this.id, required this.type, required this.svgPath});
@override
Map<String, Object?> get sqlValue => {
'id': id,
'type': type,
'svgPath': svgPath,
};
}
class KanjiVGItem extends SQLWritable {
final String character;
final List<KanjiVGPath> paths;
final List<KanjiStrokeNumber> strokeNumbers;
final List<KanjiPathGroupTreeNode> pathGroups;
KanjiVGItem({
required this.character,
required this.paths,
required this.strokeNumbers,
required this.pathGroups,
});
@override
Map<String, Object?> get sqlValue => {'character': character};
}
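`KanjiPathGroupTreeNode.sqlValue` stores `position?.name`, the enum member's bare identifier as a string, which is why the enum members mirror the raw `kvg:position` attribute values verbatim. For example:

```dart
enum KanjiPathGroupPosition { bottom, kamae, left, right, top }

void main() {
  // Enum.name is the identifier alone, with no type prefix.
  assert(KanjiPathGroupPosition.left.name == 'left');
  // toString, by contrast, includes the enum type.
  assert('${KanjiPathGroupPosition.left}' == 'KanjiPathGroupPosition.left');
}
```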


@@ -0,0 +1,7 @@
import 'package:sqflite_common/sqflite.dart';
Future<void> seedKanjiVGData(Iterable<String> xmlContents, Database db) async {
final b = db.batch();
await b.commit(noResult: true);
}


@@ -1,9 +1,7 @@
import 'dart:ffi';
import 'dart:io';
import 'package:jadb/search.dart';
import 'package:sqflite_common_ffi/sqflite_ffi.dart';
import 'package:sqlite3/open.dart';
Future<Database> openLocalDb({
String? libsqlitePath,
@@ -12,38 +10,23 @@ Future<Database> openLocalDb({
bool verifyTablesExist = true,
bool walMode = false,
}) async {
libsqlitePath ??= Platform.environment['LIBSQLITE_PATH'];
jadbPath ??= Platform.environment['JADB_PATH'];
jadbPath ??= Directory.current.uri.resolve('jadb.sqlite').path;
libsqlitePath = (libsqlitePath == null)
? null
: File(libsqlitePath).resolveSymbolicLinksSync();
jadbPath = File(jadbPath).resolveSymbolicLinksSync();
if (libsqlitePath == null) {
throw Exception("LIBSQLITE_PATH is not set");
}
if (!File(libsqlitePath).existsSync()) {
throw Exception("LIBSQLITE_PATH does not exist: $libsqlitePath");
}
if (!File(jadbPath).existsSync()) {
throw Exception("JADB_PATH does not exist: $jadbPath");
throw Exception('JADB_PATH does not exist: $jadbPath');
}
final db = await createDatabaseFactoryFfi(
ffiInit: () =>
open.overrideForAll(() => DynamicLibrary.open(libsqlitePath!)),
).openDatabase(
final db = await createDatabaseFactoryFfi().openDatabase(
jadbPath,
options: OpenDatabaseOptions(
onConfigure: (db) async {
if (walMode) {
await db.execute("PRAGMA journal_mode = WAL");
await db.execute('PRAGMA journal_mode = WAL');
}
await db.execute("PRAGMA foreign_keys = ON");
await db.execute('PRAGMA foreign_keys = ON');
},
readOnly: !readWrite,
),


@@ -3,8 +3,10 @@ import 'dart:io';
Iterable<String> parseRADKFILEBlocks(File radkfile) {
final String content = File('data/tmp/radkfile_utf8').readAsStringSync();
final Iterable<String> blocks =
content.replaceAll(RegExp(r'^#.*$'), '').split(r'$').skip(2);
final Iterable<String> blocks = content
.replaceAll(RegExp(r'^#.*$'), '')
.split(r'$')
.skip(2);
return blocks;
}
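The seeding code consuming these blocks indexes `block[1]` for the radical and drops the first line before splitting out kanji, which implies each `$`-delimited block starts with a header line of radical and stroke count. A sketch of that assumed shape:

```dart
// Assumed block layout after splitting the file on '$': a space, the radical,
// its stroke count, a newline, then the kanji that use that radical.
void main() {
  const block = ' 一 1\n亜唖\n悪';
  final radical = block[1];
  final kanjiList = block.replaceFirst(RegExp(r'.*\n'), '').split('')
    ..removeWhere((e) => e == '' || e == '\n');
  assert(radical == '一');
  assert(kanjiList.join() == '亜唖悪');
}
```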


@@ -1,27 +1,20 @@
import 'package:jadb/table_names/radkfile.dart';
import 'package:sqflite_common/sqlite_api.dart';
Future<void> seedRADKFILEData(
Iterable<String> blocks,
Database db,
) async {
Future<void> seedRADKFILEData(Iterable<String> blocks, Database db) async {
final b = db.batch();
for (final block in blocks) {
final String radical = block[1];
final List<String> kanjiList = block
.replaceFirst(RegExp(r'.*\n'), '')
.split('')
..removeWhere((e) => e == '' || e == '\n');
final List<String> kanjiList =
block.replaceFirst(RegExp(r'.*\n'), '').split('')
..removeWhere((e) => e == '' || e == '\n');
for (final kanji in kanjiList.toSet()) {
b.insert(
RADKFILETableNames.radkfile,
{
'radical': radical,
'kanji': kanji,
},
);
b.insert(RADKFILETableNames.radkfile, {
'radical': radical,
'kanji': kanji,
});
}
}


@@ -24,10 +24,10 @@ Future<void> seedData(Database db) async {
Future<void> parseAndSeedDataFromJMdict(Database db) async {
print('[JMdict] Reading file content...');
String rawXML = File('data/tmp/JMdict.xml').readAsStringSync();
final String rawXML = File('data/tmp/JMdict.xml').readAsStringSync();
print('[JMdict] Parsing XML tags...');
XmlElement root = XmlDocument.parse(rawXML).getElement('JMdict')!;
final XmlElement root = XmlDocument.parse(rawXML).getElement('JMdict')!;
print('[JMdict] Parsing XML content...');
final entries = parseJMDictData(root);
@@ -38,10 +38,10 @@ Future<void> parseAndSeedDataFromJMdict(Database db) async {
Future<void> parseAndSeedDataFromKANJIDIC(Database db) async {
print('[KANJIDIC2] Reading file...');
String rawXML = File('data/tmp/kanjidic2.xml').readAsStringSync();
final String rawXML = File('data/tmp/kanjidic2.xml').readAsStringSync();
print('[KANJIDIC2] Parsing XML...');
XmlElement root = XmlDocument.parse(rawXML).getElement('kanjidic2')!;
final XmlElement root = XmlDocument.parse(rawXML).getElement('kanjidic2')!;
print('[KANJIDIC2] Parsing XML content...');
final entries = parseKANJIDICData(root);
@@ -52,7 +52,7 @@ Future<void> parseAndSeedDataFromKANJIDIC(Database db) async {
Future<void> parseAndSeedDataFromRADKFILE(Database db) async {
print('[RADKFILE] Reading file...');
File raw = File('data/tmp/RADKFILE');
final File raw = File('data/tmp/RADKFILE');
print('[RADKFILE] Parsing content...');
final blocks = parseRADKFILEBlocks(raw);
@@ -63,7 +63,7 @@ Future<void> parseAndSeedDataFromRADKFILE(Database db) async {
Future<void> parseAndSeedDataFromTanosJLPT(Database db) async {
print('[TANOS-JLPT] Reading files...');
Map<String, File> files = {
final Map<String, File> files = {
'N1': File('data/tanos-jlpt/n1.csv'),
'N2': File('data/tanos-jlpt/n2.csv'),
'N3': File('data/tanos-jlpt/n3.csv'),


@@ -3,52 +3,64 @@ import 'dart:io';
import 'package:csv/csv.dart';
import 'package:jadb/_data_ingestion/tanos-jlpt/objects.dart';
import 'package:xml/xml_events.dart';
Future<List<JLPTRankedWord>> parseJLPTRankedWords(
Map<String, File> files,
) async {
final List<JLPTRankedWord> result = [];
final codec = CsvCodec(
fieldDelimiter: ',',
lineDelimiter: '\n',
quoteMode: QuoteMode.strings,
escapeCharacter: '\\',
);
for (final entry in files.entries) {
final jlptLevel = entry.key;
final file = entry.value;
if (!file.existsSync()) {
throw Exception("File $jlptLevel does not exist");
throw Exception('File $jlptLevel does not exist');
}
final rows = await file
final words = await file
.openRead()
.transform(utf8.decoder)
.transform(CsvToListConverter())
.transform(codec.decoder)
.flatten()
.map((row) {
if (row.length != 3) {
throw Exception('Invalid line in $jlptLevel: $row');
}
return row;
})
.map((row) => row.map((e) => e as String).toList())
.map((row) {
final kanji = row[0].isEmpty
? null
: row[0]
.replaceFirst(RegExp('^お・'), '')
.replaceAll(RegExp(r'.*'), '');
final readings = row[1]
.split(RegExp('[・/、(:?s+)]'))
.map((e) => e.trim())
.toList();
final meanings = row[2].split(',').expand(cleanMeaning).toList();
return JLPTRankedWord(
readings: readings,
kanji: kanji,
jlptLevel: jlptLevel,
meanings: meanings,
);
})
.toList();
for (final row in rows) {
if (row.length != 3) {
throw Exception("Invalid line in $jlptLevel: $row");
}
final kanji = (row[0] as String).isEmpty
? null
: (row[0] as String)
.replaceFirst(RegExp('^お・'), '')
.replaceAll(RegExp(r'.*'), '');
final readings = (row[1] as String)
.split(RegExp('[・/、(:?\s+)]'))
.map((e) => e.trim())
.toList();
final meanings =
(row[2] as String).split(',').expand(cleanMeaning).toList();
result.add(JLPTRankedWord(
readings: readings,
kanji: kanji,
jlptLevel: jlptLevel,
meanings: meanings,
));
}
result.addAll(words);
}
return result;


@@ -13,5 +13,5 @@ class JLPTRankedWord {
@override
String toString() =>
'(${jlptLevel},${kanji},"${readings.join(",")}","${meanings.join(",")})';
'($jlptLevel,$kanji,"${readings.join(",")}","${meanings.join(",")})';
}


@@ -1,4 +1,4 @@
const Map<(String?, String), int?> TANOS_JLPT_OVERRIDES = {
const Map<(String?, String), int?> tanosJLPTOverrides = {
// N5:
(null, 'あなた'): 1223615,
(null, 'あの'): 1000430,


@@ -1,49 +1,39 @@
import 'package:jadb/table_names/jmdict.dart';
import 'package:jadb/_data_ingestion/tanos-jlpt/objects.dart';
import 'package:jadb/_data_ingestion/tanos-jlpt/overrides.dart';
import 'package:jadb/table_names/jmdict.dart';
import 'package:sqflite_common/sqlite_api.dart';
Future<List<int>> _findReadingCandidates(
JLPTRankedWord word,
Database db,
) =>
db
.query(
JMdictTableNames.readingElement,
columns: ['entryId'],
where:
'"reading" IN (${List.filled(word.readings.length, '?').join(',')})',
whereArgs: [...word.readings],
)
.then((rows) => rows.map((row) => row['entryId'] as int).toList());
Future<List<int>> _findReadingCandidates(JLPTRankedWord word, Database db) => db
.query(
JMdictTableNames.readingElement,
columns: ['entryId'],
where:
'"reading" IN (${List.filled(word.readings.length, '?').join(',')})',
whereArgs: [...word.readings],
)
.then((rows) => rows.map((row) => row['entryId'] as int).toList());
Future<List<int>> _findKanjiCandidates(
JLPTRankedWord word,
Database db,
) =>
db
.query(
JMdictTableNames.kanjiElement,
columns: ['entryId'],
where: 'reading = ?',
whereArgs: [word.kanji],
)
.then((rows) => rows.map((row) => row['entryId'] as int).toList());
Future<List<int>> _findKanjiCandidates(JLPTRankedWord word, Database db) => db
.query(
JMdictTableNames.kanjiElement,
columns: ['entryId'],
where: 'reading = ?',
whereArgs: [word.kanji],
)
.then((rows) => rows.map((row) => row['entryId'] as int).toList());
Future<List<(int, String)>> _findSenseCandidates(
JLPTRankedWord word,
Database db,
) =>
db.rawQuery(
) => db
.rawQuery(
'SELECT entryId, phrase '
'FROM "${JMdictTableNames.senseGlossary}" '
'JOIN "${JMdictTableNames.sense}" USING (senseId)'
'WHERE phrase IN (${List.filled(
word.meanings.length,
'?',
).join(',')})',
'WHERE phrase IN (${List.filled(word.meanings.length, '?').join(',')})',
[...word.meanings],
).then(
)
.then(
(rows) => rows
.map((row) => (row['entryId'] as int, row['phrase'] as String))
.toList(),
@@ -55,8 +45,10 @@ Future<int?> findEntry(
bool useOverrides = true,
}) async {
final List<int> readingCandidates = await _findReadingCandidates(word, db);
final List<(int, String)> senseCandidates =
await _findSenseCandidates(word, db);
final List<(int, String)> senseCandidates = await _findSenseCandidates(
word,
db,
);
List<int> entryIds;
@@ -71,8 +63,10 @@ Future<int?> findEntry(
print('No entry found, trying to combine with senses');
entryIds = readingCandidates
.where((readingId) =>
senseCandidates.any((sense) => sense.$1 == readingId))
.where(
(readingId) =>
senseCandidates.any((sense) => sense.$1 == readingId),
)
.toList();
}
} else {
@@ -82,18 +76,21 @@ Future<int?> findEntry(
if ((entryIds.isEmpty || entryIds.length > 1) && useOverrides) {
print('No entry found, trying to fetch from overrides');
final overrideEntries = word.readings
.map((reading) => TANOS_JLPT_OVERRIDES[(word.kanji, reading)])
.map((reading) => tanosJLPTOverrides[(word.kanji, reading)])
.whereType<int>()
.toSet();
if (overrideEntries.length > 1) {
throw Exception(
'Multiple override entries found for ${word.toString()}: $entryIds');
} else if (overrideEntries.length == 0 &&
!word.readings.any((reading) =>
TANOS_JLPT_OVERRIDES.containsKey((word.kanji, reading)))) {
'Multiple override entries found for ${word.toString()}: $entryIds',
);
} else if (overrideEntries.isEmpty &&
!word.readings.any(
(reading) => tanosJLPTOverrides.containsKey((word.kanji, reading)),
)) {
throw Exception(
'No override entry found for ${word.toString()}: $entryIds');
'No override entry found for ${word.toString()}: $entryIds',
);
}
print('Found override: ${overrideEntries.firstOrNull}');
@@ -103,7 +100,8 @@ Future<int?> findEntry(
if (entryIds.length > 1) {
throw Exception(
'Multiple override entries found for ${word.toString()}: $entryIds');
'Multiple override entries found for ${word.toString()}: $entryIds',
);
} else if (entryIds.isEmpty) {
throw Exception('No entry found for ${word.toString()}');
}
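The override lookup `tanosJLPTOverrides[(word.kanji, reading)]` works because Dart records have structural equality and hashing, so a `(String?, String)` pair is a valid map key with no custom `hashCode`:

```dart
// Records with equal fields are equal, so a freshly built key matches the
// entry inserted with a different (but field-equal) record instance.
const overrides = <(String?, String), int>{
  (null, 'あなた'): 1223615,
};

void main() {
  const String? kanji = null;
  assert((kanji, 'あなた') == (null, 'あなた'));
  assert(overrides[(kanji, 'あなた')] == 1223615);
}
```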


@@ -5,20 +5,17 @@ Future<void> seedTanosJLPTData(
Map<String, Set<int>> resolvedEntries,
Database db,
) async {
Batch b = db.batch();
final Batch b = db.batch();
for (final jlptLevel in resolvedEntries.entries) {
final level = jlptLevel.key;
final entryIds = jlptLevel.value;
for (final entryId in entryIds) {
b.insert(
TanosJLPTTableNames.jlptTag,
{
'entryId': entryId,
'jlptLevel': level,
},
);
b.insert(TanosJLPTTableNames.jlptTag, {
'entryId': entryId,
'jlptLevel': level,
});
}
}


@@ -1,14 +1,15 @@
import 'dart:io';
import 'package:args/command_runner.dart';
import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:jadb/_data_ingestion/seed_database.dart';
import 'package:args/command_runner.dart';
import 'package:jadb/cli/args.dart';
class CreateDb extends Command {
final name = "create-db";
final description = "Create the database";
@override
final name = 'create-db';
@override
final description = 'Create the database';
CreateDb() {
addLibsqliteArg(argParser);
@@ -23,6 +24,7 @@ class CreateDb extends Command {
);
}
@override
Future<void> run() async {
if (argResults!.option('libsqlite') == null) {
print(argParser.usage);
@@ -35,12 +37,22 @@ class CreateDb extends Command {
readWrite: true,
);
await seedData(db).then((_) {
print("Database created successfully");
}).catchError((error) {
print("Error creating database: $error");
}).whenComplete(() {
db.close();
});
bool failed = false;
await seedData(db)
.then((_) {
print('Database created successfully');
})
.catchError((error) {
print('Error creating database: $error');
failed = true;
})
.whenComplete(() {
db.close();
});
if (failed) {
exit(1);
} else {
exit(0);
}
}
}
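The rewritten `run` records failure in a flag rather than exiting inside `catchError`, so `db.close()` in `whenComplete` always runs before the process exits with a non-zero code. The pattern in isolation (names here are illustrative):

```dart
import 'dart:async';

// Returns the exit code the caller should use: errors are swallowed by
// catchError, remembered in `failed`, and cleanup still runs in whenComplete.
Future<int> runAndReport(Future<void> Function() job) async {
  var failed = false;
  await job()
      .then((_) => print('done'))
      .catchError((Object e) {
        print('error: $e');
        failed = true;
      })
      .whenComplete(() {
        // resource cleanup (e.g. closing the database) goes here
      });
  return failed ? 1 : 0;
}

Future<void> main() async {
  assert(await runAndReport(() async {}) == 0);
  assert(await runAndReport(() async => throw Exception('boom')) == 1);
}
```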


@@ -1,8 +1,7 @@
import 'dart:io';
import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:args/command_runner.dart';
import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:jadb/_data_ingestion/tanos-jlpt/csv_parser.dart';
import 'package:jadb/_data_ingestion/tanos-jlpt/objects.dart';
import 'package:jadb/_data_ingestion/tanos-jlpt/resolve.dart';
@@ -10,9 +9,11 @@ import 'package:jadb/cli/args.dart';
import 'package:sqflite_common/sqlite_api.dart';
class CreateTanosJlptMappings extends Command {
final name = "create-tanos-jlpt-mappings";
@override
final name = 'create-tanos-jlpt-mappings';
@override
final description =
"Resolve Tanos JLPT data against JMDict. This tool is useful to create overrides for ambiguous references";
'Resolve Tanos JLPT data against JMDict. This tool is useful to create overrides for ambiguous references';
CreateTanosJlptMappings() {
addLibsqliteArg(argParser);
@@ -26,6 +27,7 @@ class CreateTanosJlptMappings extends Command {
);
}
@override
Future<void> run() async {
if (argResults!.option('libsqlite') == null ||
argResults!.option('jadb') == null) {
@@ -40,7 +42,7 @@ class CreateTanosJlptMappings extends Command {
final useOverrides = argResults!.flag('overrides');
Map<String, File> files = {
final Map<String, File> files = {
'N1': File('data/tanos-jlpt/n1.csv'),
'N2': File('data/tanos-jlpt/n2.csv'),
'N3': File('data/tanos-jlpt/n3.csv'),
@@ -59,11 +61,12 @@ Future<void> resolveExisting(
Database db,
bool useOverrides,
) async {
List<JLPTRankedWord> missingWords = [];
final List<JLPTRankedWord> missingWords = [];
for (final (i, word) in rankedWords.indexed) {
try {
print(
'[${(i + 1).toString().padLeft(4, '0')}/${rankedWords.length}] ${word.toString()}');
'[${(i + 1).toString().padLeft(4, '0')}/${rankedWords.length}] ${word.toString()}',
);
await findEntry(word, db, useOverrides: useOverrides);
} catch (e) {
print(e);
@@ -78,16 +81,19 @@ Future<void> resolveExisting(
print('Statistics:');
for (final jlptLevel in ['N5', 'N4', 'N3', 'N2', 'N1']) {
final missingWordCount =
missingWords.where((e) => e.jlptLevel == jlptLevel).length;
final totalWordCount =
rankedWords.where((e) => e.jlptLevel == jlptLevel).length;
final missingWordCount = missingWords
.where((e) => e.jlptLevel == jlptLevel)
.length;
final totalWordCount = rankedWords
.where((e) => e.jlptLevel == jlptLevel)
.length;
final failureRate =
((missingWordCount / totalWordCount) * 100).toStringAsFixed(2);
final failureRate = ((missingWordCount / totalWordCount) * 100)
.toStringAsFixed(2);
print(
'${jlptLevel} failures: [${missingWordCount}/${totalWordCount}] (${failureRate}%)');
'$jlptLevel failures: [$missingWordCount/$totalWordCount] ($failureRate%)',
);
}
print('Not able to determine the entry for ${missingWords.length} words');


@@ -1,14 +1,15 @@
// import 'dart:io';
import 'package:args/command_runner.dart';
// import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:jadb/cli/args.dart';
import 'package:args/command_runner.dart';
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
class Lemmatize extends Command {
final name = "lemmatize";
final description = "Lemmatize a word using the Jadb lemmatizer";
@override
final name = 'lemmatize';
@override
final description = 'Lemmatize a word using the Jadb lemmatizer';
Lemmatize() {
addLibsqliteArg(argParser);
@@ -21,6 +22,7 @@ class Lemmatize extends Command {
);
}
@override
Future<void> run() async {
// if (argResults!.option('libsqlite') == null ||
// argResults!.option('jadb') == null) {
@@ -41,6 +43,6 @@ class Lemmatize extends Command {
print(result.toString());
print("Lemmatization took ${time.elapsedMilliseconds}ms");
print('Lemmatization took ${time.elapsedMilliseconds}ms');
}
}


@@ -1,27 +1,25 @@
import 'dart:convert';
import 'dart:io';
import 'package:args/command_runner.dart';
import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:jadb/cli/args.dart';
import 'package:jadb/search.dart';
import 'package:args/command_runner.dart';
class QueryKanji extends Command {
final name = "query-kanji";
final description = "Query the database for kanji data";
@override
final name = 'query-kanji';
@override
final description = 'Query the database for kanji data';
@override
final invocation = 'jadb query-kanji [options] <kanji>';
QueryKanji() {
addLibsqliteArg(argParser);
addJadbArg(argParser);
argParser.addOption(
'kanji',
abbr: 'k',
help: 'The kanji to search for.',
valueHelp: 'KANJI',
);
}
@override
Future<void> run() async {
if (argResults!.option('libsqlite') == null ||
argResults!.option('jadb') == null) {
@@ -34,18 +32,25 @@ class QueryKanji extends Command {
libsqlitePath: argResults!.option('libsqlite')!,
);
if (argResults!.rest.length != 1) {
print('You need to provide exactly one kanji character to search for.');
print('');
printUsage();
exit(64);
}
final String kanji = argResults!.rest.first.trim();
final time = Stopwatch()..start();
final result = await JaDBConnection(db).jadbSearchKanji(
argResults!.option('kanji') ?? '',
);
final result = await JaDBConnection(db).jadbSearchKanji(kanji);
time.stop();
if (result == null) {
print("No such kanji");
print('No such kanji');
} else {
print(JsonEncoder.withIndent(' ').convert(result.toJson()));
}
print("Query took ${time.elapsedMilliseconds}ms");
print('Query took ${time.elapsedMilliseconds}ms');
}
}


@@ -1,30 +1,38 @@
import 'dart:convert';
import 'dart:io';
import 'package:args/command_runner.dart';
import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:jadb/cli/args.dart';
import 'package:jadb/search.dart';
import 'package:args/command_runner.dart';
import 'package:sqflite_common/sqflite.dart';
class QueryWord extends Command {
final name = "query-word";
final description = "Query the database for word data";
@override
final name = 'query-word';
@override
final description = 'Query the database for word data';
@override
final invocation = 'jadb query-word [options] (<word> | <ID>)';
QueryWord() {
addLibsqliteArg(argParser);
addJadbArg(argParser);
argParser.addOption(
'word',
abbr: 'w',
help: 'The word to search for.',
valueHelp: 'WORD',
);
argParser.addFlag('json', abbr: 'j', help: 'Output results in JSON format');
argParser.addOption('page', abbr: 'p', valueHelp: 'NUM', defaultsTo: '0');
argParser.addOption('pageSize', valueHelp: 'NUM', defaultsTo: '30');
}
@override
Future<void> run() async {
if (argResults!.option('libsqlite') == null ||
argResults!.option('jadb') == null) {
print(argParser.usage);
print('You need to provide both libsqlite and jadb paths.');
print('');
printUsage();
exit(64);
}
@@ -33,29 +41,81 @@ class QueryWord extends Command {
libsqlitePath: argResults!.option('libsqlite')!,
);
final String searchWord = argResults!.option('word') ?? 'かな';
if (argResults!.rest.isEmpty) {
print('You need to provide a word or ID to search for.');
print('');
printUsage();
exit(64);
}
final String searchWord = argResults!.rest.join(' ');
final int? maybeId = int.tryParse(searchWord);
if (maybeId != null && maybeId >= 1000000) {
await _searchId(db, maybeId, argResults!.flag('json'));
} else {
await _searchWord(
db,
searchWord,
argResults!.flag('json'),
int.parse(argResults!.option('page')!),
int.parse(argResults!.option('pageSize')!),
);
}
}
Future<void> _searchId(DatabaseExecutor db, int id, bool jsonOutput) async {
final time = Stopwatch()..start();
final result = await JaDBConnection(db).jadbGetWordById(id);
time.stop();
if (result == null) {
print('Invalid ID');
} else {
if (jsonOutput) {
print(JsonEncoder.withIndent(' ').convert(result));
} else {
print(result.toString());
}
}
print('Query took ${time.elapsedMilliseconds}ms');
}
Future<void> _searchWord(
DatabaseExecutor db,
String searchWord,
bool jsonOutput,
int page,
int pageSize,
) async {
final time = Stopwatch()..start();
final count = await JaDBConnection(db).jadbSearchWordCount(searchWord);
time.stop();
final time2 = Stopwatch()..start();
final result = await JaDBConnection(db).jadbSearchWord(searchWord);
final result = await JaDBConnection(
db,
).jadbSearchWord(searchWord, page: page, pageSize: pageSize);
time2.stop();
if (result == null) {
print("Invalid search");
print('Invalid search');
} else if (result.isEmpty) {
print("No matches");
print('No matches');
} else {
for (final e in result) {
print(e.toString());
print("");
if (jsonOutput) {
print(JsonEncoder.withIndent(' ').convert(result));
} else {
for (final e in result) {
print(e.toString());
print('');
}
}
}
print("Total count: ${count}");
print("Count query took ${time.elapsedMilliseconds}ms");
print("Query took ${time2.elapsedMilliseconds}ms");
print('Total count: $count');
print('Count query took ${time.elapsedMilliseconds}ms');
print('Query took ${time2.elapsedMilliseconds}ms');
}
}
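`query-word` dispatches between an ID lookup and a text search with `maybeId != null && maybeId >= 1000000`; JMdict sequence numbers start at 1000000, so smaller numerals still fall through to text search. A sketch of that heuristic (the helper name is illustrative):

```dart
// True only for numeric arguments in JMdict's sequence-number range.
bool looksLikeEntryId(String arg) {
  final n = int.tryParse(arg);
  return n != null && n >= 1000000;
}

void main() {
  assert(looksLikeEntryId('1223615'));
  assert(!looksLikeEntryId('かな')); // not numeric
  assert(!looksLikeEntryId('42')); // numeric, but below the ID range
}
```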


@@ -1,6 +1,5 @@
/// Jouyou kanji sorted primarily by grades and secondarily by strokes.
const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
{
const Map<int, Map<int, List<String>>> jouyouKanjiByGradeAndStrokeCount = {
1: {
1: [''],
2: ['', '', '', '', '', '', '', ''],
@@ -12,7 +11,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
8: ['', '', '', '', '', ''],
9: ['', ''],
10: [''],
12: ['']
12: [''],
},
2: {
2: [''],
@@ -35,7 +34,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
5: ['', '', '', '', '', '', '', '', '', '', '', ''],
6: [
@@ -58,7 +57,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
7: [
'',
@@ -78,7 +77,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
8: [
'',
@@ -95,7 +94,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
9: [
'',
@@ -115,7 +114,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
10: ['', '', '', '', '', '', '', '', '', '', '', ''],
11: ['', '', '', '', '', '', '', '', '', '', '', '', ''],
@@ -124,7 +123,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
14: ['', '', '', '', '', ''],
15: [''],
16: ['', ''],
18: ['', '']
18: ['', ''],
},
3: {
2: [''],
@@ -146,7 +145,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
6: ['', '', '', '', '', '', '', '', '', '', '', '', '', ''],
7: ['', '', '', '', '', '', '', '', '', '', '', '', '', ''],
@@ -178,7 +177,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
9: [
'',
@@ -210,7 +209,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
10: [
'',
@@ -232,7 +231,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
11: [
'',
@@ -253,7 +252,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
12: [
'',
@@ -282,13 +281,13 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
13: ['', '', '', '', '', '', '', '', '', '', ''],
14: ['', '', '', '', '', ''],
15: ['', '調', '', ''],
16: ['', '', '', ''],
18: ['']
18: [''],
},
4: {
4: ['', '', '', '', ''],
@@ -318,7 +317,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
8: [
'',
@@ -346,7 +345,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
9: [
'',
@@ -367,7 +366,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
10: [
'',
@@ -389,7 +388,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
11: [
'',
@@ -410,7 +409,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
12: [
'',
@@ -434,7 +433,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
13: ['', '', '', '', '', '', '', '', '', '', ''],
14: ['', '', '', '', '', '', '', '', '', ''],
@@ -442,7 +441,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
16: ['', '', ''],
18: ['', '', ''],
19: ['', ''],
20: ['', '']
20: ['', ''],
},
5: {
3: ['', ''],
@@ -464,7 +463,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
8: [
'',
@@ -484,7 +483,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
9: ['', '', '', '', '', '', '', '', '', '', '', '', ''],
10: [
@@ -505,7 +504,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
11: [
'',
@@ -537,7 +536,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
12: [
'貿',
@@ -561,7 +560,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
13: ['', '', '', '', '', '', '', '', '', '', '', '', '', ''],
14: [
@@ -583,14 +582,14 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
15: ['', '', '', '', '', '', '', ''],
16: ['', '', '', '', ''],
17: ['', '', ''],
18: ['', '', ''],
19: [''],
20: ['']
20: [''],
},
6: {
3: ['', '', '', ''],
@@ -618,7 +617,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'沿',
''
'',
],
9: [
'',
@@ -641,7 +640,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
10: [
'',
@@ -667,7 +666,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
11: [
'',
@@ -689,7 +688,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
12: [
'',
@@ -710,7 +709,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
13: [
'',
@@ -727,14 +726,14 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
14: ['', '', '', '', '', '', '', '', '', '', '', ''],
15: ['', '', '', '', '', '', '', '', '', ''],
16: ['', '', '', '', '', '', '', ''],
17: ['', '', '', ''],
18: ['', '', ''],
19: ['', '']
19: ['', ''],
},
7: {
1: [''],
@@ -760,7 +759,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
5: [
'',
@@ -790,7 +789,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
6: [
'',
@@ -831,7 +830,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
7: [
'',
@@ -896,7 +895,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
8: [
'',
@@ -989,7 +988,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
9: [
'',
@@ -1081,7 +1080,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
10: [
'',
@@ -1206,7 +1205,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
11: [
'',
@@ -1323,7 +1322,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
12: [
'',
@@ -1435,7 +1434,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
13: [
'',
@@ -1552,7 +1551,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
14: [
'',
@@ -1617,7 +1616,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
15: [
'',
@@ -1706,7 +1705,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
16: [
'',
@@ -1764,7 +1763,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
17: [
'',
@@ -1801,7 +1800,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
18: [
'',
@@ -1830,7 +1829,7 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
19: [
'',
@@ -1851,22 +1850,23 @@ const Map<int, Map<int, List<String>>> JOUYOU_KANJI_BY_GRADE_AND_STROKE_COUNT =
'',
'',
'',
''
'',
],
20: ['', '', '', '', '', '', '', ''],
21: ['', '', '', '', '', ''],
22: ['', '', ''],
23: [''],
29: ['']
29: [''],
},
};
final Map<int, List<String>> jouyouKanjiByGrades =
jouyouKanjiByGradeAndStrokeCount.entries
// Carry the grade key through the flattening; expanding the inner map's
// entries directly would key the result by stroke count instead of grade.
.expand(
(entry) =>
entry.value.values.map((kanji) => MapEntry(entry.key, kanji)),
)
.fold<Map<int, List<String>>>(
{},
(acc, entry) => acc
..putIfAbsent(entry.key, () => [])
..update(entry.key, (value) => value..addAll(entry.value)),
);
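The grade-level flattening above collapses a nested map of grade → stroke count → kanji into grade → kanji, keeping each grade's characters in stroke order. A minimal Python sketch of the same transformation (function and variable names here are illustrative, not from the repository):

```python
def flatten_by_grade(
    by_grade_and_strokes: dict[int, dict[int, list[str]]],
) -> dict[int, list[str]]:
    """Collapse {grade: {stroke_count: [kanji]}} into {grade: [kanji]}."""
    flattened: dict[int, list[str]] = {}
    for grade, by_strokes in by_grade_and_strokes.items():
        # Walk stroke counts in ascending order so each grade's list
        # stays sorted by stroke count.
        for stroke_count in sorted(by_strokes):
            flattened.setdefault(grade, []).extend(by_strokes[stroke_count])
    return flattened

data = {1: {1: ["一"], 2: ["二", "七"]}, 2: {2: ["刀"]}}
print(flatten_by_grade(data))  # {1: ['一', '二', '七'], 2: ['刀']}
```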

View File

@@ -1,4 +1,4 @@
const Map<int, List<String>> RADICALS = {
const Map<int, List<String>> radicals = {
1: ['', '', '', '', '', ''],
2: [
'',
@@ -31,7 +31,7 @@ const Map<int, List<String>> RADICALS = {
'',
'',
'',
'𠂉'
'𠂉',
],
3: [
'',
@@ -78,7 +78,7 @@ const Map<int, List<String>> RADICALS = {
'',
'',
'',
''
'',
],
4: [
'',
@@ -124,7 +124,7 @@ const Map<int, List<String>> RADICALS = {
'',
'',
'',
''
'',
],
5: [
'',
@@ -154,7 +154,7 @@ const Map<int, List<String>> RADICALS = {
'',
'',
'',
''
'',
],
6: [
'',
@@ -181,7 +181,7 @@ const Map<int, List<String>> RADICALS = {
'',
'',
'',
'西'
'西',
],
7: [
'',
@@ -204,7 +204,7 @@ const Map<int, List<String>> RADICALS = {
'',
'',
'',
''
'',
],
8: ['', '', '', '', '', '', '', '', '', '', '', ''],
9: ['', '', '', '', '', '', '', '', '', '', ''],

View File

@@ -43,6 +43,7 @@ enum JlptLevel implements Comparable<JlptLevel> {
int? get asInt =>
this == JlptLevel.none ? null : JlptLevel.values.indexOf(this);
@override
String toString() => toNullableString() ?? 'N/A';
Object? toJson() => toNullableString();

View File

@@ -11,7 +11,7 @@ String migrationDirPath() {
}
Future<void> createEmptyDb(DatabaseExecutor db) async {
List<String> migrationFiles = [];
final List<String> migrationFiles = [];
for (final file in Directory(migrationDirPath()).listSync()) {
if (file is File && file.path.endsWith('.sql')) {
migrationFiles.add(file.path);

View File

@@ -19,20 +19,14 @@ enum JMdictDialect {
final String id;
final String description;
const JMdictDialect({
required this.id,
required this.description,
});
const JMdictDialect({required this.id, required this.description});
static JMdictDialect fromId(String id) => JMdictDialect.values.firstWhere(
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
Map<String, Object?> toJson() => {
'id': id,
'description': description,
};
Map<String, Object?> toJson() => {'id': id, 'description': description};
static JMdictDialect fromJson(Map<String, Object?> json) =>
JMdictDialect.values.firstWhere(

View File

@@ -102,20 +102,14 @@ enum JMdictField {
final String id;
final String description;
const JMdictField({
required this.id,
required this.description,
});
const JMdictField({required this.id, required this.description});
static JMdictField fromId(String id) => JMdictField.values.firstWhere(
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
Map<String, Object?> toJson() => {
'id': id,
'description': description,
};
Map<String, Object?> toJson() => {'id': id, 'description': description};
static JMdictField fromJson(Map<String, Object?> json) =>
JMdictField.values.firstWhere(

View File

@@ -13,20 +13,14 @@ enum JMdictKanjiInfo {
final String id;
final String description;
const JMdictKanjiInfo({
required this.id,
required this.description,
});
const JMdictKanjiInfo({required this.id, required this.description});
static JMdictKanjiInfo fromId(String id) => JMdictKanjiInfo.values.firstWhere(
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
Map<String, Object?> toJson() => {
'id': id,
'description': description,
};
Map<String, Object?> toJson() => {'id': id, 'description': description};
static JMdictKanjiInfo fromJson(Map<String, Object?> json) =>
JMdictKanjiInfo.values.firstWhere(

View File

@@ -74,20 +74,14 @@ enum JMdictMisc {
final String id;
final String description;
const JMdictMisc({
required this.id,
required this.description,
});
const JMdictMisc({required this.id, required this.description});
static JMdictMisc fromId(String id) => JMdictMisc.values.firstWhere(
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
Map<String, Object?> toJson() => {
'id': id,
'description': description,
};
Map<String, Object?> toJson() => {'id': id, 'description': description};
static JMdictMisc fromJson(Map<String, Object?> json) =>
JMdictMisc.values.firstWhere(

View File

@@ -202,14 +202,11 @@ enum JMdictPOS {
String get shortDescription => _shortDescription ?? description;
static JMdictPOS fromId(String id) => JMdictPOS.values.firstWhere(
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
(e) => e.id == id,
orElse: () => throw Exception('Unknown id: $id'),
);
Map<String, Object?> toJson() => {
'id': id,
'description': description,
};
Map<String, Object?> toJson() => {'id': id, 'description': description};
static JMdictPOS fromJson(Map<String, Object?> json) =>
JMdictPOS.values.firstWhere(

View File

@@ -15,10 +15,7 @@ enum JMdictReadingInfo {
final String id;
final String description;
const JMdictReadingInfo({
required this.id,
required this.description,
});
const JMdictReadingInfo({required this.id, required this.description});
static JMdictReadingInfo fromId(String id) =>
JMdictReadingInfo.values.firstWhere(
@@ -26,10 +23,7 @@ enum JMdictReadingInfo {
orElse: () => throw Exception('Unknown id: $id'),
);
Map<String, Object?> toJson() => {
'id': id,
'description': description,
};
Map<String, Object?> toJson() => {'id': id, 'description': description};
static JMdictReadingInfo fromJson(Map<String, Object?> json) =>
JMdictReadingInfo.values.firstWhere(

View File

@@ -26,19 +26,14 @@ class KanjiSearchRadical extends Equatable {
});
@override
List<Object> get props => [
symbol,
this.names,
forms,
meanings,
];
List<Object> get props => [symbol, names, forms, meanings];
Map<String, dynamic> toJson() => {
'symbol': symbol,
'names': names,
'forms': forms,
'meanings': meanings,
};
'symbol': symbol,
'names': names,
'forms': forms,
'meanings': meanings,
};
factory KanjiSearchRadical.fromJson(Map<String, dynamic> json) {
return KanjiSearchRadical(

View File

@@ -89,46 +89,46 @@ class KanjiSearchResult extends Equatable {
@override
// ignore: public_member_api_docs
List<Object?> get props => [
taughtIn,
jlptLevel,
newspaperFrequencyRank,
strokeCount,
meanings,
kunyomi,
onyomi,
// kunyomiExamples,
// onyomiExamples,
radical,
parts,
codepoints,
kanji,
nanori,
alternativeLanguageReadings,
strokeMiscounts,
queryCodes,
dictionaryReferences,
];
taughtIn,
jlptLevel,
newspaperFrequencyRank,
strokeCount,
meanings,
kunyomi,
onyomi,
// kunyomiExamples,
// onyomiExamples,
radical,
parts,
codepoints,
kanji,
nanori,
alternativeLanguageReadings,
strokeMiscounts,
queryCodes,
dictionaryReferences,
];
Map<String, dynamic> toJson() => {
'kanji': kanji,
'taughtIn': taughtIn,
'jlptLevel': jlptLevel,
'newspaperFrequencyRank': newspaperFrequencyRank,
'strokeCount': strokeCount,
'meanings': meanings,
'kunyomi': kunyomi,
'onyomi': onyomi,
// 'onyomiExamples': onyomiExamples,
// 'kunyomiExamples': kunyomiExamples,
'radical': radical?.toJson(),
'parts': parts,
'codepoints': codepoints,
'nanori': nanori,
'alternativeLanguageReadings': alternativeLanguageReadings,
'strokeMiscounts': strokeMiscounts,
'queryCodes': queryCodes,
'dictionaryReferences': dictionaryReferences,
};
'kanji': kanji,
'taughtIn': taughtIn,
'jlptLevel': jlptLevel,
'newspaperFrequencyRank': newspaperFrequencyRank,
'strokeCount': strokeCount,
'meanings': meanings,
'kunyomi': kunyomi,
'onyomi': onyomi,
// 'onyomiExamples': onyomiExamples,
// 'kunyomiExamples': kunyomiExamples,
'radical': radical?.toJson(),
'parts': parts,
'codepoints': codepoints,
'nanori': nanori,
'alternativeLanguageReadings': alternativeLanguageReadings,
'strokeMiscounts': strokeMiscounts,
'queryCodes': queryCodes,
'dictionaryReferences': dictionaryReferences,
};
factory KanjiSearchResult.fromJson(Map<String, dynamic> json) {
return KanjiSearchResult(
@@ -156,23 +156,20 @@ class KanjiSearchResult extends Equatable {
nanori: (json['nanori'] as List).map((e) => e as String).toList(),
alternativeLanguageReadings:
(json['alternativeLanguageReadings'] as Map<String, dynamic>).map(
(key, value) => MapEntry(
key,
(value as List).map((e) => e as String).toList(),
),
),
strokeMiscounts:
(json['strokeMiscounts'] as List).map((e) => e as int).toList(),
(key, value) =>
MapEntry(key, (value as List).map((e) => e as String).toList()),
),
strokeMiscounts: (json['strokeMiscounts'] as List)
.map((e) => e as int)
.toList(),
queryCodes: (json['queryCodes'] as Map<String, dynamic>).map(
(key, value) => MapEntry(
key,
(value as List).map((e) => e as String).toList(),
),
(key, value) =>
MapEntry(key, (value as List).map((e) => e as String).toList()),
),
dictionaryReferences:
(json['dictionaryReferences'] as Map<String, dynamic>).map(
(key, value) => MapEntry(key, value as String),
),
(key, value) => MapEntry(key, value as String),
),
);
}
}

View File

@@ -1,5 +1,6 @@
import 'package:jadb/table_names/jmdict.dart';
import 'package:jadb/table_names/kanjidic.dart';
import 'package:jadb/table_names/kanjivg.dart';
import 'package:jadb/table_names/radkfile.dart';
import 'package:jadb/table_names/tanos_jlpt.dart';
import 'package:sqflite_common/sqlite_api.dart';
@@ -7,33 +8,36 @@ import 'package:sqflite_common/sqlite_api.dart';
Future<void> verifyTablesWithDbConnection(DatabaseExecutor db) async {
final Set<String> tables = await db
.query(
'sqlite_master',
columns: ['name'],
where: 'type = ?',
whereArgs: ['table'],
)
'sqlite_master',
columns: ['name'],
where: 'type = ?',
whereArgs: ['table'],
)
.then((result) {
return result.map((row) => row['name'] as String).toSet();
});
return result.map((row) => row['name'] as String).toSet();
});
final Set<String> expectedTables = {
...JMdictTableNames.allTables,
...KANJIDICTableNames.allTables,
...RADKFILETableNames.allTables,
...TanosJLPTTableNames.allTables,
...KanjiVGTableNames.allTables,
};
final missingTables = expectedTables.difference(tables);
if (missingTables.isNotEmpty) {
throw Exception([
'Missing tables:',
missingTables.map((table) => ' - $table').join('\n'),
'',
'Found tables:\n',
tables.map((table) => ' - $table').join('\n'),
'',
'Please ensure the database is correctly set up.',
].join('\n'));
throw Exception(
[
'Missing tables:',
missingTables.map((table) => ' - $table').join('\n'),
'',
'Found tables:\n',
tables.map((table) => ' - $table').join('\n'),
'',
'Please ensure the database is correctly set up.',
].join('\n'),
);
}
}
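The table verification above queries `sqlite_master` for all table names, diffs them against the expected set, and throws a descriptive error listing both the missing and the found tables. The same check sketched with Python's standard `sqlite3` module (the table name and error type below are illustrative):

```python
import sqlite3

def verify_tables(conn: sqlite3.Connection, expected: set[str]) -> None:
    """Raise if any expected table is missing from the SQLite schema."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = ?", ("table",)
    ).fetchall()
    found = {name for (name,) in rows}
    missing = expected - found
    if missing:
        raise RuntimeError(
            f"Missing tables: {sorted(missing)}; found: {sorted(found)}"
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE character (literal TEXT PRIMARY KEY)")
verify_tables(conn, {"character"})  # no exception: schema is complete
```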

View File

@@ -0,0 +1,62 @@
enum WordSearchMatchSpanType { kanji, kana, sense }
/// A span of a word search result that corresponds to a match for a kanji, kana, or sense.
class WordSearchMatchSpan {
/// Which subtype of the word search result this span corresponds to - either a kanji, a kana, or a sense.
final WordSearchMatchSpanType spanType;
/// The index of the kanji/kana/sense in the word search result that this span corresponds to.
final int index;
/// When matching a 'sense', this is the index of the English definition in that sense that this span corresponds to. Otherwise, this is always 0.
final int subIndex;
/// The start of the span (inclusive)
final int start;
/// The end of the span (exclusive), matching Dart's [Match.end].
final int end;
WordSearchMatchSpan({
required this.spanType,
required this.index,
required this.start,
required this.end,
this.subIndex = 0,
});
@override
String toString() {
return 'WordSearchMatchSpan(spanType: $spanType, index: $index, subIndex: $subIndex, start: $start, end: $end)';
}
Map<String, Object?> toJson() => {
'spanType': spanType.toString().split('.').last,
'index': index,
'subIndex': subIndex,
'start': start,
'end': end,
};
factory WordSearchMatchSpan.fromJson(Map<String, dynamic> json) =>
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.values.firstWhere(
(e) => e.toString().split('.').last == json['spanType'],
),
index: json['index'] as int,
subIndex: (json['subIndex'] as int?) ?? 0,
start: json['start'] as int,
end: json['end'] as int,
);
@override
int get hashCode => Object.hash(spanType, index, subIndex, start, end);
@override
bool operator ==(Object other) {
if (identical(this, other)) return true;
return other is WordSearchMatchSpan &&
other.spanType == spanType &&
other.index == index &&
other.subIndex == subIndex &&
other.start == start &&
other.end == end;
}
}
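The span-extraction idea behind these match spans: escape the search keyword so regex metacharacters match literally, then record each occurrence as a half-open `[start, end)` range. A minimal Python sketch (Dart's `Match.start`/`Match.end` pair behaves the same way; the sample strings are illustrative):

```python
import re

def find_match_spans(text: str, keyword: str) -> list[tuple[int, int]]:
    """Return (start, end) pairs, end exclusive, for every literal occurrence."""
    # re.escape mirrors Dart's RegExp.escape: '?' and '*' match literally.
    pattern = re.compile(re.escape(keyword))
    return [(m.start(), m.end()) for m in pattern.finditer(text)]

print(find_match_spans("to look up; to look up a word", "look up"))
# [(3, 10), (15, 22)]
```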

View File

@@ -1,9 +1,12 @@
import 'package:jadb/models/common/jlpt_level.dart';
import 'package:jadb/models/jmdict/jmdict_kanji_info.dart';
import 'package:jadb/models/jmdict/jmdict_reading_info.dart';
import 'package:jadb/models/word_search/word_search_match_span.dart';
import 'package:jadb/models/word_search/word_search_ruby.dart';
import 'package:jadb/models/word_search/word_search_sense.dart';
import 'package:jadb/models/word_search/word_search_sources.dart';
import 'package:jadb/search/word_search/word_search.dart';
import 'package:jadb/util/romaji_transliteration.dart';
/// A class representing a single dictionary entry from a word search.
class WordSearchResult {
@@ -34,7 +37,44 @@ class WordSearchResult {
/// A class listing the sources used to make up the data for this word search result.
final WordSearchSources sources;
const WordSearchResult({
/// A list of spans, specifying which part of this word result matched the search keyword.
///
/// Note that this is considered ephemeral data - it does not originate from the dictionary,
/// and unlike the rest of the class it varies based on external information (the searchword).
/// It will *NOT* be exported to JSON, but can be reinferred by invoking [inferMatchSpans] with
/// the original searchword.
List<WordSearchMatchSpan>? matchSpans;
/// All contents of [japanese], transliterated to romaji
List<String> get romaji => japanese
.map((word) => transliterateKanaToLatin(word.furigana ?? word.base))
.toList();
/// All contents of [japanese], where the furigana has either been transliterated to romaji, or
/// contains the furigana transliteration of [WordSearchRuby.base].
List<WordSearchRuby> get romajiRubys => japanese
.map(
(word) => WordSearchRuby(
base: word.base,
furigana: word.furigana != null
? transliterateKanaToLatin(word.furigana!)
: transliterateKanaToLatin(word.base),
),
)
.toList();
/// The same list of spans as [matchSpans], but the positions have been adjusted for romaji conversion
///
/// This is mostly useful in conjunction with [romajiRubys].
List<WordSearchMatchSpan>? get romajiMatchSpans {
if (matchSpans == null) {
return null;
}
throw UnimplementedError('Not yet implemented');
}
WordSearchResult({
required this.score,
required this.entryId,
required this.isCommon,
@@ -44,21 +84,22 @@ class WordSearchResult {
required this.senses,
required this.jlptLevel,
required this.sources,
this.matchSpans,
});
Map<String, dynamic> toJson() => {
'_score': score,
'entryId': entryId,
'isCommon': isCommon,
'japanese': japanese.map((e) => e.toJson()).toList(),
'kanjiInfo':
kanjiInfo.map((key, value) => MapEntry(key, value.toJson())),
'readingInfo':
readingInfo.map((key, value) => MapEntry(key, value.toJson())),
'senses': senses.map((e) => e.toJson()).toList(),
'jlptLevel': jlptLevel.toJson(),
'sources': sources.toJson(),
};
'_score': score,
'entryId': entryId,
'isCommon': isCommon,
'japanese': japanese.map((e) => e.toJson()).toList(),
'kanjiInfo': kanjiInfo.map((key, value) => MapEntry(key, value.toJson())),
'readingInfo': readingInfo.map(
(key, value) => MapEntry(key, value.toJson()),
),
'senses': senses.map((e) => e.toJson()).toList(),
'jlptLevel': jlptLevel.toJson(),
'sources': sources.toJson(),
};
factory WordSearchResult.fromJson(Map<String, dynamic> json) =>
WordSearchResult(
@@ -81,17 +122,88 @@ class WordSearchResult {
sources: WordSearchSources.fromJson(json['sources']),
);
String _formatJapaneseWord(WordSearchRuby word) =>
word.furigana == null ? word.base : "${word.base} (${word.furigana})";
factory WordSearchResult.empty() => WordSearchResult(
score: 0,
entryId: 0,
isCommon: false,
japanese: [],
kanjiInfo: {},
readingInfo: {},
senses: [],
jlptLevel: JlptLevel.none,
sources: WordSearchSources.empty(),
);
/// Infers which part(s) of this word search result matched the search keyword, and populates [matchSpans] accordingly.
void inferMatchSpans(
String searchword, {
SearchMode searchMode = SearchMode.auto,
}) {
// TODO: handle wildcards like '?' and '*' when that becomes supported in the search.
// TODO: If the searchMode is provided, we can use that to narrow down which part of the word search results to look at.
final regex = RegExp(RegExp.escape(searchword));
final matchSpans = <WordSearchMatchSpan>[];
for (final (i, word) in japanese.indexed) {
final baseMatches = regex.allMatches(word.base);
matchSpans.addAll(
baseMatches.map(
(match) => WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kanji,
index: i,
start: match.start,
end: match.end,
),
),
);
if (word.furigana != null) {
final furiganaMatches = regex.allMatches(word.furigana!);
matchSpans.addAll(
furiganaMatches.map(
(match) => WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kana,
index: i,
start: match.start,
end: match.end,
),
),
);
}
}
for (final (i, sense) in senses.indexed) {
for (final (k, definition) in sense.englishDefinitions.indexed) {
final definitionMatches = regex.allMatches(definition);
matchSpans.addAll(
definitionMatches.map(
(match) => WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.sense,
index: i,
subIndex: k,
start: match.start,
end: match.end,
),
),
);
}
}
this.matchSpans = matchSpans;
}
static String _formatJapaneseWord(WordSearchRuby word) =>
word.furigana == null ? word.base : '${word.base} (${word.furigana})';
@override
String toString() {
final japaneseWord = _formatJapaneseWord(japanese[0]);
final isCommonString = isCommon ? '(C)' : '';
final jlptLevelString = "(${jlptLevel.toString()})";
final jlptLevelString = '(${jlptLevel.toString()})';
return '''
${score} | [$entryId] $japaneseWord $isCommonString $jlptLevelString
$score | [$entryId] $japaneseWord $isCommonString $jlptLevelString
Other forms: ${japanese.skip(1).map(_formatJapaneseWord).join(', ')}
Senses: ${senses.map((s) => s.englishDefinitions).join(', ')}
'''

View File

@@ -6,18 +6,12 @@ class WordSearchRuby {
/// Furigana, if applicable.
String? furigana;
WordSearchRuby({
required this.base,
this.furigana,
});
WordSearchRuby({required this.base, this.furigana});
Map<String, dynamic> toJson() => {
'base': base,
'furigana': furigana,
};
Map<String, dynamic> toJson() => {'base': base, 'furigana': furigana};
factory WordSearchRuby.fromJson(Map<String, dynamic> json) => WordSearchRuby(
base: json['base'] as String,
furigana: json['furigana'] as String?,
);
base: json['base'] as String,
furigana: json['furigana'] as String?,
);
}

View File

@@ -71,18 +71,18 @@ class WordSearchSense {
languageSource.isEmpty;
Map<String, dynamic> toJson() => {
'englishDefinitions': englishDefinitions,
'partsOfSpeech': partsOfSpeech.map((e) => e.toJson()).toList(),
'seeAlso': seeAlso.map((e) => e.toJson()).toList(),
'antonyms': antonyms.map((e) => e.toJson()).toList(),
'restrictedToReading': restrictedToReading,
'restrictedToKanji': restrictedToKanji,
'fields': fields.map((e) => e.toJson()).toList(),
'dialects': dialects.map((e) => e.toJson()).toList(),
'misc': misc.map((e) => e.toJson()).toList(),
'info': info,
'languageSource': languageSource,
};
'englishDefinitions': englishDefinitions,
'partsOfSpeech': partsOfSpeech.map((e) => e.toJson()).toList(),
'seeAlso': seeAlso.map((e) => e.toJson()).toList(),
'antonyms': antonyms.map((e) => e.toJson()).toList(),
'restrictedToReading': restrictedToReading,
'restrictedToKanji': restrictedToKanji,
'fields': fields.map((e) => e.toJson()).toList(),
'dialects': dialects.map((e) => e.toJson()).toList(),
'misc': misc.map((e) => e.toJson()).toList(),
'info': info,
'languageSource': languageSource,
};
factory WordSearchSense.fromJson(Map<String, dynamic> json) =>
WordSearchSense(
@@ -104,8 +104,9 @@ class WordSearchSense {
dialects: (json['dialects'] as List)
.map((e) => JMdictDialect.fromJson(e))
.toList(),
misc:
(json['misc'] as List).map((e) => JMdictMisc.fromJson(e)).toList(),
misc: (json['misc'] as List)
.map((e) => JMdictMisc.fromJson(e))
.toList(),
info: List<String>.from(json['info']),
languageSource: (json['languageSource'] as List)
.map((e) => WordSearchSenseLanguageSource.fromJson(e))

View File

@@ -13,11 +13,11 @@ class WordSearchSenseLanguageSource {
});
Map<String, Object?> toJson() => {
'language': language,
'phrase': phrase,
'fullyDescribesSense': fullyDescribesSense,
'constructedFromSmallerWords': constructedFromSmallerWords,
};
'language': language,
'phrase': phrase,
'fullyDescribesSense': fullyDescribesSense,
'constructedFromSmallerWords': constructedFromSmallerWords,
};
factory WordSearchSenseLanguageSource.fromJson(Map<String, dynamic> json) =>
WordSearchSenseLanguageSource(

View File

@@ -7,20 +7,13 @@ class WordSearchSources {
/// Whether JMnedict was used.
final bool jmnedict;
const WordSearchSources({
this.jmdict = true,
this.jmnedict = false,
});
const WordSearchSources({this.jmdict = true, this.jmnedict = false});
Map<String, Object?> get sqlValue => {
'jmdict': jmdict,
'jmnedict': jmnedict,
};
factory WordSearchSources.empty() => const WordSearchSources();
Map<String, dynamic> toJson() => {
'jmdict': jmdict,
'jmnedict': jmnedict,
};
Map<String, Object?> get sqlValue => {'jmdict': jmdict, 'jmnedict': jmnedict};
Map<String, dynamic> toJson() => {'jmdict': jmdict, 'jmnedict': jmnedict};
factory WordSearchSources.fromJson(Map<String, dynamic> json) =>
WordSearchSources(

View File

@@ -1,3 +1,5 @@
import 'package:jadb/models/word_search/word_search_result.dart';
/// A cross-reference entry from one word-result to another entry.
class WordSearchXrefEntry {
/// The ID of the entry that this entry cross-references to.
@@ -13,19 +15,24 @@ class WordSearchXrefEntry {
/// database (and hence might be incorrect).
final bool ambiguous;
/// The result of the cross-reference, may or may not be included in the query.
final WordSearchResult? xrefResult;
const WordSearchXrefEntry({
required this.entryId,
required this.ambiguous,
required this.baseWord,
required this.furigana,
required this.xrefResult,
});
Map<String, dynamic> toJson() => {
'entryId': entryId,
'ambiguous': ambiguous,
'baseWord': baseWord,
'furigana': furigana,
};
'entryId': entryId,
'ambiguous': ambiguous,
'baseWord': baseWord,
'furigana': furigana,
'xrefResult': xrefResult?.toJson(),
};
factory WordSearchXrefEntry.fromJson(Map<String, dynamic> json) =>
WordSearchXrefEntry(
@@ -33,5 +40,6 @@ class WordSearchXrefEntry {
ambiguous: json['ambiguous'] as bool,
baseWord: json['baseWord'] as String,
furigana: json['furigana'] as String?,
xrefResult: null,
);
}

View File

@@ -1,12 +1,10 @@
import 'package:jadb/models/kanji_search/kanji_search_result.dart';
import 'package:jadb/models/verify_tables.dart';
import 'package:jadb/models/word_search/word_search_result.dart';
import 'package:jadb/models/kanji_search/kanji_search_result.dart';
import 'package:jadb/search/filter_kanji.dart';
import 'package:jadb/search/kanji_search.dart';
import 'package:jadb/search/radical_search.dart';
import 'package:jadb/search/word_search/word_search.dart';
import 'package:jadb/search/kanji_search.dart';
import 'package:sqflite_common/sqlite_api.dart';
extension JaDBConnection on DatabaseExecutor {
@@ -19,38 +17,45 @@ extension JaDBConnection on DatabaseExecutor {
Future<KanjiSearchResult?> jadbSearchKanji(String kanji) =>
searchKanjiWithDbConnection(this, kanji);
/// Search for a kanji in the database.
Future<Map<String, KanjiSearchResult>> jadbGetManyKanji(Set<String> kanji) =>
searchManyKanjiWithDbConnection(this, kanji);
/// Filter a list of characters, and return the ones that are listed in the kanji dictionary.
Future<List<String>> filterKanji(
List<String> kanji, {
bool deduplicate = false,
}) =>
filterKanjiWithDbConnection(this, kanji, deduplicate);
}) => filterKanjiWithDbConnection(this, kanji, deduplicate);
/// Search for a word in the database.
Future<List<WordSearchResult>?> jadbSearchWord(
String word, {
SearchMode searchMode = SearchMode.Auto,
SearchMode searchMode = SearchMode.auto,
int page = 0,
int pageSize = 10,
}) =>
searchWordWithDbConnection(
this,
word,
searchMode,
page,
pageSize,
);
int? pageSize,
}) => searchWordWithDbConnection(
this,
word,
searchMode: searchMode,
page: page,
pageSize: pageSize,
);
/// Get a single word entry by its ID.
Future<WordSearchResult?> jadbGetWordById(int id) =>
getWordByIdWithDbConnection(this, id);
/// Get a list of words by their IDs.
///
/// IDs for which no result is found are omitted from the returned value.
Future<Map<int, WordSearchResult>> jadbGetManyWordsByIds(Set<int> ids) =>
getWordsByIdsWithDbConnection(this, ids);
/// Search for a word in the database, and return the count of results.
Future<int?> jadbSearchWordCount(
String word, {
SearchMode searchMode = SearchMode.Auto,
}) =>
searchWordCountWithDbConnection(this, word, searchMode);
SearchMode searchMode = SearchMode.auto,
}) => searchWordCountWithDbConnection(this, word, searchMode: searchMode);
/// Given a list of radicals, search which kanji contains all
/// of the radicals, find their other radicals, and return those.

View File

@@ -1,22 +1,32 @@
import 'package:jadb/table_names/kanjidic.dart';
import 'package:sqflite_common/sqflite.dart';
/// Filters a list of kanji characters, returning only those that exist in the database.
///
/// If [deduplicate] is true, each kanji appears at most once in the returned list, in input order.
Future<List<String>> filterKanjiWithDbConnection(
DatabaseExecutor connection,
List<String> kanji,
bool deduplicate,
) async {
final Set<String> filteredKanji = await connection.rawQuery(
'''
final Set<String> filteredKanji = await connection
.rawQuery('''
SELECT "literal"
FROM "${KANJIDICTableNames.character}"
WHERE "literal" IN (${kanji.map((_) => '?').join(',')})
''',
kanji,
).then((value) => value.map((e) => e['literal'] as String).toSet());
''', kanji)
.then((value) => value.map((e) => e['literal'] as String).toSet());
if (deduplicate) {
return filteredKanji.toList();
final List<String> result = [];
final Set<String> seen = {};
for (final k in kanji) {
if (filteredKanji.contains(k) && !seen.contains(k)) {
result.add(k);
seen.add(k);
}
}
return result;
} else {
return kanji.where((k) => filteredKanji.contains(k)).toList();
}
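The deduplication branch above keeps only the first occurrence of each kanji while preserving input order. The same idea as a language-agnostic sketch (Python for brevity; `in_db` is a hypothetical stand-in for the set returned by the KANJIDIC lookup):

```python
def filter_kanji(kanji, in_db, deduplicate=False):
    """Keep only characters present in `in_db`, optionally deduplicating.

    `in_db` stands in for the set returned by the SQL query; the real
    implementation filters against the KANJIDIC character table instead.
    """
    if not deduplicate:
        return [k for k in kanji if k in in_db]
    seen = set()
    result = []
    for k in kanji:
        # First occurrence wins, so input order is preserved.
        if k in in_db and k not in seen:
            seen.add(k)
            result.append(k)
    return result

print(filter_kanji(list("日日本x日"), {"日", "本"}, deduplicate=True))  # → ['日', '本']
```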

View File

@@ -1,143 +1,190 @@
import 'package:collection/collection.dart';
import 'package:jadb/table_names/kanjidic.dart';
import 'package:jadb/table_names/radkfile.dart';
import 'package:jadb/models/kanji_search/kanji_search_radical.dart';
import 'package:jadb/models/kanji_search/kanji_search_result.dart';
import 'package:jadb/table_names/kanjidic.dart';
import 'package:jadb/table_names/radkfile.dart';
import 'package:sqflite_common/sqflite.dart';
Future<List<Map<String, Object?>>> _charactersQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.character,
where: 'literal = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _codepointsQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.codepoint,
where: 'kanji = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _kunyomisQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.kunyomi,
where: 'kanji = ?',
whereArgs: [kanji],
orderBy: 'orderNum',
);
Future<List<Map<String, Object?>>> _onyomisQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.onyomi,
where: 'kanji = ?',
whereArgs: [kanji],
orderBy: 'orderNum',
);
Future<List<Map<String, Object?>>> _meaningsQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.meaning,
where: 'kanji = ? AND language = ?',
whereArgs: [kanji, 'eng'],
orderBy: 'orderNum',
);
Future<List<Map<String, Object?>>> _nanorisQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.nanori,
where: 'kanji = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _dictionaryReferencesQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.dictionaryReference,
where: 'kanji = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _queryCodesQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.queryCode,
where: 'kanji = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _radicalsQuery(
DatabaseExecutor connection,
String kanji,
) => connection.rawQuery(
'''
SELECT DISTINCT
"XREF__KANJIDIC_Radical__RADKFILE"."radicalSymbol" AS "symbol",
"names"
FROM "${KANJIDICTableNames.radical}"
JOIN "XREF__KANJIDIC_Radical__RADKFILE" USING ("radicalId")
LEFT JOIN (
SELECT "radicalId", group_concat("name") AS "names"
FROM "${KANJIDICTableNames.radicalName}"
GROUP BY "radicalId"
) USING ("radicalId")
WHERE "${KANJIDICTableNames.radical}"."kanji" = ?
''',
[kanji],
);
Future<List<Map<String, Object?>>> _partsQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
RADKFILETableNames.radkfile,
where: 'kanji = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _readingsQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.reading,
where: 'kanji = ?',
whereArgs: [kanji],
);
Future<List<Map<String, Object?>>> _strokeMiscountsQuery(
DatabaseExecutor connection,
String kanji,
) => connection.query(
KANJIDICTableNames.strokeMiscount,
where: 'kanji = ?',
whereArgs: [kanji],
);
// Future<List<Map<String, Object?>>> _variantsQuery(
// DatabaseExecutor connection,
// String kanji,
// ) => connection.query(
// KANJIDICTableNames.variant,
// where: 'kanji = ?',
// whereArgs: [kanji],
// );
/// Searches for a kanji character and returns its details, or null if the kanji is not found in the database.
Future<KanjiSearchResult?> searchKanjiWithDbConnection(
DatabaseExecutor connection,
String kanji,
) async {
late final List<Map<String, Object?>> characters;
final characters_query = connection.query(
KANJIDICTableNames.character,
where: "literal = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> codepoints;
final codepoints_query = connection.query(
KANJIDICTableNames.codepoint,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> kunyomis;
final kunyomis_query = connection.query(
KANJIDICTableNames.kunyomi,
where: "kanji = ?",
whereArgs: [kanji],
orderBy: "orderNum",
);
late final List<Map<String, Object?>> onyomis;
final onyomis_query = connection.query(
KANJIDICTableNames.onyomi,
where: "kanji = ?",
whereArgs: [kanji],
orderBy: "orderNum",
);
late final List<Map<String, Object?>> meanings;
final meanings_query = connection.query(
KANJIDICTableNames.meaning,
where: "kanji = ? AND language = ?",
whereArgs: [kanji, 'eng'],
orderBy: "orderNum",
);
late final List<Map<String, Object?>> nanoris;
final nanoris_query = connection.query(
KANJIDICTableNames.nanori,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> dictionary_references;
final dictionary_references_query = connection.query(
KANJIDICTableNames.dictionaryReference,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> query_codes;
final query_codes_query = connection.query(
KANJIDICTableNames.queryCode,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> dictionaryReferences;
late final List<Map<String, Object?>> queryCodes;
late final List<Map<String, Object?>> radicals;
final radicals_query = connection.rawQuery(
'''
SELECT DISTINCT
"XREF__KANJIDIC_Radical__RADKFILE"."radicalSymbol" AS "symbol",
"names"
FROM "${KANJIDICTableNames.radical}"
JOIN "XREF__KANJIDIC_Radical__RADKFILE" USING ("radicalId")
LEFT JOIN (
SELECT "radicalId", group_concat("name") AS "names"
FROM "${KANJIDICTableNames.radicalName}"
GROUP BY "radicalId"
) USING ("radicalId")
WHERE "${KANJIDICTableNames.radical}"."kanji" = ?
''',
[kanji],
);
late final List<Map<String, Object?>> parts;
final parts_query = connection.query(
RADKFILETableNames.radkfile,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> readings;
final readings_query = connection.query(
KANJIDICTableNames.reading,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> stroke_miscounts;
final stroke_miscounts_query = connection.query(
KANJIDICTableNames.strokeMiscount,
where: "kanji = ?",
whereArgs: [kanji],
);
late final List<Map<String, Object?>> strokeMiscounts;
// TODO: add variant data to result
// late final List<Map<String, Object?>> variants;
// final variants_query = connection.query(
// KANJIDICTableNames.variant,
// where: "kanji = ?",
// whereArgs: [kanji],
// );
// TODO: Search for kunyomi and onyomi usage of the characters
// from JMDict. We'll need to fuzzy aquery JMDict_KanjiElement for mathces,
// filter JMdict_ReadingElement for kunyomi/onyomi, and then sort the main entry
// by JLPT, news frequency, etc.
// TODO: Search for kunyomi and onyomi usage of the characters
// from JMDict. We'll need to fuzzy query JMDict_KanjiElement for matches,
// filter JMdict_ReadingElement for kunyomi/onyomi, and then sort the main entry
// by JLPT, news frequency, etc.
await characters_query.then((value) => characters = value);
await _charactersQuery(connection, kanji).then((value) => characters = value);
if (characters.isEmpty) {
return null;
}
await Future.wait({
codepoints_query.then((value) => codepoints = value),
kunyomis_query.then((value) => kunyomis = value),
onyomis_query.then((value) => onyomis = value),
meanings_query.then((value) => meanings = value),
nanoris_query.then((value) => nanoris = value),
dictionary_references_query.then((value) => dictionary_references = value),
query_codes_query.then((value) => query_codes = value),
radicals_query.then((value) => radicals = value),
parts_query.then((value) => parts = value),
readings_query.then((value) => readings = value),
stroke_miscounts_query.then((value) => stroke_miscounts = value),
_codepointsQuery(connection, kanji).then((value) => codepoints = value),
_kunyomisQuery(connection, kanji).then((value) => kunyomis = value),
_onyomisQuery(connection, kanji).then((value) => onyomis = value),
_meaningsQuery(connection, kanji).then((value) => meanings = value),
_nanorisQuery(connection, kanji).then((value) => nanoris = value),
_dictionaryReferencesQuery(
connection,
kanji,
).then((value) => dictionaryReferences = value),
_queryCodesQuery(connection, kanji).then((value) => queryCodes = value),
_radicalsQuery(connection, kanji).then((value) => radicals = value),
_partsQuery(connection, kanji).then((value) => parts = value),
_readingsQuery(connection, kanji).then((value) => readings = value),
_strokeMiscountsQuery(
connection,
kanji,
).then((value) => strokeMiscounts = value),
// variants_query.then((value) => variants = value),
});
@@ -156,9 +203,7 @@ Future<KanjiSearchResult?> searchKanjiWithDbConnection(
: null;
final alternativeLanguageReadings = readings
.groupListsBy(
(item) => item['type'] as String,
)
.groupListsBy((item) => item['type'] as String)
.map(
(key, value) => MapEntry(
key,
@@ -167,20 +212,16 @@ Future<KanjiSearchResult?> searchKanjiWithDbConnection(
);
// TODO: Add `SKIPMisclassification` to the entries
final queryCodes = query_codes
.groupListsBy(
(item) => item['type'] as String,
)
final queryCodes_ = queryCodes
.groupListsBy((item) => item['type'] as String)
.map(
(key, value) => MapEntry(
key,
value.map((item) => item['code'] as String).toList(),
),
(key, value) =>
MapEntry(key, value.map((item) => item['code'] as String).toList()),
);
// TODO: Add `volume` and `page` to the entries
final dictionaryReferences = {
for (final entry in dictionary_references)
final dictionaryReferences_ = {
for (final entry in dictionaryReferences)
entry['type'] as String: entry['ref'] as String,
};
@@ -209,9 +250,33 @@ Future<KanjiSearchResult?> searchKanjiWithDbConnection(
},
nanori: nanoris.map((item) => item['nanori'] as String).toList(),
alternativeLanguageReadings: alternativeLanguageReadings,
strokeMiscounts:
stroke_miscounts.map((item) => item['strokeCount'] as int).toList(),
queryCodes: queryCodes,
dictionaryReferences: dictionaryReferences,
strokeMiscounts: strokeMiscounts
.map((item) => item['strokeCount'] as int)
.toList(),
queryCodes: queryCodes_,
dictionaryReferences: dictionaryReferences_,
);
}
// TODO: Batch the per-kanji lookups with `IN` clauses to reduce the number of queries
/// Searches for multiple kanji at once, returning a map of kanji to their search results.
Future<Map<String, KanjiSearchResult>> searchManyKanjiWithDbConnection(
DatabaseExecutor connection,
Set<String> kanji,
) async {
if (kanji.isEmpty) {
return {};
}
final results = <String, KanjiSearchResult>{};
for (final k in kanji) {
final result = await searchKanjiWithDbConnection(connection, k);
if (result != null) {
results[k] = result;
}
}
return results;
}

View File

@@ -3,10 +3,16 @@ import 'package:sqflite_common/sqlite_api.dart';
// TODO: validate that the list of radicals all are valid radicals
/// Returns a list of radicals that are part of any kanji that contains all of the input radicals.
///
/// This can be used to limit the choices of additional radicals provided to a user,
/// so that any choice they make will still yield at least one kanji.
Future<List<String>> searchRemainingRadicalsWithDbConnection(
DatabaseExecutor connection,
List<String> radicals,
) async {
final distinctRadicals = radicals.toSet();
final queryResult = await connection.rawQuery(
'''
SELECT DISTINCT "radical"
@@ -14,39 +20,37 @@ Future<List<String>> searchRemainingRadicalsWithDbConnection(
WHERE "kanji" IN (
SELECT "kanji"
FROM "${RADKFILETableNames.radkfile}"
WHERE "radical" IN (${List.filled(radicals.length, '?').join(',')})
WHERE "radical" IN (${List.filled(distinctRadicals.length, '?').join(',')})
GROUP BY "kanji"
HAVING COUNT(DISTINCT "radical") = ?
)
''',
[
...radicals,
radicals.length,
],
[...distinctRadicals, distinctRadicals.length],
);
final remainingRadicals =
queryResult.map((row) => row['radical'] as String).toList();
final remainingRadicals = queryResult
.map((row) => row['radical'] as String)
.toList();
return remainingRadicals;
}
/// Returns a list of kanji that contain all of the input radicals.
Future<List<String>> searchKanjiByRadicalsWithDbConnection(
DatabaseExecutor connection,
List<String> radicals,
) async {
final distinctRadicals = radicals.toSet();
final queryResult = await connection.rawQuery(
'''
SELECT "kanji"
FROM "${RADKFILETableNames.radkfile}"
WHERE "radical" IN (${List.filled(radicals.length, '?').join(',')})
WHERE "radical" IN (${List.filled(distinctRadicals.length, '?').join(',')})
GROUP BY "kanji"
HAVING COUNT(DISTINCT "radical") = ?
''',
[
...radicals,
radicals.length,
],
[...distinctRadicals, distinctRadicals.length],
);
final kanji = queryResult.map((row) => row['kanji'] as String).toList();
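Both radical searches deduplicate the input radicals, build a matching `IN (...)` placeholder list, and rely on `HAVING COUNT(DISTINCT "radical") = ?` so that only kanji containing every requested radical survive the grouping. A minimal sketch of that statement construction (Python, with a hypothetical in-memory table standing in for the RADKFILE data):

```python
import sqlite3

def kanji_with_all_radicals(conn, radicals):
    # Deduplicate first so the COUNT(DISTINCT ...) comparison is correct
    # even when the caller passes the same radical twice.
    distinct = list(dict.fromkeys(radicals))
    placeholders = ",".join("?" * len(distinct))
    rows = conn.execute(
        f'''
        SELECT "kanji"
        FROM "radkfile"
        WHERE "radical" IN ({placeholders})
        GROUP BY "kanji"
        HAVING COUNT(DISTINCT "radical") = ?
        ''',
        [*distinct, len(distinct)],
    )
    return [row[0] for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "radkfile" ("kanji" TEXT, "radical" TEXT)')
conn.executemany(
    'INSERT INTO "radkfile" VALUES (?, ?)',
    [("明", "日"), ("明", "月"), ("時", "日"), ("時", "寸")],
)
print(kanji_with_all_radicals(conn, ["日", "月", "日"]))  # only 明 has both radicals
```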

View File

@@ -1,6 +1,5 @@
import 'package:jadb/table_names/jmdict.dart';
import 'package:jadb/table_names/tanos_jlpt.dart';
import 'package:jadb/util/sqlite_utils.dart';
import 'package:sqflite_common/sqflite.dart';
class LinearWordQueryData {
@@ -25,6 +24,9 @@ class LinearWordQueryData {
final List<Map<String, Object?>> readingElementRestrictions;
final List<Map<String, Object?>> kanjiElementInfos;
final LinearWordQueryData? senseAntonymData;
final LinearWordQueryData? senseSeeAlsoData;
const LinearWordQueryData({
required this.senses,
required this.readingElements,
@@ -46,245 +48,368 @@ class LinearWordQueryData {
required this.readingElementInfos,
required this.readingElementRestrictions,
required this.kanjiElementInfos,
required this.senseAntonymData,
required this.senseSeeAlsoData,
});
}
Future<LinearWordQueryData> fetchLinearWordQueryData(
Future<List<Map<String, Object?>>> _sensesQuery(
DatabaseExecutor connection,
List<int> entryIds,
) async {
) => connection.query(
JMdictTableNames.sense,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
);
Future<List<Map<String, Object?>>> _readingelementsQuery(
DatabaseExecutor connection,
List<int> entryIds,
) => connection.query(
JMdictTableNames.readingElement,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
orderBy: 'orderNum',
);
Future<List<Map<String, Object?>>> _kanjielementsQuery(
DatabaseExecutor connection,
List<int> entryIds,
) => connection.query(
JMdictTableNames.kanjiElement,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
orderBy: 'orderNum',
);
Future<List<Map<String, Object?>>> _jlpttagsQuery(
DatabaseExecutor connection,
List<int> entryIds,
) => connection.query(
TanosJLPTTableNames.jlptTag,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
);
Future<List<Map<String, Object?>>> _commonentriesQuery(
DatabaseExecutor connection,
List<int> entryIds,
) => connection.query(
'JMdict_EntryCommon',
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
);
// Sense queries
Future<List<Map<String, Object?>>> _senseantonymsQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.rawQuery(
"""
SELECT
"${JMdictTableNames.senseAntonyms}".senseId,
"${JMdictTableNames.senseAntonyms}".ambiguous,
"${JMdictTableNames.senseAntonyms}".xrefEntryId,
"JMdict_BaseAndFurigana"."base",
"JMdict_BaseAndFurigana"."furigana"
FROM "${JMdictTableNames.senseAntonyms}"
JOIN "JMdict_BaseAndFurigana"
ON "${JMdictTableNames.senseAntonyms}"."xrefEntryId" = "JMdict_BaseAndFurigana"."entryId"
WHERE
"senseId" IN (${List.filled(senseIds.length, '?').join(',')})
AND "JMdict_BaseAndFurigana"."isFirst"
ORDER BY
"${JMdictTableNames.senseAntonyms}"."senseId",
"${JMdictTableNames.senseAntonyms}"."xrefEntryId"
""",
[...senseIds],
);
Future<List<Map<String, Object?>>> _senseseealsosQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.rawQuery(
"""
SELECT
"${JMdictTableNames.senseSeeAlso}"."senseId",
"${JMdictTableNames.senseSeeAlso}"."ambiguous",
"${JMdictTableNames.senseSeeAlso}"."xrefEntryId",
"JMdict_BaseAndFurigana"."base",
"JMdict_BaseAndFurigana"."furigana"
FROM "${JMdictTableNames.senseSeeAlso}"
JOIN "JMdict_BaseAndFurigana"
ON "${JMdictTableNames.senseSeeAlso}"."xrefEntryId" = "JMdict_BaseAndFurigana"."entryId"
WHERE
"senseId" IN (${List.filled(senseIds.length, '?').join(',')})
AND "JMdict_BaseAndFurigana"."isFirst"
ORDER BY
"${JMdictTableNames.senseSeeAlso}"."senseId",
"${JMdictTableNames.senseSeeAlso}"."xrefEntryId"
""",
[...senseIds],
);
Future<List<Map<String, Object?>>> _sensedialectsQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseDialect,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _sensefieldsQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseField,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _senseglossariesQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseGlossary,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _senseinfosQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseInfo,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _senselanguagesourcesQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseLanguageSource,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _sensemiscsQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseMisc,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _sensepossQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.sensePOS,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _senserestrictedtokanjisQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseRestrictedToKanji,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _senserestrictedtoreadingsQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
JMdictTableNames.senseRestrictedToReading,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
Future<List<Map<String, Object?>>> _examplesentencesQuery(
DatabaseExecutor connection,
List<int> senseIds,
) => connection.query(
'JMdict_ExampleSentence',
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
// Reading/kanji elements queries
Future<List<Map<String, Object?>>> _readingelementinfosQuery(
DatabaseExecutor connection,
List<int> readingIds,
) => connection.query(
JMdictTableNames.readingInfo,
where: '(elementId) IN (${List.filled(readingIds.length, '?').join(',')})',
whereArgs: readingIds,
);
Future<List<Map<String, Object?>>> _readingelementrestrictionsQuery(
DatabaseExecutor connection,
List<int> readingIds,
) => connection.query(
JMdictTableNames.readingRestriction,
where: '(elementId) IN (${List.filled(readingIds.length, '?').join(',')})',
whereArgs: readingIds,
);
Future<List<Map<String, Object?>>> _kanjielementinfosQuery(
DatabaseExecutor connection,
List<int> kanjiIds,
) => connection.query(
JMdictTableNames.kanjiInfo,
where: '(elementId) IN (${List.filled(kanjiIds.length, '?').join(',')})',
whereArgs: kanjiIds,
);
// Xref queries
Future<LinearWordQueryData?> _senseantonymdataQuery(
DatabaseExecutor connection,
List<int> entryIds,
) => fetchLinearWordQueryData(connection, entryIds, fetchXrefData: false);
Future<LinearWordQueryData?> _senseseealsodataQuery(
DatabaseExecutor connection,
List<int> entryIds,
) => fetchLinearWordQueryData(connection, entryIds, fetchXrefData: false);
// Full query
Future<LinearWordQueryData> fetchLinearWordQueryData(
DatabaseExecutor connection,
List<int> entryIds, {
bool fetchXrefData = true,
}) async {
late final List<Map<String, Object?>> senses;
final Future<List<Map<String, Object?>>> senses_query = connection.query(
JMdictTableNames.sense,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
);
late final List<Map<String, Object?>> readingElements;
final Future<List<Map<String, Object?>>> readingElements_query =
connection.query(
JMdictTableNames.readingElement,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
orderBy: 'orderNum',
);
late final List<Map<String, Object?>> kanjiElements;
final Future<List<Map<String, Object?>>> kanjiElements_query =
connection.query(
JMdictTableNames.kanjiElement,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
orderBy: 'orderNum',
);
late final List<Map<String, Object?>> jlptTags;
final Future<List<Map<String, Object?>>> jlptTags_query = connection.query(
TanosJLPTTableNames.jlptTag,
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
);
late final List<Map<String, Object?>> commonEntries;
final Future<List<Map<String, Object?>>> commonEntries_query =
connection.query(
'JMdict_EntryCommon',
where: 'entryId IN (${List.filled(entryIds.length, '?').join(',')})',
whereArgs: entryIds,
);
await Future.wait([
senses_query.then((value) => senses = value),
readingElements_query.then((value) => readingElements = value),
kanjiElements_query.then((value) => kanjiElements = value),
jlptTags_query.then((value) => jlptTags = value),
commonEntries_query.then((value) => commonEntries = value),
_sensesQuery(connection, entryIds).then((value) => senses = value),
_readingelementsQuery(
connection,
entryIds,
).then((value) => readingElements = value),
_kanjielementsQuery(
connection,
entryIds,
).then((value) => kanjiElements = value),
_jlpttagsQuery(connection, entryIds).then((value) => jlptTags = value),
_commonentriesQuery(
connection,
entryIds,
).then((value) => commonEntries = value),
]);
// Sense queries
final senseIds = senses.map((sense) => sense['senseId'] as int).toList();
late final List<Map<String, Object?>> senseAntonyms;
final Future<List<Map<String, Object?>>> senseAntonyms_query =
connection.rawQuery(
"""
SELECT
"${JMdictTableNames.senseAntonyms}".senseId,
"${JMdictTableNames.senseAntonyms}".ambiguous,
"${JMdictTableNames.senseAntonyms}".xrefEntryId,
"JMdict_BaseAndFurigana"."base",
"JMdict_BaseAndFurigana"."furigana"
FROM "${JMdictTableNames.senseAntonyms}"
JOIN "JMdict_BaseAndFurigana"
ON "${JMdictTableNames.senseAntonyms}"."xrefEntryId" = "JMdict_BaseAndFurigana"."entryId"
WHERE
"senseId" IN (${List.filled(senseIds.length, '?').join(',')})
AND "JMdict_BaseAndFurigana"."isFirst"
ORDER BY
"${JMdictTableNames.senseAntonyms}"."senseId",
"${JMdictTableNames.senseAntonyms}"."xrefEntryId"
""",
[...senseIds],
);
late final List<Map<String, Object?>> senseDialects;
final Future<List<Map<String, Object?>>> senseDialects_query =
connection.query(
JMdictTableNames.senseDialect,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseFields;
final Future<List<Map<String, Object?>>> senseFields_query = connection.query(
JMdictTableNames.senseField,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseGlossaries;
final Future<List<Map<String, Object?>>> senseGlossaries_query =
connection.query(
JMdictTableNames.senseGlossary,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseInfos;
final Future<List<Map<String, Object?>>> senseInfos_query = connection.query(
JMdictTableNames.senseInfo,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseLanguageSources;
final Future<List<Map<String, Object?>>> senseLanguageSources_query =
connection.query(
JMdictTableNames.senseLanguageSource,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseMiscs;
final Future<List<Map<String, Object?>>> senseMiscs_query = connection.query(
JMdictTableNames.senseMisc,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> sensePOSs;
final Future<List<Map<String, Object?>>> sensePOSs_query = connection.query(
JMdictTableNames.sensePOS,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseRestrictedToKanjis;
final Future<List<Map<String, Object?>>> senseRestrictedToKanjis_query =
connection.query(
JMdictTableNames.senseRestrictedToKanji,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseRestrictedToReadings;
final Future<List<Map<String, Object?>>> senseRestrictedToReadings_query =
connection.query(
JMdictTableNames.senseRestrictedToReading,
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
late final List<Map<String, Object?>> senseSeeAlsos;
final Future<List<Map<String, Object?>>> senseSeeAlsos_query =
connection.rawQuery(
"""
SELECT
"${JMdictTableNames.senseSeeAlso}"."senseId",
"${JMdictTableNames.senseSeeAlso}"."ambiguous",
"${JMdictTableNames.senseSeeAlso}"."xrefEntryId",
"JMdict_BaseAndFurigana"."base",
"JMdict_BaseAndFurigana"."furigana"
FROM "${JMdictTableNames.senseSeeAlso}"
JOIN "JMdict_BaseAndFurigana"
ON "${JMdictTableNames.senseSeeAlso}"."xrefEntryId" = "JMdict_BaseAndFurigana"."entryId"
WHERE
"senseId" IN (${List.filled(senseIds.length, '?').join(',')})
AND "JMdict_BaseAndFurigana"."isFirst"
ORDER BY
"${JMdictTableNames.senseSeeAlso}"."senseId",
"${JMdictTableNames.senseSeeAlso}"."xrefEntryId"
""",
[...senseIds],
);
late final List<Map<String, Object?>> exampleSentences;
final Future<List<Map<String, Object?>>> exampleSentences_query =
connection.query(
'JMdict_ExampleSentence',
where: 'senseId IN (${List.filled(senseIds.length, '?').join(',')})',
whereArgs: senseIds,
);
// Reading queries
final readingIds = readingElements
.map((element) => (
element['entryId'] as int,
escapeStringValue(element['reading'] as String)
))
.map((element) => element['elementId'] as int)
.toList();
final kanjiIds = kanjiElements
.map((element) => element['elementId'] as int)
.toList();
late final List<Map<String, Object?>> readingElementInfos;
final Future<List<Map<String, Object?>>> readingElementInfos_query =
connection.query(
JMdictTableNames.readingInfo,
where: '(entryId, reading) IN (${readingIds.join(',')})',
);
late final List<Map<String, Object?>> readingElementRestrictions;
final Future<List<Map<String, Object?>>> readingElementRestrictions_query =
connection.query(
JMdictTableNames.readingRestriction,
where: '(entryId, reading) IN (${readingIds.join(',')})',
);
// Kanji queries
final kanjiIds = kanjiElements
.map((element) => (
element['entryId'] as int,
escapeStringValue(element['reading'] as String)
))
.toList();
late final List<Map<String, Object?>> kanjiElementInfos;
final Future<List<Map<String, Object?>>> kanjiElementInfos_query =
connection.query(
JMdictTableNames.kanjiInfo,
where: '(entryId, reading) IN (${kanjiIds.join(',')})',
);
// Xref data queries
await Future.wait([
_senseantonymsQuery(
connection,
senseIds,
).then((value) => senseAntonyms = value),
_senseseealsosQuery(
connection,
senseIds,
).then((value) => senseSeeAlsos = value),
]);
LinearWordQueryData? senseAntonymData;
LinearWordQueryData? senseSeeAlsoData;
await Future.wait([
senseAntonyms_query.then((value) => senseAntonyms = value),
senseDialects_query.then((value) => senseDialects = value),
senseFields_query.then((value) => senseFields = value),
senseGlossaries_query.then((value) => senseGlossaries = value),
senseInfos_query.then((value) => senseInfos = value),
senseLanguageSources_query.then((value) => senseLanguageSources = value),
senseMiscs_query.then((value) => senseMiscs = value),
sensePOSs_query.then((value) => sensePOSs = value),
senseRestrictedToKanjis_query
.then((value) => senseRestrictedToKanjis = value),
senseRestrictedToReadings_query
.then((value) => senseRestrictedToReadings = value),
senseSeeAlsos_query.then((value) => senseSeeAlsos = value),
exampleSentences_query.then((value) => exampleSentences = value),
readingElementInfos_query.then((value) => readingElementInfos = value),
readingElementRestrictions_query
.then((value) => readingElementRestrictions = value),
kanjiElementInfos_query.then((value) => kanjiElementInfos = value),
_sensedialectsQuery(
connection,
senseIds,
).then((value) => senseDialects = value),
_sensefieldsQuery(
connection,
senseIds,
).then((value) => senseFields = value),
_senseglossariesQuery(
connection,
senseIds,
).then((value) => senseGlossaries = value),
_senseinfosQuery(connection, senseIds).then((value) => senseInfos = value),
_senselanguagesourcesQuery(
connection,
senseIds,
).then((value) => senseLanguageSources = value),
_sensemiscsQuery(connection, senseIds).then((value) => senseMiscs = value),
_sensepossQuery(connection, senseIds).then((value) => sensePOSs = value),
_senserestrictedtokanjisQuery(
connection,
senseIds,
).then((value) => senseRestrictedToKanjis = value),
_senserestrictedtoreadingsQuery(
connection,
senseIds,
).then((value) => senseRestrictedToReadings = value),
_examplesentencesQuery(
connection,
senseIds,
).then((value) => exampleSentences = value),
_readingelementinfosQuery(
connection,
readingIds,
).then((value) => readingElementInfos = value),
_readingelementrestrictionsQuery(
connection,
readingIds,
).then((value) => readingElementRestrictions = value),
_kanjielementinfosQuery(
connection,
kanjiIds,
).then((value) => kanjiElementInfos = value),
if (fetchXrefData)
_senseantonymdataQuery(
connection,
senseAntonyms.map((antonym) => antonym['xrefEntryId'] as int).toList(),
).then((value) => senseAntonymData = value),
if (fetchXrefData)
_senseseealsodataQuery(
connection,
senseSeeAlsos.map((seeAlso) => seeAlso['xrefEntryId'] as int).toList(),
).then((value) => senseSeeAlsoData = value),
]);
return LinearWordQueryData(
@@ -308,5 +433,7 @@ Future<LinearWordQueryData> fetchLinearWordQueryData(
readingElementInfos: readingElementInfos,
readingElementRestrictions: readingElementRestrictions,
kanjiElementInfos: kanjiElementInfos,
senseAntonymData: senseAntonymData,
senseSeeAlsoData: senseSeeAlsoData,
);
}


@@ -1,5 +1,5 @@
import 'package:jadb/table_names/jmdict.dart';
import 'package:jadb/search/word_search/word_search.dart';
import 'package:jadb/table_names/jmdict.dart';
import 'package:jadb/util/text_filtering.dart';
import 'package:sqflite_common/sqlite_api.dart';
@@ -15,15 +15,15 @@ SearchMode _determineSearchMode(String word) {
final bool containsAscii = RegExp(r'[A-Za-z]').hasMatch(word);
if (containsKanji && containsAscii) {
return SearchMode.MixedKanji;
return SearchMode.mixedKanji;
} else if (containsKanji) {
return SearchMode.Kanji;
return SearchMode.kanji;
} else if (containsAscii) {
return SearchMode.English;
return SearchMode.english;
} else if (word.contains(hiraganaRegex) || word.contains(katakanaRegex)) {
return SearchMode.Kana;
return SearchMode.kana;
} else {
return SearchMode.MixedKana;
return SearchMode.mixedKana;
}
}
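`_determineSearchMode` above picks a mode from the character classes present in the query. A rough Python sketch of the same branching — the Unicode ranges here are my assumption; the Dart code uses its own `hiraganaRegex`/`katakanaRegex` and kanji detection:

```python
import re

# Assumed Unicode ranges; the Dart code defines its own regexes.
KANJI = re.compile(r'[\u4e00-\u9fff]')
ASCII = re.compile(r'[A-Za-z]')
KANA = re.compile(r'[\u3040-\u309f\u30a0-\u30ff]')  # hiragana + katakana

def determine_search_mode(word: str) -> str:
    has_kanji = bool(KANJI.search(word))
    has_ascii = bool(ASCII.search(word))
    if has_kanji and has_ascii:
        return 'mixedKanji'
    if has_kanji:
        return 'kanji'
    if has_ascii:
        return 'english'
    if KANA.search(word):
        return 'kana'
    return 'mixedKana'

print(determine_search_mode('食べる'))  # kanji
print(determine_search_mode('たべる'))  # kana
print(determine_search_mode('taberu'))  # english
```

Note the precedence: any kanji wins over kana, and any ASCII without kanji routes to the English search.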
@@ -37,91 +37,105 @@ String _filterFTSSensitiveCharacters(String word) {
.replaceAll('(', '')
.replaceAll(')', '')
.replaceAll('^', '')
.replaceAll('\"', '');
.replaceAll('"', '');
}
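`_filterFTSSensitiveCharacters` drops the characters FTS5 would otherwise parse as query operators before the word reaches a `MATCH` clause. Roughly, in Python:

```python
def filter_fts_sensitive(word: str) -> str:
    # Strip characters FTS5 treats as query syntax
    # (NOT/phrase/prefix/column-filter/grouping operators).
    for ch in '-+*:()^"':
        word = word.replace(ch, '')
    return word

print(filter_fts_sensitive('neko-"cat"*'))  # nekocat
```

Stripping rather than escaping keeps the query simple at the cost of ignoring those characters entirely, which is fine for a dictionary lookup.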
(String, List<Object?>) _kanjiReadingTemplate(
String tableName,
String word, {
int pageSize = 10,
int? pageSize,
int? offset,
bool countOnly = false,
}) =>
(
'''
}) {
assert(
tableName == JMdictTableNames.kanjiElement ||
tableName == JMdictTableNames.readingElement,
);
assert(!countOnly || pageSize == null);
assert(!countOnly || offset == null);
assert(pageSize == null || pageSize > 0);
assert(offset == null || offset >= 0);
assert(
offset == null || pageSize != null,
'Offset should only be used with pageSize set',
);
return (
'''
WITH
fts_results AS (
SELECT DISTINCT
"${tableName}FTS"."entryId",
"$tableName"."entryId",
100
+ (("${tableName}FTS"."reading" = ?) * 50)
+ (("${tableName}FTS"."reading" = ?) * 10000)
+ "JMdict_EntryScore"."score"
AS "score"
FROM "${tableName}FTS"
JOIN "${tableName}" USING ("entryId", "reading")
JOIN "JMdict_EntryScore" USING ("entryId", "reading")
JOIN "$tableName" USING ("elementId")
JOIN "JMdict_EntryScore" USING ("elementId")
WHERE "${tableName}FTS"."reading" MATCH ? || '*'
AND "JMdict_EntryScore"."type" = '${tableName == JMdictTableNames.kanjiElement ? 'kanji' : 'reading'}'
ORDER BY
"JMdict_EntryScore"."score" DESC
${!countOnly ? 'LIMIT ?' : ''}
AND "JMdict_EntryScore"."type" = '${tableName == JMdictTableNames.kanjiElement ? 'k' : 'r'}'
),
non_fts_results AS (
SELECT DISTINCT
"${tableName}"."entryId",
"$tableName"."entryId",
50
+ "JMdict_EntryScore"."score"
AS "score"
FROM "${tableName}"
JOIN "JMdict_EntryScore" USING ("entryId", "reading")
FROM "$tableName"
JOIN "JMdict_EntryScore" USING ("elementId")
WHERE "reading" LIKE '%' || ? || '%'
AND "entryId" NOT IN (SELECT "entryId" FROM "fts_results")
AND "JMdict_EntryScore"."type" = '${tableName == JMdictTableNames.kanjiElement ? 'kanji' : 'reading'}'
ORDER BY
"JMdict_EntryScore"."score" DESC,
"${tableName}"."entryId" ASC
${!countOnly ? 'LIMIT ?' : ''}
AND "$tableName"."entryId" NOT IN (SELECT "entryId" FROM "fts_results")
AND "JMdict_EntryScore"."type" = '${tableName == JMdictTableNames.kanjiElement ? 'k' : 'r'}'
)
${countOnly ? 'SELECT COUNT("entryId") AS count' : 'SELECT "entryId", "score"'}
SELECT ${countOnly ? 'COUNT(DISTINCT "entryId") AS count' : '"entryId", MAX("score") AS "score"'}
FROM (
SELECT * FROM fts_results
UNION ALL
SELECT * FROM non_fts_results
SELECT * FROM "fts_results"
UNION
SELECT * FROM "non_fts_results"
)
${!countOnly ? 'GROUP BY "entryId"' : ''}
${!countOnly ? 'ORDER BY "score" DESC, "entryId" ASC' : ''}
${pageSize != null ? 'LIMIT ?' : ''}
${offset != null ? 'OFFSET ?' : ''}
'''
.trim(),
[
_filterFTSSensitiveCharacters(word),
_filterFTSSensitiveCharacters(word),
if (!countOnly) pageSize,
_filterFTSSensitiveCharacters(word),
if (!countOnly) pageSize,
]
);
.trim(),
[
_filterFTSSensitiveCharacters(word),
_filterFTSSensitiveCharacters(word),
_filterFTSSensitiveCharacters(word),
?pageSize,
?offset,
],
);
}
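The rewritten template merges FTS hits with LIKE-fallback hits, then deduplicates by entry while keeping the best score (`UNION` plus `MAX("score") ... GROUP BY "entryId"`). A toy sqlite3 sketch of just that merge step — the table contents are invented, and the real query builds these as CTEs over an FTS5 table:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE fts_results(entryId INTEGER, score INTEGER);
    CREATE TABLE non_fts_results(entryId INTEGER, score INTEGER);
    INSERT INTO fts_results VALUES (1, 150), (2, 100);
    INSERT INTO non_fts_results VALUES (2, 60), (3, 50);  -- entry 2 in both
''')
rows = conn.execute('''
    SELECT entryId, MAX(score) AS score
    FROM (SELECT * FROM fts_results UNION SELECT * FROM non_fts_results)
    GROUP BY entryId
    ORDER BY score DESC, entryId ASC
    LIMIT ? OFFSET ?
''', (10, 0)).fetchall()
print(rows)  # [(1, 150), (2, 100), (3, 50)]
```

Entry 2 appears in both sources but surfaces once with its higher score, which is why the diff also changed `COUNT` to `COUNT(DISTINCT "entryId")` for the count-only variant.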
Future<List<ScoredEntryId>> _queryKanji(
DatabaseExecutor connection,
String word,
int pageSize,
int? pageSize,
int? offset,
) {
final (query, args) = _kanjiReadingTemplate(
JMdictTableNames.kanjiElement,
word,
pageSize: pageSize,
offset: offset,
);
return connection.rawQuery(query, args).then((result) => result
.map((row) => ScoredEntryId(
row['entryId'] as int,
row['score'] as int,
))
.toList());
return connection
.rawQuery(query, args)
.then(
(result) => result
.map(
(row) =>
ScoredEntryId(row['entryId'] as int, row['score'] as int),
)
.toList(),
);
}
Future<int> _queryKanjiCount(
DatabaseExecutor connection,
String word,
) {
Future<int> _queryKanjiCount(DatabaseExecutor connection, String word) {
final (query, args) = _kanjiReadingTemplate(
JMdictTableNames.kanjiElement,
word,
@@ -129,32 +143,34 @@ Future<int> _queryKanjiCount(
);
return connection
.rawQuery(query, args)
.then((result) => result.first['count'] as int);
.then((result) => result.firstOrNull?['count'] as int? ?? 0);
}
Future<List<ScoredEntryId>> _queryKana(
DatabaseExecutor connection,
String word,
int pageSize,
int? pageSize,
int? offset,
) {
final (query, args) = _kanjiReadingTemplate(
JMdictTableNames.readingElement,
word,
pageSize: pageSize,
offset: offset,
);
return connection.rawQuery(query, args).then((result) => result
.map((row) => ScoredEntryId(
row['entryId'] as int,
row['score'] as int,
))
.toList());
return connection
.rawQuery(query, args)
.then(
(result) => result
.map(
(row) =>
ScoredEntryId(row['entryId'] as int, row['score'] as int),
)
.toList(),
);
}
Future<int> _queryKanaCount(
DatabaseExecutor connection,
String word,
) {
Future<int> _queryKanaCount(DatabaseExecutor connection, String word) {
final (query, args) = _kanjiReadingTemplate(
JMdictTableNames.readingElement,
word,
@@ -162,71 +178,62 @@ Future<int> _queryKanaCount(
);
return connection
.rawQuery(query, args)
.then((result) => result.first['count'] as int);
.then((result) => result.firstOrNull?['count'] as int? ?? 0);
}
Future<List<ScoredEntryId>> _queryEnglish(
DatabaseExecutor connection,
String word,
int pageSize,
int? pageSize,
int? offset,
) async {
assert(pageSize == null || pageSize > 0);
assert(offset == null || offset >= 0);
assert(
offset == null || pageSize != null,
'Offset should only be used with pageSize set',
);
final result = await connection.rawQuery(
'''
SELECT
"${JMdictTableNames.sense}"."entryId",
MAX("JMdict_EntryScore"."score")
+ (("${JMdictTableNames.senseGlossary}"."phrase" = ? AND "${JMdictTableNames.sense}"."orderNum" = 1) * 50)
+ (("${JMdictTableNames.senseGlossary}"."phrase" = ? AND "${JMdictTableNames.sense}"."orderNum" = 2) * 30)
+ (("${JMdictTableNames.senseGlossary}"."phrase" = ?) * 20)
+ (("${JMdictTableNames.senseGlossary}"."phrase" = ?1 AND "${JMdictTableNames.sense}"."orderNum" = 1) * 50)
+ (("${JMdictTableNames.senseGlossary}"."phrase" = ?1 AND "${JMdictTableNames.sense}"."orderNum" = 2) * 30)
+ (("${JMdictTableNames.senseGlossary}"."phrase" = ?1) * 20)
as "score"
FROM "${JMdictTableNames.senseGlossary}"
JOIN "${JMdictTableNames.sense}" USING ("senseId")
JOIN "JMdict_EntryScore" USING ("entryId")
WHERE "${JMdictTableNames.senseGlossary}"."phrase" LIKE ?
WHERE "${JMdictTableNames.senseGlossary}"."phrase" LIKE ?2
GROUP BY "JMdict_EntryScore"."entryId"
ORDER BY
"score" DESC,
"${JMdictTableNames.sense}"."entryId" ASC
LIMIT ?
OFFSET ?
${pageSize != null ? 'LIMIT ?3' : ''}
${offset != null ? 'OFFSET ?4' : ''}
'''
.trim(),
[
word,
word,
word,
'%${word.replaceAll('%', '')}%',
pageSize,
offset,
],
[
  word,
  '%${word.replaceAll('%', '')}%',
  if (pageSize != null) pageSize,
  if (offset != null) offset,
],
);
return result
.map((row) => ScoredEntryId(
row['entryId'] as int,
row['score'] as int,
))
.map((row) => ScoredEntryId(row['entryId'] as int, row['score'] as int))
.toList();
}
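The English query switches to numbered placeholders (`?1`, `?2`, …) so the same bound word can feed several score bonuses without being passed three times. A minimal sqlite3 illustration of that reuse — the word `'dog'` and the bonus values here are illustrative, not from the schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# ?1 is bound once but referenced twice; ?2 is the LIKE pattern.
row = conn.execute(
    "SELECT (?1 = 'dog') * 50 + (?1 = 'dog') * 20 AS bonus, ?2 AS pattern",
    ('dog', '%dog%'),
).fetchone()
print(row)  # (70, '%dog%')
```

SQLite boolean comparisons evaluate to 0/1, so multiplying them by weights is a branch-free way to stack exact-match bonuses on top of the base entry score.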
Future<int> _queryEnglishCount(
DatabaseExecutor connection,
String word,
) async {
Future<int> _queryEnglishCount(DatabaseExecutor connection, String word) async {
final result = await connection.rawQuery(
'''
SELECT
COUNT(DISTINCT "${JMdictTableNames.sense}"."entryId") AS "count"
FROM "${JMdictTableNames.senseGlossary}"
JOIN "${JMdictTableNames.sense}" USING ("senseId")
WHERE "${JMdictTableNames.senseGlossary}"."phrase" LIKE ?
'''
SELECT
COUNT(DISTINCT "${JMdictTableNames.sense}"."entryId") AS "count"
FROM "${JMdictTableNames.senseGlossary}"
JOIN "${JMdictTableNames.sense}" USING ("senseId")
WHERE "${JMdictTableNames.senseGlossary}"."phrase" LIKE ?
'''
.trim(),
[
'%$word%',
],
['%$word%'],
);
return result.first['count'] as int;
@@ -236,55 +243,34 @@ Future<List<ScoredEntryId>> fetchEntryIds(
DatabaseExecutor connection,
String word,
SearchMode searchMode,
int pageSize,
int? pageSize,
int? offset,
) async {
if (searchMode == SearchMode.Auto) {
if (searchMode == SearchMode.auto) {
searchMode = _determineSearchMode(word);
}
assert(
word.isNotEmpty,
'Word should not be empty when fetching entry IDs',
);
assert(word.isNotEmpty, 'Word should not be empty when fetching entry IDs');
late final List<ScoredEntryId> entryIds;
switch (searchMode) {
case SearchMode.Kanji:
entryIds = await _queryKanji(
connection,
word,
pageSize,
offset,
);
case SearchMode.kanji:
entryIds = await _queryKanji(connection, word, pageSize, offset);
break;
case SearchMode.Kana:
entryIds = await _queryKana(
connection,
word,
pageSize,
offset,
);
case SearchMode.kana:
entryIds = await _queryKana(connection, word, pageSize, offset);
break;
case SearchMode.English:
entryIds = await _queryEnglish(
connection,
word,
pageSize,
offset,
);
case SearchMode.english:
entryIds = await _queryEnglish(connection, word, pageSize, offset);
break;
case SearchMode.MixedKana:
case SearchMode.MixedKanji:
case SearchMode.mixedKana:
case SearchMode.mixedKanji:
default:
throw UnimplementedError(
'Search mode $searchMode is not implemented',
);
throw UnimplementedError('Search mode $searchMode is not implemented');
}
;
return entryIds;
}
@@ -294,45 +280,31 @@ Future<int?> fetchEntryIdCount(
String word,
SearchMode searchMode,
) async {
if (searchMode == SearchMode.Auto) {
if (searchMode == SearchMode.auto) {
searchMode = _determineSearchMode(word);
}
assert(
word.isNotEmpty,
'Word should not be empty when fetching entry IDs',
);
assert(word.isNotEmpty, 'Word should not be empty when fetching entry IDs');
late final int? entryIdCount;
switch (searchMode) {
case SearchMode.Kanji:
entryIdCount = await _queryKanjiCount(
connection,
word,
);
case SearchMode.kanji:
entryIdCount = await _queryKanjiCount(connection, word);
break;
case SearchMode.Kana:
entryIdCount = await _queryKanaCount(
connection,
word,
);
case SearchMode.kana:
entryIdCount = await _queryKanaCount(connection, word);
break;
case SearchMode.English:
entryIdCount = await _queryEnglishCount(
connection,
word,
);
case SearchMode.english:
entryIdCount = await _queryEnglishCount(connection, word);
break;
case SearchMode.MixedKana:
case SearchMode.MixedKanji:
case SearchMode.mixedKana:
case SearchMode.mixedKanji:
default:
throw UnimplementedError(
'Search mode $searchMode is not implemented',
);
throw UnimplementedError('Search mode $searchMode is not implemented');
}
return entryIdCount;


@@ -12,50 +12,37 @@ import 'package:jadb/models/word_search/word_search_sense.dart';
import 'package:jadb/models/word_search/word_search_sense_language_source.dart';
import 'package:jadb/models/word_search/word_search_sources.dart';
import 'package:jadb/models/word_search/word_search_xref_entry.dart';
import 'package:jadb/search/word_search/data_query.dart';
import 'package:jadb/search/word_search/entry_id_query.dart';
List<WordSearchResult> regroupWordSearchResults({
required List<ScoredEntryId> entryIds,
required List<Map<String, Object?>> readingElements,
required List<Map<String, Object?>> kanjiElements,
required List<Map<String, Object?>> jlptTags,
required List<Map<String, Object?>> commonEntries,
required List<Map<String, Object?>> senses,
required List<Map<String, Object?>> senseAntonyms,
required List<Map<String, Object?>> senseDialects,
required List<Map<String, Object?>> senseFields,
required List<Map<String, Object?>> senseGlossaries,
required List<Map<String, Object?>> senseInfos,
required List<Map<String, Object?>> senseLanguageSources,
required List<Map<String, Object?>> senseMiscs,
required List<Map<String, Object?>> sensePOSs,
required List<Map<String, Object?>> senseRestrictedToKanjis,
required List<Map<String, Object?>> senseRestrictedToReadings,
required List<Map<String, Object?>> senseSeeAlsos,
required List<Map<String, Object?>> exampleSentences,
required List<Map<String, Object?>> readingElementInfos,
required List<Map<String, Object?>> readingElementRestrictions,
required List<Map<String, Object?>> kanjiElementInfos,
required LinearWordQueryData linearWordQueryData,
}) {
final List<WordSearchResult> results = [];
final commonEntryIds =
commonEntries.map((entry) => entry['entryId'] as int).toSet();
final commonEntryIds = linearWordQueryData.commonEntries
.map((entry) => entry['entryId'] as int)
.toSet();
for (final scoredEntryId in entryIds) {
final List<Map<String, Object?>> entryReadingElements = readingElements
final List<Map<String, Object?>> entryReadingElements = linearWordQueryData
.readingElements
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final List<Map<String, Object?>> entryKanjiElements = kanjiElements
final List<Map<String, Object?>> entryKanjiElements = linearWordQueryData
.kanjiElements
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final List<Map<String, Object?>> entryJlptTags = jlptTags
final List<Map<String, Object?>> entryJlptTags = linearWordQueryData
.jlptTags
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final jlptLevel = entryJlptTags
final jlptLevel =
entryJlptTags
.map((e) => JlptLevel.fromString(e['jlptLevel'] as String?))
.sorted((a, b) => b.compareTo(a))
.firstOrNull ??
@@ -63,33 +50,36 @@ List<WordSearchResult> regroupWordSearchResults({
final isCommon = commonEntryIds.contains(scoredEntryId.entryId);
final List<Map<String, Object?>> entrySenses = senses
final List<Map<String, Object?>> entrySenses = linearWordQueryData.senses
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final GroupedWordResult entryReadingElementsGrouped = _regroup_words(
final GroupedWordResult entryReadingElementsGrouped = _regroupWords(
entryId: scoredEntryId.entryId,
readingElements: entryReadingElements,
kanjiElements: entryKanjiElements,
readingElementInfos: readingElementInfos,
readingElementRestrictions: readingElementRestrictions,
kanjiElementInfos: kanjiElementInfos,
readingElementInfos: linearWordQueryData.readingElementInfos,
readingElementRestrictions:
linearWordQueryData.readingElementRestrictions,
kanjiElementInfos: linearWordQueryData.kanjiElementInfos,
);
final List<WordSearchSense> entrySensesGrouped = _regroup_senses(
final List<WordSearchSense> entrySensesGrouped = _regroupSenses(
senses: entrySenses,
senseAntonyms: senseAntonyms,
senseDialects: senseDialects,
senseFields: senseFields,
senseGlossaries: senseGlossaries,
senseInfos: senseInfos,
senseLanguageSources: senseLanguageSources,
senseMiscs: senseMiscs,
sensePOSs: sensePOSs,
senseRestrictedToKanjis: senseRestrictedToKanjis,
senseRestrictedToReadings: senseRestrictedToReadings,
senseSeeAlsos: senseSeeAlsos,
exampleSentences: exampleSentences,
senseAntonyms: linearWordQueryData.senseAntonyms,
senseDialects: linearWordQueryData.senseDialects,
senseFields: linearWordQueryData.senseFields,
senseGlossaries: linearWordQueryData.senseGlossaries,
senseInfos: linearWordQueryData.senseInfos,
senseLanguageSources: linearWordQueryData.senseLanguageSources,
senseMiscs: linearWordQueryData.senseMiscs,
sensePOSs: linearWordQueryData.sensePOSs,
senseRestrictedToKanjis: linearWordQueryData.senseRestrictedToKanjis,
senseRestrictedToReadings: linearWordQueryData.senseRestrictedToReadings,
senseSeeAlsos: linearWordQueryData.senseSeeAlsos,
exampleSentences: linearWordQueryData.exampleSentences,
senseSeeAlsosXrefData: linearWordQueryData.senseSeeAlsoData,
senseAntonymsXrefData: linearWordQueryData.senseAntonymData,
);
results.add(
@@ -102,10 +92,7 @@ List<WordSearchResult> regroupWordSearchResults({
readingInfo: entryReadingElementsGrouped.readingInfos,
senses: entrySensesGrouped,
jlptLevel: jlptLevel,
sources: const WordSearchSources(
jmdict: true,
jmnedict: false,
),
sources: const WordSearchSources(jmdict: true, jmnedict: false),
),
);
}
@@ -125,7 +112,7 @@ class GroupedWordResult {
});
}
GroupedWordResult _regroup_words({
GroupedWordResult _regroupWords({
required int entryId,
required List<Map<String, Object?>> kanjiElements,
required List<Map<String, Object?>> kanjiElementInfos,
@@ -135,8 +122,9 @@ GroupedWordResult _regroup_words({
}) {
final List<WordSearchRuby> rubys = [];
final kanjiElements_ =
kanjiElements.where((element) => element['entryId'] == entryId).toList();
final kanjiElements_ = kanjiElements
.where((element) => element['entryId'] == entryId)
.toList();
final readingElements_ = readingElements
.where((element) => element['entryId'] == entryId)
@@ -148,9 +136,7 @@ GroupedWordResult _regroup_words({
for (final readingElement in readingElements_) {
if (readingElement['doesNotMatchKanji'] == 1 || kanjiElements_.isEmpty) {
final ruby = WordSearchRuby(
base: readingElement['reading'] as String,
);
final ruby = WordSearchRuby(base: readingElement['reading'] as String);
rubys.add(ruby);
continue;
@@ -169,34 +155,47 @@ GroupedWordResult _regroup_words({
continue;
}
final ruby = WordSearchRuby(
base: kanji,
furigana: reading,
);
final ruby = WordSearchRuby(base: kanji, furigana: reading);
rubys.add(ruby);
}
}
assert(
rubys.isNotEmpty,
'No readings found for entryId: $entryId',
);
assert(rubys.isNotEmpty, 'No readings found for entryId: $entryId');
final Map<int, String> readingElementIdsToReading = {
for (final element in readingElements_)
element['elementId'] as int: element['reading'] as String,
};
final Map<int, String> kanjiElementIdsToReading = {
for (final element in kanjiElements_)
element['elementId'] as int: element['reading'] as String,
};
final readingElementInfos_ = readingElementInfos
.where((element) => element['entryId'] == entryId)
.toList();
final kanjiElementInfos_ = kanjiElementInfos
.where((element) => element['entryId'] == entryId)
.toList();
return GroupedWordResult(
rubys: rubys,
readingInfos: {
for (final rei in readingElementInfos)
rei['reading'] as String:
for (final rei in readingElementInfos_)
readingElementIdsToReading[rei['elementId'] as int]!:
JMdictReadingInfo.fromId(rei['info'] as String),
},
kanjiInfos: {
for (final kei in kanjiElementInfos)
kei['reading'] as String: JMdictKanjiInfo.fromId(kei['info'] as String),
for (final kei in kanjiElementInfos_)
kanjiElementIdsToReading[kei['elementId'] as int]!:
JMdictKanjiInfo.fromId(kei['info'] as String),
},
);
}
List<WordSearchSense> _regroup_senses({
List<WordSearchSense> _regroupSenses({
required List<Map<String, Object?>> senses,
required List<Map<String, Object?>> senseAntonyms,
required List<Map<String, Object?>> senseDialects,
@@ -210,29 +209,41 @@ List<WordSearchSense> _regroup_senses({
required List<Map<String, Object?>> senseRestrictedToReadings,
required List<Map<String, Object?>> senseSeeAlsos,
required List<Map<String, Object?>> exampleSentences,
required LinearWordQueryData? senseSeeAlsosXrefData,
required LinearWordQueryData? senseAntonymsXrefData,
}) {
final groupedSenseAntonyms =
senseAntonyms.groupListsBy((element) => element['senseId'] as int);
final groupedSenseDialects =
senseDialects.groupListsBy((element) => element['senseId'] as int);
final groupedSenseFields =
senseFields.groupListsBy((element) => element['senseId'] as int);
final groupedSenseGlossaries =
senseGlossaries.groupListsBy((element) => element['senseId'] as int);
final groupedSenseInfos =
senseInfos.groupListsBy((element) => element['senseId'] as int);
final groupedSenseLanguageSources =
senseLanguageSources.groupListsBy((element) => element['senseId'] as int);
final groupedSenseMiscs =
senseMiscs.groupListsBy((element) => element['senseId'] as int);
final groupedSensePOSs =
sensePOSs.groupListsBy((element) => element['senseId'] as int);
final groupedSenseRestrictedToKanjis = senseRestrictedToKanjis
.groupListsBy((element) => element['senseId'] as int);
final groupedSenseAntonyms = senseAntonyms.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseDialects = senseDialects.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseFields = senseFields.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseGlossaries = senseGlossaries.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseInfos = senseInfos.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseLanguageSources = senseLanguageSources.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseMiscs = senseMiscs.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSensePOSs = sensePOSs.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseRestrictedToKanjis = senseRestrictedToKanjis.groupListsBy(
(element) => element['senseId'] as int,
);
final groupedSenseRestrictedToReadings = senseRestrictedToReadings
.groupListsBy((element) => element['senseId'] as int);
final groupedSenseSeeAlsos =
senseSeeAlsos.groupListsBy((element) => element['senseId'] as int);
final groupedSenseSeeAlsos = senseSeeAlsos.groupListsBy(
(element) => element['senseId'] as int,
);
final List<WordSearchSense> result = [];
for (final sense in senses) {
@@ -251,45 +262,82 @@ List<WordSearchSense> _regroup_senses({
groupedSenseRestrictedToReadings[senseId] ?? [];
final seeAlsos = groupedSenseSeeAlsos[senseId] ?? [];
final List<WordSearchResult> seeAlsosWordResults =
senseSeeAlsosXrefData != null
? regroupWordSearchResults(
entryIds: seeAlsos
.map((e) => ScoredEntryId(e['xrefEntryId'] as int, 0))
.toList(),
linearWordQueryData: senseSeeAlsosXrefData,
)
: [];
final List<WordSearchResult> antonymsWordResults =
senseAntonymsXrefData != null
? regroupWordSearchResults(
entryIds: antonyms
.map((e) => ScoredEntryId(e['xrefEntryId'] as int, 0))
.toList(),
linearWordQueryData: senseAntonymsXrefData,
)
: [];
final resultSense = WordSearchSense(
englishDefinitions: glossaries.map((e) => e['phrase'] as String).toList(),
partsOfSpeech:
pos.map((e) => JMdictPOS.fromId(e['pos'] as String)).toList(),
seeAlso: seeAlsos
.map((e) => WordSearchXrefEntry(
entryId: e['xrefEntryId'] as int,
baseWord: e['base'] as String,
furigana: e['furigana'] as String?,
ambiguous: e['ambiguous'] == 1,
))
partsOfSpeech: pos
.map((e) => JMdictPOS.fromId(e['pos'] as String))
.toList(),
antonyms: antonyms
.map((e) => WordSearchXrefEntry(
entryId: e['xrefEntryId'] as int,
baseWord: e['base'] as String,
furigana: e['furigana'] as String?,
ambiguous: e['ambiguous'] == 1,
))
seeAlso: seeAlsos.asMap().entries.map<WordSearchXrefEntry>((mapEntry) {
final i = mapEntry.key;
final e = mapEntry.value;
return WordSearchXrefEntry(
entryId: e['xrefEntryId'] as int,
baseWord: e['base'] as String,
furigana: e['furigana'] as String?,
ambiguous: e['ambiguous'] == 1,
xrefResult: seeAlsosWordResults.isNotEmpty
? seeAlsosWordResults[i]
: null,
);
}).toList(),
antonyms: antonyms.asMap().entries.map<WordSearchXrefEntry>((mapEntry) {
final i = mapEntry.key;
final e = mapEntry.value;
return WordSearchXrefEntry(
entryId: e['xrefEntryId'] as int,
baseWord: e['base'] as String,
furigana: e['furigana'] as String?,
ambiguous: e['ambiguous'] == 1,
xrefResult: antonymsWordResults.isNotEmpty
? antonymsWordResults[i]
: null,
);
}).toList(),
restrictedToReading: restrictedToReadings
.map((e) => e['reading'] as String)
.toList(),
restrictedToKanji: restrictedToKanjis
.map((e) => e['kanji'] as String)
.toList(),
fields: fields
.map((e) => JMdictField.fromId(e['field'] as String))
.toList(),
restrictedToReading:
restrictedToReadings.map((e) => e['reading'] as String).toList(),
restrictedToKanji:
restrictedToKanjis.map((e) => e['kanji'] as String).toList(),
fields:
fields.map((e) => JMdictField.fromId(e['field'] as String)).toList(),
dialects: dialects
.map((e) => JMdictDialect.fromId(e['dialect'] as String))
.toList(),
misc: miscs.map((e) => JMdictMisc.fromId(e['misc'] as String)).toList(),
info: infos.map((e) => e['info'] as String).toList(),
languageSource: languageSources
.map((e) => WordSearchSenseLanguageSource(
language: e['language'] as String,
phrase: e['phrase'] as String?,
fullyDescribesSense: e['fullyDescribesSense'] == 1,
constructedFromSmallerWords:
e['constructedFromSmallerWords'] == 1,
))
.map(
(e) => WordSearchSenseLanguageSource(
language: e['language'] as String,
phrase: e['phrase'] as String?,
fullyDescribesSense: e['fullyDescribesSense'] == 1,
constructedFromSmallerWords:
e['constructedFromSmallerWords'] == 1,
),
)
.toList(),
);
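`_regroupSenses` leans on `groupListsBy` from `package:collection` to bucket each child table's rows by `senseId` before assembling a `WordSearchSense` per sense. A Python equivalent of that grouping step, assuming plain dict rows:

```python
from collections import defaultdict

def group_lists_by(rows, key):
    # Python analogue of package:collection's groupListsBy.
    grouped = defaultdict(list)
    for row in rows:
        grouped[key(row)].append(row)
    return dict(grouped)

glossaries = [
    {'senseId': 1, 'phrase': 'dog'},
    {'senseId': 1, 'phrase': 'hound'},
    {'senseId': 2, 'phrase': 'cat'},
]
by_sense = group_lists_by(glossaries, lambda r: r['senseId'])
print(by_sense[1])  # both senseId-1 rows, in input order
```

Grouping once up front turns the per-sense lookups inside the loop into O(1) dictionary reads instead of repeated list scans.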


@@ -14,26 +14,38 @@ import 'package:jadb/table_names/jmdict.dart';
import 'package:sqflite_common/sqlite_api.dart';
enum SearchMode {
Auto,
English,
Kanji,
MixedKanji,
Kana,
MixedKana,
/// Try to autodetect what is being searched for
auto,
/// Search for English words
english,
/// Search for the kanji reading of a word
kanji,
/// Search for the kanji reading of a word, mixed in with kana/romaji
mixedKanji,
/// Search for the kana reading of a word
kana,
/// Search for the kana reading of a word, mixed in with romaji
mixedKana,
}
/// Searches for an input string, returning a list of results with their details. Returns null if the input string is empty.
Future<List<WordSearchResult>?> searchWordWithDbConnection(
DatabaseExecutor connection,
String word,
SearchMode searchMode,
int page,
int pageSize,
) async {
String word, {
SearchMode searchMode = SearchMode.auto,
int page = 0,
int? pageSize,
}) async {
if (word.isEmpty) {
return null;
}
final offset = page * pageSize;
final int? offset = pageSize != null ? page * pageSize : null;
final List<ScoredEntryId> entryIds = await fetchEntryIds(
connection,
word,
@@ -43,47 +55,34 @@ Future<List<WordSearchResult>?> searchWordWithDbConnection(
);
if (entryIds.isEmpty) {
// TODO: try conjugation search
return [];
}
final LinearWordQueryData linearWordQueryData =
await fetchLinearWordQueryData(
connection,
entryIds.map((e) => e.entryId).toList(),
);
connection,
entryIds.map((e) => e.entryId).toList(),
);
final result = regroupWordSearchResults(
entryIds: entryIds,
readingElements: linearWordQueryData.readingElements,
kanjiElements: linearWordQueryData.kanjiElements,
jlptTags: linearWordQueryData.jlptTags,
commonEntries: linearWordQueryData.commonEntries,
senses: linearWordQueryData.senses,
senseAntonyms: linearWordQueryData.senseAntonyms,
senseDialects: linearWordQueryData.senseDialects,
senseFields: linearWordQueryData.senseFields,
senseGlossaries: linearWordQueryData.senseGlossaries,
senseInfos: linearWordQueryData.senseInfos,
senseLanguageSources: linearWordQueryData.senseLanguageSources,
senseMiscs: linearWordQueryData.senseMiscs,
sensePOSs: linearWordQueryData.sensePOSs,
senseRestrictedToKanjis: linearWordQueryData.senseRestrictedToKanjis,
senseRestrictedToReadings: linearWordQueryData.senseRestrictedToReadings,
senseSeeAlsos: linearWordQueryData.senseSeeAlsos,
exampleSentences: linearWordQueryData.exampleSentences,
readingElementInfos: linearWordQueryData.readingElementInfos,
readingElementRestrictions: linearWordQueryData.readingElementRestrictions,
kanjiElementInfos: linearWordQueryData.kanjiElementInfos,
linearWordQueryData: linearWordQueryData,
);
for (final resultEntry in result) {
resultEntry.inferMatchSpans(word, searchMode: searchMode);
}
return result;
}
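`searchWordWithDbConnection` now treats `pageSize` as optional and only derives an offset when a page size is set, matching the template's assertion that offset requires pageSize. The same guard as a small Python helper (the function name is mine):

```python
def page_offset(page, page_size):
    # An offset only makes sense with a page size;
    # unpaginated queries pass None straight through to the SQL builder.
    return page * page_size if page_size is not None else None

print(page_offset(2, 10))   # 20
print(page_offset(2, None)) # None
```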
/// Searches for an input string, returning the amount of results that the search would yield without pagination.
Future<int?> searchWordCountWithDbConnection(
DatabaseExecutor connection,
String word,
SearchMode searchMode,
) async {
String word, {
SearchMode searchMode = SearchMode.auto,
}) async {
if (word.isEmpty) {
return null;
}
@@ -97,6 +96,7 @@ Future<int?> searchWordCountWithDbConnection(
return entryIdCount;
}
/// Fetches a single word by its entry ID, returning null if not found.
Future<WordSearchResult?> getWordByIdWithDbConnection(
DatabaseExecutor connection,
int id,
@@ -105,43 +105,23 @@ Future<WordSearchResult?> getWordByIdWithDbConnection(
return null;
}
final exists = await connection.rawQuery(
'SELECT EXISTS(SELECT 1 FROM "${JMdictTableNames.entry}" WHERE "entryId" = ?)',
[id],
).then((value) => value.isNotEmpty && value.first.values.first == 1);
final exists = await connection
.rawQuery(
'SELECT EXISTS(SELECT 1 FROM "${JMdictTableNames.entry}" WHERE "entryId" = ?)',
[id],
)
.then((value) => value.isNotEmpty && value.first.values.first == 1);
if (!exists) {
return null;
}
final LinearWordQueryData linearWordQueryData =
await fetchLinearWordQueryData(
connection,
[id],
);
await fetchLinearWordQueryData(connection, [id]);
final result = regroupWordSearchResults(
entryIds: [ScoredEntryId(id, 0)],
readingElements: linearWordQueryData.readingElements,
kanjiElements: linearWordQueryData.kanjiElements,
jlptTags: linearWordQueryData.jlptTags,
commonEntries: linearWordQueryData.commonEntries,
senses: linearWordQueryData.senses,
senseAntonyms: linearWordQueryData.senseAntonyms,
senseDialects: linearWordQueryData.senseDialects,
senseFields: linearWordQueryData.senseFields,
senseGlossaries: linearWordQueryData.senseGlossaries,
senseInfos: linearWordQueryData.senseInfos,
senseLanguageSources: linearWordQueryData.senseLanguageSources,
senseMiscs: linearWordQueryData.senseMiscs,
sensePOSs: linearWordQueryData.sensePOSs,
senseRestrictedToKanjis: linearWordQueryData.senseRestrictedToKanjis,
senseRestrictedToReadings: linearWordQueryData.senseRestrictedToReadings,
senseSeeAlsos: linearWordQueryData.senseSeeAlsos,
exampleSentences: linearWordQueryData.exampleSentences,
readingElementInfos: linearWordQueryData.readingElementInfos,
readingElementRestrictions: linearWordQueryData.readingElementRestrictions,
kanjiElementInfos: linearWordQueryData.kanjiElementInfos,
linearWordQueryData: linearWordQueryData,
);
assert(
@@ -151,3 +131,27 @@ Future<WordSearchResult?> getWordByIdWithDbConnection(
return result.firstOrNull;
}
/// Fetches multiple words by their entry IDs, returning a map from entry ID to result.
Future<Map<int, WordSearchResult>> getWordsByIdsWithDbConnection(
DatabaseExecutor connection,
Set<int> ids,
) async {
if (ids.isEmpty) {
return {};
}
final LinearWordQueryData linearWordQueryData =
await fetchLinearWordQueryData(connection, ids.toList());
final List<ScoredEntryId> entryIds = ids
.map((id) => ScoredEntryId(id, 0)) // Score is not used here
.toList();
final results = regroupWordSearchResults(
entryIds: entryIds,
linearWordQueryData: linearWordQueryData,
);
return {for (var r in results) r.entryId: r};
}


@@ -1,4 +1,5 @@
abstract class JMdictTableNames {
static const String version = 'JMdict_Version';
static const String entry = 'JMdict_Entry';
static const String kanjiElement = 'JMdict_KanjiElement';
static const String kanjiInfo = 'JMdict_KanjiElementInfo';
@@ -20,23 +21,24 @@ abstract class JMdictTableNames {
static const String senseSeeAlso = 'JMdict_SenseSeeAlso';
static Set<String> get allTables => {
entry,
kanjiElement,
kanjiInfo,
readingElement,
readingInfo,
readingRestriction,
sense,
senseAntonyms,
senseDialect,
senseField,
senseGlossary,
senseInfo,
senseMisc,
sensePOS,
senseLanguageSource,
senseRestrictedToKanji,
senseRestrictedToReading,
senseSeeAlso
};
version,
entry,
kanjiElement,
kanjiInfo,
readingElement,
readingInfo,
readingRestriction,
sense,
senseAntonyms,
senseDialect,
senseField,
senseGlossary,
senseInfo,
senseMisc,
sensePOS,
senseLanguageSource,
senseRestrictedToKanji,
senseRestrictedToReading,
senseSeeAlso,
};
}

View File

@@ -1,4 +1,5 @@
abstract class KANJIDICTableNames {
static const String version = 'KANJIDIC_Version';
static const String character = 'KANJIDIC_Character';
static const String radicalName = 'KANJIDIC_RadicalName';
static const String codepoint = 'KANJIDIC_Codepoint';
@@ -17,19 +18,20 @@ abstract class KANJIDICTableNames {
static const String nanori = 'KANJIDIC_Nanori';
static Set<String> get allTables => {
character,
radicalName,
codepoint,
radical,
strokeMiscount,
variant,
dictionaryReference,
dictionaryReferenceMoro,
queryCode,
reading,
kunyomi,
onyomi,
meaning,
nanori
};
version,
character,
radicalName,
codepoint,
radical,
strokeMiscount,
variant,
dictionaryReference,
dictionaryReferenceMoro,
queryCode,
reading,
kunyomi,
onyomi,
meaning,
nanori,
};
}

View File

@@ -0,0 +1,9 @@
abstract class KanjiVGTableNames {
static const String version = 'KanjiVG_Version';
static const String entry = 'KanjiVG_Entry';
static const String path = 'KanjiVG_Path';
static const String strokeNumber = 'KanjiVG_StrokeNumber';
static const String pathGroup = 'KanjiVG_PathGroup';
static Set<String> get allTables => {version, entry, path, strokeNumber, pathGroup};
}

View File

@@ -1,7 +1,6 @@
abstract class RADKFILETableNames {
static const String version = 'RADKFILE_Version';
static const String radkfile = 'RADKFILE';
static Set<String> get allTables => {
radkfile,
};
static Set<String> get allTables => {version, radkfile};
}

View File

@@ -1,5 +1,6 @@
abstract class TanosJLPTTableNames {
static const String version = 'JMdict_JLPT_Version';
static const String jlptTag = 'JMdict_JLPTTag';
static Set<String> get allTables => {jlptTag};
static Set<String> get allTables => {version, jlptTag};
}

View File

@@ -276,29 +276,22 @@ extension on DateTime {
/// See more info here:
/// - https://en.wikipedia.org/wiki/Nanboku-ch%C5%8D_period
/// - http://www.kumamotokokufu-h.ed.jp/kumamoto/bungaku/nengoui.html
String? japaneseEra({bool nanbokuchouPeriodUsesNorth = true}) {
String? japaneseEra() {
throw UnimplementedError('This function is not implemented yet.');
if (this.year < 645) {
if (year < 645) {
return null;
}
if (this.year < periodsNanbokuchouNorth.keys.first.$1) {
if (year < periodsNanbokuchouNorth.keys.first.$1) {
// TODO: find first where year <= this.year and jump one period back.
}
}
String get japaneseWeekdayPrefix => [
'月',
'火',
'水',
'木',
'金',
'土',
'日',
][weekday - 1];
String get japaneseWeekdayPrefix =>
['月', '火', '水', '木', '金', '土', '日'][weekday - 1];
/// Returns the date in Japanese format.
String japaneseDate({bool showWeekday = false}) =>
'$month月$day日' + (showWeekday ? '$japaneseWeekdayPrefix' : '');
'$month月$day日${showWeekday ? '$japaneseWeekdayPrefix' : ''}';
}

View File

@@ -1,3 +1,4 @@
import 'package:collection/collection.dart';
import 'package:jadb/util/lemmatizer/rules.dart';
enum WordClass {
@@ -10,18 +11,17 @@ enum WordClass {
adverb,
particle,
input,
// TODO: add toString and fromString so it can be parsed by the cli
}
enum LemmatizationRuleType {
prefix,
suffix,
}
enum LemmatizationRuleType { prefix, suffix }
class LemmatizationRule {
final String name;
final AllomorphPattern pattern;
final WordClass wordClass;
final List<WordClass>? validChildClasses;
final Set<WordClass>? validChildClasses;
final bool terminal;
const LemmatizationRule({
@@ -41,23 +41,44 @@ class LemmatizationRule {
required String pattern,
required String? replacement,
required WordClass wordClass,
validChildClasses,
terminal = false,
lookAheadBehind = const [''],
Set<WordClass>? validChildClasses,
bool terminal = false,
List<Pattern> lookAheadBehind = const [''],
LemmatizationRuleType type = LemmatizationRuleType.suffix,
}) : this(
name: name,
pattern: AllomorphPattern(
patterns: {
pattern: replacement != null ? [replacement] : null
},
type: type,
lookAheadBehind: lookAheadBehind,
),
validChildClasses: validChildClasses,
terminal: terminal,
wordClass: wordClass,
);
name: name,
pattern: AllomorphPattern(
patterns: {
pattern: replacement != null ? [replacement] : null,
},
type: type,
lookAheadBehind: lookAheadBehind,
),
validChildClasses: validChildClasses,
terminal: terminal,
wordClass: wordClass,
);
@override
int get hashCode => Object.hash(
name,
pattern,
wordClass,
validChildClasses,
terminal,
SetEquality().hash(validChildClasses),
);
@override
bool operator ==(Object other) {
if (identical(this, other)) return true;
return other is LemmatizationRule &&
other.name == name &&
other.pattern == pattern &&
other.wordClass == wordClass &&
other.terminal == terminal &&
SetEquality().equals(validChildClasses, other.validChildClasses);
}
}
/// Represents a set of patterns for matching allomorphs in a word.
@@ -74,6 +95,7 @@ class AllomorphPattern {
this.lookAheadBehind = const [''],
});
/// Convert the [patterns] into regexes
List<(String, Pattern)> get allPatternCombinations {
final combinations = <(String, Pattern)>[];
for (final l in lookAheadBehind) {
@@ -97,6 +119,7 @@ class AllomorphPattern {
return combinations;
}
/// Check whether an input string matches any of the [patterns]
bool matches(String word) {
for (final (_, p) in allPatternCombinations) {
if (p is String) {
@@ -114,6 +137,9 @@ class AllomorphPattern {
return false;
}
/// Apply the replacement for this pattern.
///
/// If none of the [patterns] apply, this function returns `null`.
List<String>? apply(String word) {
for (final (affix, p) in allPatternCombinations) {
switch ((type, p is RegExp)) {
@@ -132,8 +158,8 @@ class AllomorphPattern {
if (word.startsWith(p as String)) {
return patterns[affix] != null
? patterns[affix]!
.map((s) => s + word.substring(affix.length))
.toList()
.map((s) => s + word.substring(affix.length))
.toList()
: [word.substring(affix.length)];
}
break;
@@ -160,6 +186,22 @@ class AllomorphPattern {
}
return null;
}
@override
int get hashCode => Object.hash(
type,
ListEquality().hash(lookAheadBehind),
MapEquality().hash(patterns),
);
@override
bool operator ==(Object other) {
if (identical(this, other)) return true;
return other is AllomorphPattern &&
other.type == type &&
ListEquality().equals(other.lookAheadBehind, lookAheadBehind) &&
MapEquality().equals(other.patterns, patterns);
}
}
class Lemmatized {
@@ -186,7 +228,7 @@ class Lemmatized {
@override
String toString() {
final childrenString = children
.map((c) => ' - ' + c.toString().split('\n').join('\n '))
.map((c) => ' - ${c.toString().split('\n').join('\n ')}')
.join('\n');
if (children.isEmpty) {
@@ -206,9 +248,10 @@ List<Lemmatized> _lemmatize(LemmatizationRule parentRule, String word) {
final filteredLemmatizationRules = parentRule.validChildClasses == null
? lemmatizationRules
: lemmatizationRules.where(
(r) => parentRule.validChildClasses!.contains(r.wordClass),
);
: [
for (final wordClass in parentRule.validChildClasses!)
...lemmatizationRulesByWordClass[wordClass]!,
];
for (final rule in filteredLemmatizationRules) {
if (rule.matches(word)) {
@@ -239,9 +282,6 @@ Lemmatized lemmatize(String word) {
return Lemmatized(
original: word,
rule: inputRule,
children: _lemmatize(
inputRule,
word,
),
children: _lemmatize(inputRule, word),
);
}
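To illustrate how a suffix `AllomorphPattern` behaves, the sketch below builds a standalone pattern mirroring the godan past-form rule (った can come from an う, つ, or る verb); it is an illustrative example under those assumptions, not code from this change:

```dart
// Sketch: った is ambiguous, so apply() returns one candidate per replacement.
final pastPattern = AllomorphPattern(
  patterns: {
    'った': ['う', 'つ', 'る'],
  },
  type: LemmatizationRuleType.suffix,
);

void demo() {
  // matches() reports whether any suffix applies; apply() swaps the suffix
  // for each replacement, yielding candidates like 買う/買つ/買る, of which
  // only 買う is a real verb.
  print(pastPattern.matches('買った'));
  print(pastPattern.apply('買った'));
}
```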

View File

@@ -1,10 +1,17 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
import 'package:jadb/util/lemmatizer/rules/godan-verbs.dart';
import 'package:jadb/util/lemmatizer/rules/i-adjectives.dart';
import 'package:jadb/util/lemmatizer/rules/ichidan-verbs.dart';
import 'package:jadb/util/lemmatizer/rules/godan_verbs.dart';
import 'package:jadb/util/lemmatizer/rules/i_adjectives.dart';
import 'package:jadb/util/lemmatizer/rules/ichidan_verbs.dart';
List<LemmatizationRule> lemmatizationRules = [
final List<LemmatizationRule> lemmatizationRules = List.unmodifiable([
...ichidanVerbLemmatizationRules,
...godanVerbLemmatizationRules,
...iAdjectiveLemmatizationRules,
];
]);
final Map<WordClass, List<LemmatizationRule>> lemmatizationRulesByWordClass =
Map.unmodifiable({
WordClass.ichidanVerb: ichidanVerbLemmatizationRules,
WordClass.iAdjective: iAdjectiveLemmatizationRules,
WordClass.godanVerb: godanVerbLemmatizationRules,
});
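The new per-class index lets `_lemmatize` collect only the rules admitted by a parent rule's `validChildClasses` instead of filtering the flat list; a minimal sketch of that lookup (hypothetical helper, same imports as above):

```dart
// Sketch: expand a set of word classes into the matching rules.
List<LemmatizationRule> rulesFor(Set<WordClass> classes) => [
  for (final wordClass in classes)
    ...lemmatizationRulesByWordClass[wordClass] ?? const <LemmatizationRule>[],
];
```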

View File

@@ -1,457 +0,0 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
List<LemmatizationRule> godanVerbLemmatizationRules = [
LemmatizationRule(
name: 'Godan verb - base form',
terminal: true,
pattern: AllomorphPattern(
patterns: {
'う': ['う'],
'く': ['く'],
'ぐ': ['ぐ'],
'す': ['す'],
'つ': ['つ'],
'ぬ': ['ぬ'],
'ぶ': ['ぶ'],
'む': ['む'],
'る': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative form',
pattern: AllomorphPattern(
patterns: {
'わない': ['う'],
'かない': ['く'],
'がない': ['ぐ'],
'さない': ['す'],
'たない': ['つ'],
'なない': ['ぬ'],
'ばない': ['ぶ'],
'まない': ['む'],
'らない': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - past form',
pattern: AllomorphPattern(
patterns: {
'した': ['す'],
'った': ['う', 'つ', 'る'],
'んだ': ['ぬ', 'ぶ', 'む'],
'いだ': ['ぐ'],
'いた': ['く'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - te-form',
pattern: AllomorphPattern(
patterns: {
'いて': ['', ''],
'して': ['す'],
'って': ['う', 'つ', 'る'],
'んで': ['ぬ', 'ぶ', 'む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - te-form with いる',
pattern: AllomorphPattern(
patterns: {
'いている': ['', ''],
'している': ['す'],
'っている': ['う', 'つ', 'る'],
'んでいる': ['ぬ', 'ぶ', 'む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - te-form with いた',
pattern: AllomorphPattern(
patterns: {
'いていた': ['', ''],
'していた': ['す'],
'っていた': ['う', 'つ', 'る'],
'んでいた': ['ぬ', 'ぶ', 'む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - conditional form',
pattern: AllomorphPattern(
patterns: {
'けば': ['く'],
'げば': ['ぐ'],
'せば': ['す'],
'てば': ['', '', ''],
'ねば': ['ぬ'],
'べば': ['ぶ'],
'めば': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - volitional form',
pattern: AllomorphPattern(
patterns: {
'おう': ['う'],
'こう': ['く'],
'ごう': ['ぐ'],
'そう': ['す'],
'とう': ['', '', ''],
'のう': ['ぬ'],
'ぼう': ['ぶ'],
'もう': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - potential form',
pattern: AllomorphPattern(
patterns: {
'ける': ['く'],
'げる': ['ぐ'],
'せる': ['す'],
'てる': ['', '', ''],
'ねる': ['ぬ'],
'べる': ['ぶ'],
'める': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - passive form',
pattern: AllomorphPattern(
patterns: {
'かれる': ['く'],
'がれる': ['ぐ'],
'される': ['す'],
'たれる': ['', '', ''],
'なれる': ['ぬ'],
'ばれる': ['ぶ'],
'まれる': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - causative form',
pattern: AllomorphPattern(
patterns: {
'かせる': ['く'],
'がせる': ['ぐ'],
'させる': ['す'],
'たせる': ['', '', ''],
'なせる': ['ぬ'],
'ばせる': ['ぶ'],
'ませる': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - causative-passive form',
pattern: AllomorphPattern(
patterns: {
'かされる': ['く'],
'がされる': ['ぐ'],
'される': ['す'],
'たされる': ['', '', ''],
'なされる': ['ぬ'],
'ばされる': ['ぶ'],
'まされる': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - imperative form',
pattern: AllomorphPattern(
patterns: {
'え': ['う'],
'け': ['く'],
'げ': ['ぐ'],
'せ': ['す'],
'': ['', '', ''],
'ね': ['ぬ'],
'べ': ['ぶ'],
'め': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative past form',
pattern: AllomorphPattern(
patterns: {
'わなかった': ['う'],
'かなかった': ['く'],
'がなかった': ['ぐ'],
'さなかった': ['す'],
'たなかった': ['つ'],
'ななかった': ['ぬ'],
'ばなかった': ['ぶ'],
'まなかった': ['む'],
'らなかった': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative te-form',
pattern: AllomorphPattern(
patterns: {
'わなくて': ['う'],
'かなくて': ['く'],
'がなくて': ['ぐ'],
'さなくて': ['す'],
'たなくて': ['つ'],
'ななくて': ['ぬ'],
'ばなくて': ['ぶ'],
'まなくて': ['む'],
'らなくて': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative conditional form',
pattern: AllomorphPattern(
patterns: {
'わなければ': ['う'],
'かなければ': ['く'],
'がなければ': ['ぐ'],
'さなければ': ['す'],
'たなければ': ['つ'],
'ななければ': ['ぬ'],
'ばなければ': ['ぶ'],
'まなければ': ['む'],
'らなければ': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative volitional form',
pattern: AllomorphPattern(
patterns: {
'うまい': ['う'],
'くまい': ['く'],
'ぐまい': ['ぐ'],
'すまい': ['す'],
'つまい': ['', '', ''],
'ぬまい': ['ぬ'],
'ぶまい': ['ぶ'],
'むまい': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative potential form',
pattern: AllomorphPattern(
patterns: {
'けない': ['く'],
'げない': ['ぐ'],
'せない': ['す'],
'てない': ['', '', ''],
'ねない': ['ぬ'],
'べない': ['ぶ'],
'めない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative passive form',
pattern: AllomorphPattern(
patterns: {
'かれない': ['く'],
'がれない': ['ぐ'],
'されない': ['す'],
'たれない': ['', '', ''],
'なれない': ['ぬ'],
'ばれない': ['ぶ'],
'まれない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative causative form',
pattern: AllomorphPattern(
patterns: {
'かせない': ['く'],
'がせない': ['ぐ'],
'させない': ['す'],
'たせない': ['', '', ''],
'なせない': ['ぬ'],
'ばせない': ['ぶ'],
'ませない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative causative-passive form',
pattern: AllomorphPattern(
patterns: {
'かされない': ['く'],
'がされない': ['ぐ'],
'されない': ['す'],
'たされない': ['', '', ''],
'なされない': ['ぬ'],
'ばされない': ['ぶ'],
'まされない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative imperative form',
pattern: AllomorphPattern(
patterns: {
'うな': ['う'],
'くな': ['く'],
'ぐな': ['ぐ'],
'すな': ['す'],
'つな': ['つ'],
'ぬな': ['ぬ'],
'ぶな': ['ぶ'],
'むな': ['む'],
'るな': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - desire form',
pattern: AllomorphPattern(
patterns: {
'きたい': ['く'],
'ぎたい': ['ぐ'],
'したい': ['す'],
'ちたい': ['つ'],
'にたい': ['ぬ'],
'びたい': ['ぶ'],
'みたい': ['む'],
'りたい': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative desire form',
pattern: AllomorphPattern(
patterns: {
'いたくない': ['う'],
'きたくない': ['く'],
'ぎたくない': ['ぐ'],
'したくない': ['す'],
'ちたくない': ['つ'],
'にたくない': ['ぬ'],
'びたくない': ['ぶ'],
'みたくない': ['む'],
'りたくない': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - past desire form',
pattern: AllomorphPattern(
patterns: {
'きたかった': ['く'],
'ぎたかった': ['ぐ'],
'したかった': ['す'],
'ちたかった': ['つ'],
'にたかった': ['ぬ'],
'びたかった': ['ぶ'],
'みたかった': ['む'],
'りたかった': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
LemmatizationRule(
name: 'Godan verb - negative past desire form',
pattern: AllomorphPattern(
patterns: {
'いたくなかった': ['う'],
'きたくなかった': ['く'],
'ぎたくなかった': ['ぐ'],
'したくなかった': ['す'],
'ちたくなかった': ['つ'],
'にたくなかった': ['ぬ'],
'びたくなかった': ['ぶ'],
'みたくなかった': ['む'],
'りたくなかった': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: [WordClass.godanVerb],
wordClass: WordClass.godanVerb,
),
];

View File

@@ -0,0 +1,509 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
final LemmatizationRule godanVerbBase = LemmatizationRule(
name: 'Godan verb - base form',
terminal: true,
pattern: AllomorphPattern(
patterns: {
'う': ['う'],
'く': ['く'],
'ぐ': ['ぐ'],
'す': ['す'],
'つ': ['つ'],
'ぬ': ['ぬ'],
'ぶ': ['ぶ'],
'む': ['む'],
'る': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegative = LemmatizationRule(
name: 'Godan verb - negative form',
pattern: AllomorphPattern(
patterns: {
'わない': ['う'],
'かない': ['く'],
'がない': ['ぐ'],
'さない': ['す'],
'たない': ['つ'],
'なない': ['ぬ'],
'ばない': ['ぶ'],
'まない': ['む'],
'らない': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbPast = LemmatizationRule(
name: 'Godan verb - past form',
pattern: AllomorphPattern(
patterns: {
'した': ['す'],
'った': ['う', 'つ', 'る'],
'んだ': ['ぬ', 'ぶ', 'む'],
'いだ': ['ぐ'],
'いた': ['く'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbTe = LemmatizationRule(
name: 'Godan verb - te-form',
pattern: AllomorphPattern(
patterns: {
'いて': ['', ''],
'して': ['す'],
'って': ['う', 'つ', 'る'],
'んで': ['ぬ', 'ぶ', 'む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbTeiru = LemmatizationRule(
name: 'Godan verb - te-form with いる',
pattern: AllomorphPattern(
patterns: {
'いている': ['', ''],
'している': ['す'],
'っている': ['う', 'つ', 'る'],
'んでいる': ['ぬ', 'ぶ', 'む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbTeita = LemmatizationRule(
name: 'Godan verb - te-form with いた',
pattern: AllomorphPattern(
patterns: {
'いていた': ['', ''],
'していた': ['す'],
'っていた': ['う', 'つ', 'る'],
'んでいた': ['ぬ', 'ぶ', 'む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbConditional = LemmatizationRule(
name: 'Godan verb - conditional form',
pattern: AllomorphPattern(
patterns: {
'けば': ['く'],
'げば': ['ぐ'],
'せば': ['す'],
'てば': ['', '', ''],
'ねば': ['ぬ'],
'べば': ['ぶ'],
'めば': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbVolitional = LemmatizationRule(
name: 'Godan verb - volitional form',
pattern: AllomorphPattern(
patterns: {
'おう': ['う'],
'こう': ['く'],
'ごう': ['ぐ'],
'そう': ['す'],
'とう': ['', '', ''],
'のう': ['ぬ'],
'ぼう': ['ぶ'],
'もう': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbPotential = LemmatizationRule(
name: 'Godan verb - potential form',
pattern: AllomorphPattern(
patterns: {
'ける': ['く'],
'げる': ['ぐ'],
'せる': ['す'],
'てる': ['', '', ''],
'ねる': ['ぬ'],
'べる': ['ぶ'],
'める': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbPassive = LemmatizationRule(
name: 'Godan verb - passive form',
pattern: AllomorphPattern(
patterns: {
'かれる': ['く'],
'がれる': ['ぐ'],
'される': ['す'],
'たれる': ['', '', ''],
'なれる': ['ぬ'],
'ばれる': ['ぶ'],
'まれる': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbCausative = LemmatizationRule(
name: 'Godan verb - causative form',
pattern: AllomorphPattern(
patterns: {
'かせる': ['く'],
'がせる': ['ぐ'],
'させる': ['す'],
'たせる': ['', '', ''],
'なせる': ['ぬ'],
'ばせる': ['ぶ'],
'ませる': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbCausativePassive = LemmatizationRule(
name: 'Godan verb - causative-passive form',
pattern: AllomorphPattern(
patterns: {
'かされる': ['く'],
'がされる': ['ぐ'],
'される': ['す'],
'たされる': ['', '', ''],
'なされる': ['ぬ'],
'ばされる': ['ぶ'],
'まされる': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbImperative = LemmatizationRule(
name: 'Godan verb - imperative form',
pattern: AllomorphPattern(
patterns: {
'え': ['う'],
'け': ['く'],
'げ': ['ぐ'],
'せ': ['す'],
'': ['', '', ''],
'ね': ['ぬ'],
'べ': ['ぶ'],
'め': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativePast = LemmatizationRule(
name: 'Godan verb - negative past form',
pattern: AllomorphPattern(
patterns: {
'わなかった': ['う'],
'かなかった': ['く'],
'がなかった': ['ぐ'],
'さなかった': ['す'],
'たなかった': ['つ'],
'ななかった': ['ぬ'],
'ばなかった': ['ぶ'],
'まなかった': ['む'],
'らなかった': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeTe = LemmatizationRule(
name: 'Godan verb - negative te-form',
pattern: AllomorphPattern(
patterns: {
'わなくて': ['う'],
'かなくて': ['く'],
'がなくて': ['ぐ'],
'さなくて': ['す'],
'たなくて': ['つ'],
'ななくて': ['ぬ'],
'ばなくて': ['ぶ'],
'まなくて': ['む'],
'らなくて': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeConditional = LemmatizationRule(
name: 'Godan verb - negative conditional form',
pattern: AllomorphPattern(
patterns: {
'わなければ': ['う'],
'かなければ': ['く'],
'がなければ': ['ぐ'],
'さなければ': ['す'],
'たなければ': ['つ'],
'ななければ': ['ぬ'],
'ばなければ': ['ぶ'],
'まなければ': ['む'],
'らなければ': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeVolitional = LemmatizationRule(
name: 'Godan verb - negative volitional form',
pattern: AllomorphPattern(
patterns: {
'うまい': ['う'],
'くまい': ['く'],
'ぐまい': ['ぐ'],
'すまい': ['す'],
'つまい': ['', '', ''],
'ぬまい': ['ぬ'],
'ぶまい': ['ぶ'],
'むまい': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativePotential = LemmatizationRule(
name: 'Godan verb - negative potential form',
pattern: AllomorphPattern(
patterns: {
'けない': ['く'],
'げない': ['ぐ'],
'せない': ['す'],
'てない': ['', '', ''],
'ねない': ['ぬ'],
'べない': ['ぶ'],
'めない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativePassive = LemmatizationRule(
name: 'Godan verb - negative passive form',
pattern: AllomorphPattern(
patterns: {
'かれない': ['く'],
'がれない': ['ぐ'],
'されない': ['す'],
'たれない': ['', '', ''],
'なれない': ['ぬ'],
'ばれない': ['ぶ'],
'まれない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeCausative = LemmatizationRule(
name: 'Godan verb - negative causative form',
pattern: AllomorphPattern(
patterns: {
'かせない': ['く'],
'がせない': ['ぐ'],
'させない': ['す'],
'たせない': ['', '', ''],
'なせない': ['ぬ'],
'ばせない': ['ぶ'],
'ませない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeCausativePassive = LemmatizationRule(
name: 'Godan verb - negative causative-passive form',
pattern: AllomorphPattern(
patterns: {
'かされない': ['く'],
'がされない': ['ぐ'],
'されない': ['す'],
'たされない': ['', '', ''],
'なされない': ['ぬ'],
'ばされない': ['ぶ'],
'まされない': ['む'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeImperative = LemmatizationRule(
name: 'Godan verb - negative imperative form',
pattern: AllomorphPattern(
patterns: {
'うな': ['う'],
'くな': ['く'],
'ぐな': ['ぐ'],
'すな': ['す'],
'つな': ['つ'],
'ぬな': ['ぬ'],
'ぶな': ['ぶ'],
'むな': ['む'],
'るな': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbDesire = LemmatizationRule(
name: 'Godan verb - desire form',
pattern: AllomorphPattern(
patterns: {
'きたい': ['く'],
'ぎたい': ['ぐ'],
'したい': ['す'],
'ちたい': ['つ'],
'にたい': ['ぬ'],
'びたい': ['ぶ'],
'みたい': ['む'],
'りたい': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativeDesire = LemmatizationRule(
name: 'Godan verb - negative desire form',
pattern: AllomorphPattern(
patterns: {
'いたくない': ['う'],
'きたくない': ['く'],
'ぎたくない': ['ぐ'],
'したくない': ['す'],
'ちたくない': ['つ'],
'にたくない': ['ぬ'],
'びたくない': ['ぶ'],
'みたくない': ['む'],
'りたくない': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbPastDesire = LemmatizationRule(
name: 'Godan verb - past desire form',
pattern: AllomorphPattern(
patterns: {
'きたかった': ['く'],
'ぎたかった': ['ぐ'],
'したかった': ['す'],
'ちたかった': ['つ'],
'にたかった': ['ぬ'],
'びたかった': ['ぶ'],
'みたかった': ['む'],
'りたかった': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final LemmatizationRule godanVerbNegativePastDesire = LemmatizationRule(
name: 'Godan verb - negative past desire form',
pattern: AllomorphPattern(
patterns: {
'いたくなかった': ['う'],
'きたくなかった': ['く'],
'ぎたくなかった': ['ぐ'],
'したくなかった': ['す'],
'ちたくなかった': ['つ'],
'にたくなかった': ['ぬ'],
'びたくなかった': ['ぶ'],
'みたくなかった': ['む'],
'りたくなかった': ['る'],
},
type: LemmatizationRuleType.suffix,
),
validChildClasses: {WordClass.godanVerb},
wordClass: WordClass.godanVerb,
);
final List<LemmatizationRule> godanVerbLemmatizationRules = List.unmodifiable([
godanVerbBase,
godanVerbNegative,
godanVerbPast,
godanVerbTe,
godanVerbTeiru,
godanVerbTeita,
godanVerbConditional,
godanVerbVolitional,
godanVerbPotential,
godanVerbPassive,
godanVerbCausative,
godanVerbCausativePassive,
godanVerbImperative,
godanVerbNegativePast,
godanVerbNegativeTe,
godanVerbNegativeConditional,
godanVerbNegativeVolitional,
godanVerbNegativePotential,
godanVerbNegativePassive,
godanVerbNegativeCausative,
godanVerbNegativeCausativePassive,
godanVerbNegativeImperative,
godanVerbDesire,
godanVerbNegativeDesire,
godanVerbPastDesire,
godanVerbNegativePastDesire,
]);

View File

@@ -1,61 +0,0 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
List<LemmatizationRule> iAdjectiveLemmatizationRules = [
LemmatizationRule.simple(
name: 'I adjective - base form',
terminal: true,
pattern: 'い',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - negative form',
pattern: 'くない',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - past form',
pattern: 'かった',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - negative past form',
pattern: 'くなかった',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - te-form',
pattern: 'くて',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - conditional form',
pattern: 'ければ',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - volitional form',
pattern: 'かろう',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
LemmatizationRule.simple(
name: 'I adjective - continuative form',
pattern: 'く',
replacement: 'い',
validChildClasses: [WordClass.iAdjective],
wordClass: WordClass.iAdjective,
),
];

View File

@@ -0,0 +1,77 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
final LemmatizationRule iAdjectiveBase = LemmatizationRule.simple(
name: 'I adjective - base form',
terminal: true,
pattern: 'い',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectiveNegative = LemmatizationRule.simple(
name: 'I adjective - negative form',
pattern: 'くない',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectivePast = LemmatizationRule.simple(
name: 'I adjective - past form',
pattern: 'かった',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectiveNegativePast = LemmatizationRule.simple(
name: 'I adjective - negative past form',
pattern: 'くなかった',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectiveTe = LemmatizationRule.simple(
name: 'I adjective - te-form',
pattern: 'くて',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectiveConditional = LemmatizationRule.simple(
name: 'I adjective - conditional form',
pattern: 'ければ',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectiveVolitional = LemmatizationRule.simple(
name: 'I adjective - volitional form',
pattern: 'かろう',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final LemmatizationRule iAdjectiveContinuative = LemmatizationRule.simple(
name: 'I adjective - continuative form',
pattern: 'く',
replacement: 'い',
validChildClasses: {WordClass.iAdjective},
wordClass: WordClass.iAdjective,
);
final List<LemmatizationRule> iAdjectiveLemmatizationRules = List.unmodifiable([
iAdjectiveBase,
iAdjectiveNegative,
iAdjectivePast,
iAdjectiveNegativePast,
iAdjectiveTe,
iAdjectiveConditional,
iAdjectiveVolitional,
iAdjectiveContinuative,
]);

View File

@@ -1,241 +0,0 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
import 'package:jadb/util/text_filtering.dart';
List<Pattern> lookBehinds = [
kanjiRegex,
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
];
List<LemmatizationRule> ichidanVerbLemmatizationRules = [
LemmatizationRule.simple(
name: 'Ichidan verb - base form',
terminal: true,
pattern: 'る',
replacement: 'る',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative form',
pattern: 'ない',
replacement: 'る',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - past form',
pattern: 'た',
replacement: 'る',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - te-form',
pattern: '',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - te-form with いる',
pattern: 'ている',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - te-form with いた',
pattern: 'ていた',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - conditional form',
pattern: 'れば',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - volitional form',
pattern: 'よう',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - potential form',
pattern: 'られる',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - passive form',
pattern: 'られる',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - causative form',
pattern: 'させる',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - causative passive form',
pattern: 'させられる',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - imperative form',
pattern: '',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative past form',
pattern: 'なかった',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative te-form',
pattern: 'なくて',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative conditional form',
pattern: 'なければ',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative volitional form',
pattern: 'なかろう',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative potential form',
pattern: 'られない',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative passive form',
pattern: 'られない',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative causative form',
pattern: 'させない',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative causative passive form',
pattern: 'させられない',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative imperative form',
pattern: 'るな',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - desire form',
pattern: 'たい',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative desire form',
pattern: 'たくない',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - past desire form',
pattern: 'たかった',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
LemmatizationRule.simple(
name: 'Ichidan verb - negative past desire form',
pattern: 'たくなかった',
replacement: '',
lookAheadBehind: lookBehinds,
validChildClasses: [WordClass.ichidanVerb],
wordClass: WordClass.ichidanVerb,
),
];

View File

@@ -0,0 +1,331 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
import 'package:jadb/util/text_filtering.dart';
final List<Pattern> _lookBehinds = [
kanjiRegex,
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
'',
];
final LemmatizationRule ichidanVerbBase = LemmatizationRule.simple(
name: 'Ichidan verb - base form',
terminal: true,
pattern: 'る',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegative = LemmatizationRule.simple(
name: 'Ichidan verb - negative form',
pattern: 'ない',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbPast = LemmatizationRule.simple(
name: 'Ichidan verb - past form',
pattern: 'た',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbTe = LemmatizationRule.simple(
name: 'Ichidan verb - te-form',
pattern: 'て',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbTeiru = LemmatizationRule.simple(
name: 'Ichidan verb - te-form with いる',
pattern: 'ている',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbTeita = LemmatizationRule.simple(
name: 'Ichidan verb - te-form with いた',
pattern: 'ていた',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbConditional = LemmatizationRule.simple(
name: 'Ichidan verb - conditional form',
pattern: 'れば',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbVolitional = LemmatizationRule.simple(
name: 'Ichidan verb - volitional form',
pattern: 'よう',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbPotential = LemmatizationRule.simple(
name: 'Ichidan verb - potential form',
pattern: 'られる',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbPassive = LemmatizationRule.simple(
name: 'Ichidan verb - passive form',
pattern: 'られる',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbCausative = LemmatizationRule.simple(
name: 'Ichidan verb - causative form',
pattern: 'させる',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbCausativePassive = LemmatizationRule.simple(
name: 'Ichidan verb - causative passive form',
pattern: 'させられる',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbImperative = LemmatizationRule.simple(
name: 'Ichidan verb - imperative form',
pattern: 'ろ',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativePast = LemmatizationRule.simple(
name: 'Ichidan verb - negative past form',
pattern: 'なかった',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeTe = LemmatizationRule.simple(
name: 'Ichidan verb - negative te-form',
pattern: 'なくて',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeConditional =
LemmatizationRule.simple(
name: 'Ichidan verb - negative conditional form',
pattern: 'なければ',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeConditionalVariant1 =
LemmatizationRule.simple(
name: 'Ichidan verb - negative conditional form (informal variant)',
pattern: 'なきゃ',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeConditionalVariant2 =
LemmatizationRule.simple(
name: 'Ichidan verb - negative conditional form (informal variant)',
pattern: 'なくちゃ',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeConditionalVariant3 =
LemmatizationRule.simple(
name: 'Ichidan verb - negative conditional form (informal variant)',
pattern: 'ないと',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeVolitional =
LemmatizationRule.simple(
name: 'Ichidan verb - negative volitional form',
pattern: 'なかろう',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativePotential = LemmatizationRule.simple(
name: 'Ichidan verb - negative potential form',
pattern: 'られない',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativePassive = LemmatizationRule.simple(
name: 'Ichidan verb - negative passive form',
pattern: 'られない',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeCausative = LemmatizationRule.simple(
name: 'Ichidan verb - negative causative form',
pattern: 'させない',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeCausativePassive =
LemmatizationRule.simple(
name: 'Ichidan verb - negative causative passive form',
pattern: 'させられない',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeImperative =
LemmatizationRule.simple(
name: 'Ichidan verb - negative imperative form',
pattern: 'るな',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbDesire = LemmatizationRule.simple(
name: 'Ichidan verb - desire form',
pattern: 'たい',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativeDesire = LemmatizationRule.simple(
name: 'Ichidan verb - negative desire form',
pattern: 'たくない',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbPastDesire = LemmatizationRule.simple(
name: 'Ichidan verb - past desire form',
pattern: 'たかった',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final LemmatizationRule ichidanVerbNegativePastDesire =
LemmatizationRule.simple(
name: 'Ichidan verb - negative past desire form',
pattern: 'たくなかった',
replacement: 'る',
lookAheadBehind: _lookBehinds,
validChildClasses: {WordClass.ichidanVerb},
wordClass: WordClass.ichidanVerb,
);
final List<LemmatizationRule> ichidanVerbLemmatizationRules =
List.unmodifiable([
ichidanVerbBase,
ichidanVerbNegative,
ichidanVerbPast,
ichidanVerbTe,
ichidanVerbTeiru,
ichidanVerbTeita,
ichidanVerbConditional,
ichidanVerbVolitional,
ichidanVerbPotential,
ichidanVerbPassive,
ichidanVerbCausative,
ichidanVerbCausativePassive,
ichidanVerbImperative,
ichidanVerbNegativePast,
ichidanVerbNegativeTe,
ichidanVerbNegativeConditional,
ichidanVerbNegativeConditionalVariant1,
ichidanVerbNegativeConditionalVariant2,
ichidanVerbNegativeConditionalVariant3,
ichidanVerbNegativeVolitional,
ichidanVerbNegativePotential,
ichidanVerbNegativePassive,
ichidanVerbNegativeCausative,
ichidanVerbNegativeCausativePassive,
ichidanVerbNegativeImperative,
ichidanVerbDesire,
ichidanVerbNegativeDesire,
ichidanVerbPastDesire,
ichidanVerbNegativePastDesire,
]);
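Ichidan conjugations stack (e.g. 食べさせられなかった), so a lemmatizer must be able to strip suffixes repeatedly until the base form remains. A toy Python sketch of that chained application (suffix list and ordering are illustrative, not the repository's actual rule set):

```python
# Toy chained lemmatization for ichidan verbs: repeatedly strip one
# conjugation suffix, restoring る, until no rule matches. Longer
# suffixes come first so e.g. なかった wins over た.
SUFFIXES = ["なかった", "させられる", "られる", "させる", "ない", "たい", "た", "て"]

def lemmatize_ichidan(word: str) -> str:
    changed = True
    while changed:
        changed = False
        for suffix in SUFFIXES:
            if word.endswith(suffix):
                word = word[: -len(suffix)] + "る"
                changed = True
                break
    return word
```

The real lemmatizer constrains chaining through `validChildClasses` and the kanji/kana look-behinds, so it only strips suffixes where an ichidan stem can actually precede them.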


@@ -1,9 +1,9 @@
// Source: https://github.com/Kimtaro/ve/blob/master/lib/providers/japanese_transliterators.rb
const hiragana_syllabic_n = 'ん';
const hiragana_small_tsu = 'っ';
const hiraganaSyllabicN = 'ん';
const hiraganaSmallTsu = 'っ';
const Map<String, String> hiragana_to_latin = {
const Map<String, String> hiraganaToLatin = {
'あ': 'a',
'い': 'i',
'う': 'u',
@@ -209,7 +209,7 @@ const Map<String, String> hiragana_to_latin = {
'ゟ': 'yori',
};
const Map<String, String> latin_to_hiragana = {
const Map<String, String> latinToHiragana = {
'a': 'あ',
'i': 'い',
'u': 'う',
@@ -481,12 +481,13 @@ const Map<String, String> latin_to_hiragana = {
'#~': '',
};
bool _smallTsu(String for_conversion) => for_conversion == hiragana_small_tsu;
bool _nFollowedByYuYeYo(String for_conversion, String kana) =>
for_conversion == hiragana_syllabic_n &&
bool _smallTsu(String forConversion) => forConversion == hiraganaSmallTsu;
bool _nFollowedByYuYeYo(String forConversion, String kana) =>
forConversion == hiraganaSyllabicN &&
kana.length > 1 &&
'やゆよ'.contains(kana.substring(1, 2));
/// Transliterates a string of hiragana characters to Latin script (romaji).
String transliterateHiraganaToLatin(String hiragana) {
String kana = hiragana;
String romaji = '';
@@ -495,17 +496,17 @@ String transliterateHiraganaToLatin(String hiragana) {
while (kana.isNotEmpty) {
final lengths = [if (kana.length > 1) 2, 1];
for (final length in lengths) {
final String for_conversion = kana.substring(0, length);
final String forConversion = kana.substring(0, length);
String? mora;
if (_smallTsu(for_conversion)) {
if (_smallTsu(forConversion)) {
geminate = true;
kana = kana.replaceRange(0, length, '');
break;
} else if (_nFollowedByYuYeYo(for_conversion, kana)) {
} else if (_nFollowedByYuYeYo(forConversion, kana)) {
mora = "n'";
}
mora ??= hiragana_to_latin[for_conversion];
mora ??= hiraganaToLatin[forConversion];
if (mora != null) {
if (geminate) {
@@ -516,7 +517,7 @@ String transliterateHiraganaToLatin(String hiragana) {
kana = kana.replaceRange(0, length, '');
break;
} else if (length == 1) {
romaji += for_conversion;
romaji += forConversion;
kana = kana.replaceRange(0, length, '');
}
}
@@ -524,48 +525,92 @@ String transliterateHiraganaToLatin(String hiragana) {
return romaji;
}
bool _doubleNFollowedByAIUEO(String for_conversion) =>
RegExp(r'^nn[aiueo]$').hasMatch(for_conversion);
bool _hasTableMatch(String for_conversion) =>
latin_to_hiragana[for_conversion] != null;
bool _hasDoubleConsonant(String for_conversion, int length) =>
for_conversion == 'tch' ||
(length == 2 &&
RegExp(r'^([kgsztdnbpmyrlwchf])\1$').hasMatch(for_conversion));
/// Returns a list of pairs of indices into the input and output strings,
/// indicating which characters in the input string correspond to which characters in the output string.
List<(int, int)> transliterateHiraganaToLatinSpan(String hiragana) {
String kana = hiragana;
String romaji = '';
final List<(int, int)> spans = [];
bool geminate = false;
int kanaIndex = 0;
while (kana.isNotEmpty) {
final lengths = [if (kana.length > 1) 2, 1];
for (final length in lengths) {
final String forConversion = kana.substring(0, length);
String? mora;
if (_smallTsu(forConversion)) {
geminate = true;
kana = kana.replaceRange(0, length, '');
break;
} else if (_nFollowedByYuYeYo(forConversion, kana)) {
mora = "n'";
}
mora ??= hiraganaToLatin[forConversion];
if (mora != null) {
if (geminate) {
geminate = false;
romaji += mora.substring(0, 1);
}
spans.add((kanaIndex, romaji.length));
romaji += mora;
kana = kana.replaceRange(0, length, '');
kanaIndex += length;
break;
} else if (length == 1) {
spans.add((kanaIndex, romaji.length));
romaji += forConversion;
kana = kana.replaceRange(0, length, '');
kanaIndex += length;
}
}
}
return spans;
}
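The span variant mirrors the greedy longest-match loop of `transliterateHiraganaToLatin`, but records an `(input index, output index)` pair for each chunk it consumes. A condensed Python sketch of that bookkeeping (toy mora table and function name are illustrative):

```python
# Greedy longest-match transliteration that also emits
# (kana_index, romaji_index) for every chunk consumed.
TABLE = {"きょ": "kyo", "う": "u", "と": "to"}

def hiragana_to_latin_spans(kana: str):
    romaji, spans, i = "", [], 0
    while i < len(kana):
        for length in (2, 1):
            if i + length > len(kana):
                continue  # not enough characters left for this chunk size
            chunk = kana[i : i + length]
            mora = TABLE.get(chunk)
            if mora is None and length > 1:
                continue  # no 2-char match; retry with 1 char
            spans.append((i, len(romaji)))
            romaji += mora if mora is not None else chunk  # pass unknowns through
            i += length
            break
    return romaji, spans
```

Because each pair marks where a mora starts in both strings, downstream code can translate a highlight range in one script into the corresponding range in the other.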
bool _doubleNFollowedByAIUEO(String forConversion) =>
RegExp(r'^nn[aiueo]$').hasMatch(forConversion);
bool _hasTableMatch(String forConversion) =>
latinToHiragana[forConversion] != null;
bool _hasDoubleConsonant(String forConversion, int length) =>
forConversion == 'tch' ||
(length == 2 &&
RegExp(r'^([kgsztdnbpmyrlwchf])\1$').hasMatch(forConversion));
/// Transliterates a string of Latin script (romaji) to hiragana characters.
String transliterateLatinToHiragana(String latin) {
String romaji =
latin.toLowerCase().replaceAll('mb', 'nb').replaceAll('mp', 'np');
String romaji = latin
.toLowerCase()
.replaceAll('mb', 'nb')
.replaceAll('mp', 'np');
String kana = '';
while (romaji.isNotEmpty) {
final lengths = [
if (romaji.length > 2) 3,
if (romaji.length > 1) 2,
1,
];
final lengths = [if (romaji.length > 2) 3, if (romaji.length > 1) 2, 1];
for (final length in lengths) {
String? mora;
int for_removal = length;
final String for_conversion = romaji.substring(0, length);
int forRemoval = length;
final String forConversion = romaji.substring(0, length);
if (_doubleNFollowedByAIUEO(for_conversion)) {
mora = hiragana_syllabic_n;
for_removal = 1;
} else if (_hasTableMatch(for_conversion)) {
mora = latin_to_hiragana[for_conversion];
} else if (_hasDoubleConsonant(for_conversion, length)) {
mora = hiragana_small_tsu;
for_removal = 1;
if (_doubleNFollowedByAIUEO(forConversion)) {
mora = hiraganaSyllabicN;
forRemoval = 1;
} else if (_hasTableMatch(forConversion)) {
mora = latinToHiragana[forConversion];
} else if (_hasDoubleConsonant(forConversion, length)) {
mora = hiraganaSmallTsu;
forRemoval = 1;
}
if (mora != null) {
kana += mora;
romaji = romaji.replaceRange(0, for_removal, '');
romaji = romaji.replaceRange(0, forRemoval, '');
break;
} else if (length == 1) {
kana += for_conversion;
kana += forConversion;
romaji = romaji.replaceRange(0, 1, '');
}
}
@@ -574,37 +619,83 @@ String transliterateLatinToHiragana(String latin) {
return kana;
}
/// Returns a list of pairs of indices into the input and output strings,
/// indicating which characters in the input string correspond to which characters in the output string.
List<(int, int)> transliterateLatinToHiraganaSpan(String latin) {
String romaji = latin
.toLowerCase()
.replaceAll('mb', 'nb')
.replaceAll('mp', 'np');
String kana = '';
final List<(int, int)> spans = [];
int latinIndex = 0;
while (romaji.isNotEmpty) {
final lengths = [if (romaji.length > 2) 3, if (romaji.length > 1) 2, 1];
for (final length in lengths) {
String? mora;
int forRemoval = length;
final String forConversion = romaji.substring(0, length);
if (_doubleNFollowedByAIUEO(forConversion)) {
mora = hiraganaSyllabicN;
forRemoval = 1;
} else if (_hasTableMatch(forConversion)) {
mora = latinToHiragana[forConversion];
} else if (_hasDoubleConsonant(forConversion, length)) {
mora = hiraganaSmallTsu;
forRemoval = 1;
}
if (mora != null) {
spans.add((latinIndex, kana.length));
kana += mora;
romaji = romaji.replaceRange(0, forRemoval, '');
latinIndex += forRemoval;
break;
} else if (length == 1) {
spans.add((latinIndex, kana.length));
kana += forConversion;
romaji = romaji.replaceRange(0, 1, '');
latinIndex += 1;
}
}
}
return spans;
}
String _transposeCodepointsInRange(
String text,
int distance,
int rangeStart,
int rangeEnd,
) =>
String.fromCharCodes(
text.codeUnits
.map((c) => c + ((rangeStart <= c && c <= rangeEnd) ? distance : 0)),
);
) => String.fromCharCodes(
text.codeUnits.map(
(c) => c + ((rangeStart <= c && c <= rangeEnd) ? distance : 0),
),
);
/// Transliterates a string of kana characters (hiragana or katakana) to Latin script (romaji).
String transliterateKanaToLatin(String kana) =>
transliterateHiraganaToLatin(transliterateKatakanaToHiragana(kana));
/// Transliterates a string of Latin script (romaji) to katakana characters.
String transliterateLatinToKatakana(String latin) =>
transliterateHiraganaToKatakana(transliterateLatinToHiragana(latin));
/// Transliterates a string of katakana characters to hiragana characters.
String transliterateKatakanaToHiragana(String katakana) =>
_transposeCodepointsInRange(katakana, -96, 12449, 12534);
/// Transliterates a string of hiragana characters to katakana characters.
String transliterateHiraganaToKatakana(String hiragana) =>
_transposeCodepointsInRange(hiragana, 96, 12353, 12438);
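`_transposeCodepointsInRange` works because the two kana blocks are laid out in parallel in Unicode: katakana U+30A1..U+30F6 sits exactly 96 code points (0x60) above hiragana U+3041..U+3096, so conversion is a fixed per-code-point offset. The same trick in Python:

```python
# Katakana -> hiragana by fixed code-point offset; characters outside the
# katakana range (U+30A1..U+30F6) pass through unchanged.
def katakana_to_hiragana(text: str) -> str:
    return "".join(
        chr(ord(c) - 0x60) if 0x30A1 <= ord(c) <= 0x30F6 else c
        for c in text
    )
```

Note that 12449 and 12534 in the Dart code above are exactly 0x30A1 and 0x30F6 in decimal.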
String transliterateFullwidthRomajiToHalfwidth(String halfwidth) =>
_transposeCodepointsInRange(
_transposeCodepointsInRange(
halfwidth,
-65248,
65281,
65374,
),
_transposeCodepointsInRange(halfwidth, -65248, 65281, 65374),
-12256,
12288,
12288,
@@ -612,12 +703,7 @@ String transliterateFullwidthRomajiToHalfwidth(String halfwidth) =>
String transliterateHalfwidthRomajiToFullwidth(String halfwidth) =>
_transposeCodepointsInRange(
_transposeCodepointsInRange(
halfwidth,
65248,
33,
126,
),
_transposeCodepointsInRange(halfwidth, 65248, 33, 126),
12256,
32,
32,


@@ -1,3 +1,3 @@
String escapeStringValue(String value) {
return "'" + value.replaceAll("'", "''") + "'";
return "'${value.replaceAll("'", "''")}'";
}
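The same escaping rule, transcribed to Python: a single quote inside an SQL string literal is escaped by doubling it, and the whole value is wrapped in single quotes.

```python
# SQL string-literal escaping: double embedded single quotes, then quote.
def escape_string_value(value: str) -> str:
    return "'" + value.replace("'", "''") + "'"
```

Parameterized queries remain preferable where the driver supports them; this helper only covers the literal-building case the Dart code handles.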


@@ -1,3 +1,16 @@
CREATE TABLE "JMdict_Version" (
"version" VARCHAR(10) PRIMARY KEY NOT NULL,
"date" DATE NOT NULL,
"hash" VARCHAR(64) NOT NULL
) WITHOUT ROWID;
CREATE TRIGGER "JMdict_Version_SingleRow"
BEFORE INSERT ON "JMdict_Version"
WHEN (SELECT COUNT(*) FROM "JMdict_Version") >= 1
BEGIN
SELECT RAISE(FAIL, 'Only one row allowed in JMdict_Version');
END;
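The single-row guard can be exercised directly: the `BEFORE INSERT` trigger rejects any insert once one row exists. A Python check against an in-memory SQLite database (table name shortened for the example):

```python
import sqlite3

# Reproduce the single-row version-table pattern and verify that the
# trigger rejects a second insert.
con = sqlite3.connect(":memory:")
con.executescript('''
CREATE TABLE "Version" (
  "version" VARCHAR(10) PRIMARY KEY NOT NULL,
  "date" DATE NOT NULL,
  "hash" VARCHAR(64) NOT NULL
) WITHOUT ROWID;
CREATE TRIGGER "Version_SingleRow"
BEFORE INSERT ON "Version"
WHEN (SELECT COUNT(*) FROM "Version") >= 1
BEGIN
  SELECT RAISE(FAIL, 'Only one row allowed in Version');
END;
''')
con.execute('INSERT INTO "Version" VALUES (?, ?, ?)', ("1.0", "2026-03-03", "abc"))
try:
    con.execute('INSERT INTO "Version" VALUES (?, ?, ?)', ("2.0", "2026-03-04", "def"))
    rejected = False
except sqlite3.DatabaseError:
    rejected = True  # trigger fired: only one row allowed
```

`RAISE(FAIL, ...)` aborts the offending statement without rolling back the enclosing transaction, which is sufficient here since each migration inserts exactly one version row.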
CREATE TABLE "JMdict_InfoDialect" (
"id" VARCHAR(4) PRIMARY KEY NOT NULL,
"description" TEXT NOT NULL


@@ -1,3 +1,16 @@
CREATE TABLE "JMdict_JLPT_Version" (
"version" VARCHAR(10) PRIMARY KEY NOT NULL,
"date" DATE NOT NULL,
"hash" VARCHAR(64) NOT NULL
) WITHOUT ROWID;
CREATE TRIGGER "JMdict_JLPT_Version_SingleRow"
BEFORE INSERT ON "JMdict_JLPT_Version"
WHEN (SELECT COUNT(*) FROM "JMdict_JLPT_Version") >= 1
BEGIN
SELECT RAISE(FAIL, 'Only one row allowed in JMdict_JLPT_Version');
END;
CREATE TABLE "JMdict_JLPTTag" (
"entryId" INTEGER NOT NULL,
"jlptLevel" CHAR(2) NOT NULL CHECK ("jlptLevel" in ('N5', 'N4', 'N3', 'N2', 'N1')),


@@ -1,5 +1,6 @@
CREATE TABLE "JMdict_EntryScore" (
"type" TEXT NOT NULL CHECK ("type" IN ('reading', 'kanji')),
"type" CHAR(1) NOT NULL CHECK ("type" IN ('r', 'k')),
"entryId" INTEGER NOT NULL REFERENCES "JMdict_Entry"("entryId"),
"elementId" INTEGER NOT NULL,
"score" INTEGER NOT NULL DEFAULT 0,
"common" BOOLEAN NOT NULL DEFAULT FALSE,
@@ -19,7 +20,8 @@ CREATE INDEX "JMdict_EntryScore_byType_byCommon" ON "JMdict_EntryScore"("type",
CREATE VIEW "JMdict_EntryScoreView_Reading" AS
SELECT
'reading' AS "type",
'r' AS "type",
"JMdict_ReadingElement"."entryId",
"JMdict_ReadingElement"."elementId",
(
"news" IS 1
@@ -50,7 +52,8 @@ LEFT JOIN "JMdict_JLPTTag" USING ("entryId");
CREATE VIEW "JMdict_EntryScoreView_Kanji" AS
SELECT
'kanji' AS "type",
'k' AS "type",
"JMdict_KanjiElement"."entryId",
"JMdict_KanjiElement"."elementId",
(
"news" IS 1
@@ -94,11 +97,12 @@ AFTER INSERT ON "JMdict_ReadingElement"
BEGIN
INSERT INTO "JMdict_EntryScore" (
"type",
"entryId",
"elementId",
"score",
"common"
)
SELECT "type", "elementId", "score", "common"
SELECT "type", "entryId", "elementId", "score", "common"
FROM "JMdict_EntryScoreView_Reading"
WHERE "elementId" = NEW."elementId";
END;
@@ -119,7 +123,7 @@ CREATE TRIGGER "JMdict_EntryScore_Delete_JMdict_ReadingElement"
AFTER DELETE ON "JMdict_ReadingElement"
BEGIN
DELETE FROM "JMdict_EntryScore"
WHERE "type" = 'reading'
WHERE "type" = 'r'
AND "elementId" = OLD."elementId";
END;
@@ -130,11 +134,12 @@ AFTER INSERT ON "JMdict_KanjiElement"
BEGIN
INSERT INTO "JMdict_EntryScore" (
"type",
"entryId",
"elementId",
"score",
"common"
)
SELECT "type", "elementId", "score", "common"
SELECT "type", "entryId", "elementId", "score", "common"
FROM "JMdict_EntryScoreView_Kanji"
WHERE "elementId" = NEW."elementId";
END;
@@ -155,7 +160,7 @@ CREATE TRIGGER "JMdict_EntryScore_Delete_JMdict_KanjiElement"
AFTER DELETE ON "JMdict_KanjiElement"
BEGIN
DELETE FROM "JMdict_EntryScore"
WHERE "type" = 'kanji'
WHERE "type" = 'k'
AND "elementId" = OLD."elementId";
END;
@@ -169,26 +174,9 @@ BEGIN
"score" = "JMdict_EntryScoreView"."score",
"common" = "JMdict_EntryScoreView"."common"
FROM "JMdict_EntryScoreView"
WHERE
(
(
"JMdict_EntryScoreView"."type" = 'kanji'
AND
"JMdict_EntryScoreView"."elementId" IN (
SELECT "elementId" FROM "JMdict_KanjiElement" WHERE "entryId" = NEW."entryId"
)
)
OR
(
"JMdict_EntryScoreView"."type" = 'reading'
AND
"JMdict_EntryScoreView"."elementId" IN (
SELECT "elementId" FROM "JMdict_ReadingElement" WHERE "entryId" = NEW."entryId"
)
)
)
AND "JMdict_EntryScoreView"."entryId" = "JMdict_EntryScore"."entryId"
AND "JMdict_EntryScoreView"."reading" = "JMdict_EntryScore"."reading";
WHERE "JMdict_EntryScoreView"."entryId" = NEW."entryId"
AND "JMdict_EntryScore"."entryId" = NEW."entryId"
AND "JMdict_EntryScoreView"."elementId" = "JMdict_EntryScore"."elementId";
END;
CREATE TRIGGER "JMdict_EntryScore_Update_JMdict_JLPTTag"
@@ -200,26 +188,9 @@ BEGIN
"score" = "JMdict_EntryScoreView"."score",
"common" = "JMdict_EntryScoreView"."common"
FROM "JMdict_EntryScoreView"
WHERE
(
(
"JMdict_EntryScoreView"."type" = 'kanji'
AND
"JMdict_EntryScoreView"."elementId" IN (
SELECT "elementId" FROM "JMdict_KanjiElement" WHERE "entryId" = NEW."entryId"
)
)
OR
(
"JMdict_EntryScoreView"."type" = 'reading'
AND
"JMdict_EntryScoreView"."elementId" IN (
SELECT "elementId" FROM "JMdict_ReadingElement" WHERE "entryId" = NEW."entryId"
)
)
)
AND "JMdict_EntryScoreView"."entryId" = "JMdict_EntryScore"."entryId"
AND "JMdict_EntryScoreView"."reading" = "JMdict_EntryScore"."reading";
WHERE "JMdict_EntryScoreView"."entryId" = NEW."entryId"
AND "JMdict_EntryScore"."entryId" = NEW."entryId"
AND "JMdict_EntryScoreView"."elementId" = "JMdict_EntryScore"."elementId";
END;
CREATE TRIGGER "JMdict_EntryScore_Delete_JMdict_JLPTTag"
@@ -230,24 +201,7 @@ BEGIN
"score" = "JMdict_EntryScoreView"."score",
"common" = "JMdict_EntryScoreView"."common"
FROM "JMdict_EntryScoreView"
WHERE
(
(
"JMdict_EntryScoreView"."type" = 'kanji'
AND
"JMdict_EntryScoreView"."elementId" IN (
SELECT "elementId" FROM "JMdict_KanjiElement" WHERE "entryId" = OLD."entryId"
)
)
OR
(
"JMdict_EntryScoreView"."type" = 'reading'
AND
"JMdict_EntryScoreView"."elementId" IN (
SELECT "elementId" FROM "JMdict_ReadingElement" WHERE "entryId" = OLD."entryId"
)
)
)
AND "JMdict_EntryScoreView"."entryId" = "JMdict_EntryScore"."entryId"
AND "JMdict_EntryScoreView"."reading" = "JMdict_EntryScore"."reading";
WHERE "JMdict_EntryScoreView"."entryId" = OLD."entryId"
AND "JMdict_EntryScore"."entryId" = OLD."entryId"
AND "JMdict_EntryScoreView"."elementId" = "JMdict_EntryScore"."elementId";
END;


@@ -1,3 +1,16 @@
CREATE TABLE "RADKFILE_Version" (
"version" VARCHAR(10) PRIMARY KEY NOT NULL,
"date" DATE NOT NULL,
"hash" VARCHAR(64) NOT NULL
) WITHOUT ROWID;
CREATE TRIGGER "RADKFILE_Version_SingleRow"
BEFORE INSERT ON "RADKFILE_Version"
WHEN (SELECT COUNT(*) FROM "RADKFILE_Version") >= 1
BEGIN
SELECT RAISE(FAIL, 'Only one row allowed in RADKFILE_Version');
END;
CREATE TABLE "RADKFILE" (
"kanji" CHAR(1) NOT NULL,
"radical" CHAR(1) NOT NULL,


@@ -1,3 +1,16 @@
CREATE TABLE "KANJIDIC_Version" (
"version" VARCHAR(10) PRIMARY KEY NOT NULL,
"date" DATE NOT NULL,
"hash" VARCHAR(64) NOT NULL
) WITHOUT ROWID;
CREATE TRIGGER "KANJIDIC_Version_SingleRow"
BEFORE INSERT ON "KANJIDIC_Version"
WHEN (SELECT COUNT(*) FROM "KANJIDIC_Version") >= 1
BEGIN
SELECT RAISE(FAIL, 'Only one row allowed in KANJIDIC_Version');
END;
CREATE TABLE "KANJIDIC_Character" (
"literal" CHAR(1) NOT NULL PRIMARY KEY,
"grade" INTEGER CHECK ("grade" BETWEEN 1 AND 10),


@@ -65,7 +65,7 @@ JOIN "JMdict_KanjiElement"
ON "JMdict_KanjiElementFTS"."entryId" = "JMdict_KanjiElement"."entryId"
AND "JMdict_KanjiElementFTS"."reading" LIKE '%' || "JMdict_KanjiElement"."reading"
JOIN "JMdict_EntryScore"
ON "JMdict_EntryScore"."type" = 'kanji'
ON "JMdict_EntryScore"."type" = 'k'
AND "JMdict_KanjiElement"."entryId" = "JMdict_EntryScore"."entryId"
AND "JMdict_KanjiElement"."reading" = "JMdict_EntryScore"."reading"
WHERE "JMdict_EntryScore"."common" = 1;
@@ -78,9 +78,9 @@ CREATE VIEW "JMdict_CombinedEntryScore"
AS
SELECT
CASE
WHEN "JMdict_EntryScore"."type" = 'kanji'
WHEN "JMdict_EntryScore"."type" = 'k'
THEN (SELECT entryId FROM "JMdict_KanjiElement" WHERE "elementId" = "JMdict_EntryScore"."elementId")
WHEN "JMdict_EntryScore"."type" = 'reading'
WHEN "JMdict_EntryScore"."type" = 'r'
THEN (SELECT entryId FROM "JMdict_ReadingElement" WHERE "elementId" = "JMdict_EntryScore"."elementId")
END AS "entryId",
MAX("JMdict_EntryScore"."score") AS "score",


@@ -0,0 +1,45 @@
CREATE TABLE "KanjiVG_Version" (
"version" VARCHAR(10) PRIMARY KEY NOT NULL,
"date" DATE NOT NULL,
"hash" VARCHAR(64) NOT NULL
) WITHOUT ROWID;
CREATE TRIGGER "KanjiVG_Version_SingleRow"
BEFORE INSERT ON "KanjiVG_Version"
WHEN (SELECT COUNT(*) FROM "KanjiVG_Version") >= 1
BEGIN
SELECT RAISE(FAIL, 'Only one row allowed in KanjiVG_Version');
END;
CREATE TABLE "KanjiVG_Entry" (
"character" CHAR(1) PRIMARY KEY NOT NULL
) WITHOUT ROWID;
CREATE TABLE "KanjiVG_StrokeNumber" (
"character" CHAR(1) NOT NULL REFERENCES "KanjiVG_Entry"("character"),
"strokeNum" INTEGER NOT NULL,
"x" REAL NOT NULL,
"y" REAL NOT NULL,
PRIMARY KEY ("character", "strokeNum")
) WITHOUT ROWID;
CREATE TABLE "KanjiVG_Path" (
"character" CHAR(1) NOT NULL REFERENCES "KanjiVG_Entry"("character"),
"pathId" TEXT NOT NULL,
"type" VARCHAR(10) NOT NULL,
"svgPath" TEXT NOT NULL,
PRIMARY KEY ("character", "pathId")
) WITHOUT ROWID;
CREATE TABLE "KanjiVG_PathGroup" (
"character" CHAR(1) NOT NULL REFERENCES "KanjiVG_Entry"("character"),
"groupId" TEXT NOT NULL,
"parentGroupId" TEXT REFERENCES "KanjiVG_PathGroup"("groupId"),
"element" TEXT,
"original" TEXT,
"position" VARCHAR(10),
"radical" TEXT,
"part" INTEGER,
PRIMARY KEY ("character", "groupId"),
CHECK ("position" IN ('bottom', 'kamae', 'kamaec', 'left', 'middle', 'nyo', 'nyoc', 'right', 'tare', 'tarec', 'top') OR "position" IS NULL)
) WITHOUT ROWID;


@@ -7,6 +7,29 @@ buildDartApplication {
version = "1.0.0";
inherit src;
dartEntryPoints."bin/jadb" = "bin/jadb.dart";
# NOTE: the default Dart hooks use `dart compile`, which cannot invoke the
# new Dart build hooks required by package:sqlite3 >= 3.0.0, so we override
# these phases to use `dart build` instead.
buildPhase = ''
runHook preBuild
mkdir -p "$out/bin"
dart build cli --target "bin/jadb.dart"
runHook postBuild
'';
installPhase = ''
runHook preInstall
mkdir -p "$out"
mv build/cli/*/bundle/* "$out/"
runHook postInstall
'';
autoPubspecLock = ../pubspec.lock;
meta.mainProgram = "jadb";

@@ -5,18 +5,18 @@ packages:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: e55636ed79578b9abca5fecf9437947798f5ef7456308b5cb85720b793eac92f
sha256: "3b19a47f6ea7c2632760777c78174f47f6aec1e05f0cd611380d4593b8af1dbc"
url: "https://pub.dev"
source: hosted
version: "82.0.0"
version: "96.0.0"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: "904ae5bb474d32c38fb9482e2d925d5454cda04ddd0e55d2e6826bc72f6ba8c0"
sha256: "0c516bc4ad36a1a75759e54d5047cb9d15cded4459df01aa35a0b5ec7db2c2a0"
url: "https://pub.dev"
source: hosted
version: "7.4.5"
version: "10.2.0"
args:
dependency: "direct main"
description:
@@ -49,6 +49,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "0.2.0"
code_assets:
dependency: transitive
description:
name: code_assets
sha256: "83ccdaa064c980b5596c35dd64a8d3ecc68620174ab9b90b6343b753aa721687"
url: "https://pub.dev"
source: hosted
version: "1.0.0"
collection:
dependency: "direct main"
description:
@@ -69,42 +77,42 @@ packages:
dependency: transitive
description:
name: coverage
sha256: "802bd084fb82e55df091ec8ad1553a7331b61c08251eef19a508b6f3f3a9858d"
sha256: "5da775aa218eaf2151c721b16c01c7676fbfdd99cebba2bf64e8b807a28ff94d"
url: "https://pub.dev"
source: hosted
version: "1.13.1"
version: "1.15.0"
crypto:
dependency: transitive
description:
name: crypto
sha256: "1e445881f28f22d6140f181e07737b22f1e099a5e1ff94b0af2f9e4a463f4855"
sha256: c8ea0233063ba03258fbcf2ca4d6dadfefe14f02fab57702265467a19f27fadf
url: "https://pub.dev"
source: hosted
version: "3.0.6"
version: "3.0.7"
csv:
dependency: "direct main"
description:
name: csv
sha256: c6aa2679b2a18cb57652920f674488d89712efaf4d3fdf2e537215b35fc19d6c
sha256: bef2950f7a753eb82f894a2eabc3072e73cf21c17096296a5a992797e50b1d0d
url: "https://pub.dev"
source: hosted
version: "6.0.0"
version: "7.1.0"
equatable:
dependency: "direct main"
description:
name: equatable
sha256: "567c64b3cb4cf82397aac55f4f0cbd3ca20d77c6c03bedbc4ceaddc08904aef7"
sha256: "3e0141505477fd8ad55d6eb4e7776d3fe8430be8e497ccb1521370c3f21a3e2b"
url: "https://pub.dev"
source: hosted
version: "2.0.7"
version: "2.0.8"
ffi:
dependency: transitive
description:
name: ffi
sha256: "289279317b4b16eb2bb7e271abccd4bf84ec9bdcbe999e278a94b804f5630418"
sha256: "6d7fd89431262d8f3125e81b50d3847a091d846eafcd4fdb88dd06f36d705a45"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
version: "2.2.0"
file:
dependency: transitive
description:
@@ -129,6 +137,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "2.1.3"
hooks:
dependency: transitive
description:
name: hooks
sha256: "7a08a0d684cb3b8fb604b78455d5d352f502b68079f7b80b831c62220ab0a4f6"
url: "https://pub.dev"
source: hosted
version: "1.0.1"
http_multi_server:
dependency: transitive
description:
@@ -153,22 +169,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "1.0.5"
js:
dependency: transitive
description:
name: js
sha256: "53385261521cc4a0c4658fd0ad07a7d14591cf8fc33abbceae306ddb974888dc"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
lints:
dependency: "direct dev"
description:
name: lints
sha256: c35bb79562d980e9a453fc715854e1ed39e24e7d0297a880ef54e17f9874a9d7
sha256: "12f842a479589fea194fe5c5a3095abc7be0c1f2ddfa9a0e76aed1dbd26a87df"
url: "https://pub.dev"
source: hosted
version: "5.1.1"
version: "6.1.0"
logging:
dependency: transitive
description:
@@ -181,18 +189,18 @@ packages:
dependency: transitive
description:
name: matcher
sha256: dc58c723c3c24bf8d3e2d3ad3f2f9d7bd9cf43ec6feaa64181775e60190153f2
sha256: "12956d0ad8390bbcc63ca2e1469c0619946ccb52809807067a7020d57e647aa6"
url: "https://pub.dev"
source: hosted
version: "0.12.17"
version: "0.12.18"
meta:
dependency: transitive
description:
name: meta
sha256: "23f08335362185a5ea2ad3a4e597f1375e78bce8a040df5c600c8d3552ef2394"
sha256: "9f29b9bcc8ee287b1a31e0d01be0eae99a930dbffdaecf04b3f3d82a969f296f"
url: "https://pub.dev"
source: hosted
version: "1.17.0"
version: "1.18.1"
mime:
dependency: transitive
description:
@@ -201,6 +209,14 @@ packages:
url: "https://pub.dev"
source: hosted
version: "2.0.0"
native_toolchain_c:
dependency: transitive
description:
name: native_toolchain_c
sha256: "89e83885ba09da5fdf2cdacc8002a712ca238c28b7f717910b34bcd27b0d03ac"
url: "https://pub.dev"
source: hosted
version: "0.17.4"
node_preamble:
dependency: transitive
description:
@@ -218,7 +234,7 @@ packages:
source: hosted
version: "2.2.0"
path:
dependency: transitive
dependency: "direct main"
description:
name: path
sha256: "75cca69d1490965be98c73ceaea117e8a04dd21217b37b292c9ddbec0d955bc5"
@@ -229,18 +245,18 @@ packages:
dependency: transitive
description:
name: petitparser
sha256: "07c8f0b1913bcde1ff0d26e57ace2f3012ccbf2b204e070290dad3bb22797646"
sha256: "91bd59303e9f769f108f8df05e371341b15d59e995e6806aefab827b58336675"
url: "https://pub.dev"
source: hosted
version: "6.1.0"
version: "7.0.2"
pool:
dependency: transitive
description:
name: pool
sha256: "20fe868b6314b322ea036ba325e6fc0711a22948856475e2c2b6306e8ab39c2a"
sha256: "978783255c543aa3586a1b3c21f6e9d720eb315376a915872c61ef8b5c20177d"
url: "https://pub.dev"
source: hosted
version: "1.5.1"
version: "1.5.2"
pub_semver:
dependency: transitive
description:
@@ -301,34 +317,34 @@ packages:
dependency: transitive
description:
name: source_span
sha256: "254ee5351d6cb365c859e20ee823c3bb479bf4a293c22d17a9f1bf144ce86f7c"
sha256: "56a02f1f4cd1a2d96303c0144c93bd6d909eea6bee6bf5a0e0b685edbd4c47ab"
url: "https://pub.dev"
source: hosted
version: "1.10.1"
version: "1.10.2"
sqflite_common:
dependency: "direct main"
description:
name: sqflite_common
sha256: "84731e8bfd8303a3389903e01fb2141b6e59b5973cacbb0929021df08dddbe8b"
sha256: "6ef422a4525ecc601db6c0a2233ff448c731307906e92cabc9ba292afaae16a6"
url: "https://pub.dev"
source: hosted
version: "2.5.5"
version: "2.5.6"
sqflite_common_ffi:
dependency: "direct main"
description:
name: sqflite_common_ffi
sha256: "1f3ef3888d3bfbb47785cc1dda0dc7dd7ebd8c1955d32a9e8e9dae1e38d1c4c1"
sha256: c59fcdc143839a77581f7a7c4de018e53682408903a0a0800b95ef2dc4033eff
url: "https://pub.dev"
source: hosted
version: "2.3.5"
version: "2.4.0+2"
sqlite3:
dependency: transitive
dependency: "direct main"
description:
name: sqlite3
sha256: "310af39c40dd0bb2058538333c9d9840a2725ae0b9f77e4fd09ad6696aa8f66e"
sha256: b7cf6b37667f6a921281797d2499ffc60fb878b161058d422064f0ddc78f6aa6
url: "https://pub.dev"
source: hosted
version: "2.7.5"
version: "3.1.6"
stack_trace:
dependency: transitive
description:
@@ -357,10 +373,10 @@ packages:
dependency: transitive
description:
name: synchronized
sha256: "0669c70faae6270521ee4f05bffd2919892d42d1276e6c495be80174b6bc0ef6"
sha256: c254ade258ec8282947a0acbbc90b9575b4f19673533ee46f2f6e9b3aeefd7c0
url: "https://pub.dev"
source: hosted
version: "3.3.1"
version: "3.4.0"
term_glyph:
dependency: transitive
description:
@@ -373,26 +389,26 @@ packages:
dependency: "direct dev"
description:
name: test
sha256: "0561f3a2cfd33d10232360f16dfcab9351cfb7ad9b23e6cd6e8c7fb0d62c7ac3"
sha256: "54c516bbb7cee2754d327ad4fca637f78abfc3cbcc5ace83b3eda117e42cd71a"
url: "https://pub.dev"
source: hosted
version: "1.26.1"
version: "1.29.0"
test_api:
dependency: transitive
description:
name: test_api
sha256: "522f00f556e73044315fa4585ec3270f1808a4b186c936e612cab0b565ff1e00"
sha256: "93167629bfc610f71560ab9312acdda4959de4df6fac7492c89ff0d3886f6636"
url: "https://pub.dev"
source: hosted
version: "0.7.6"
version: "0.7.9"
test_core:
dependency: transitive
description:
name: test_core
sha256: "8619a9a45be044b71fe2cd6b77b54fd60f1c67904c38d48706e2852a2bda1c60"
sha256: "394f07d21f0f2255ec9e3989f21e54d3c7dc0e6e9dbce160e5a9c1a6be0e2943"
url: "https://pub.dev"
source: hosted
version: "0.6.10"
version: "0.6.15"
typed_data:
dependency: transitive
description:
@@ -405,18 +421,18 @@ packages:
dependency: transitive
description:
name: vm_service
sha256: ddfa8d30d89985b96407efce8acbdd124701f96741f2d981ca860662f1c0dc02
sha256: "45caa6c5917fa127b5dbcfbd1fa60b14e583afdc08bfc96dda38886ca252eb60"
url: "https://pub.dev"
source: hosted
version: "15.0.0"
version: "15.0.2"
watcher:
dependency: transitive
description:
name: watcher
sha256: "69da27e49efa56a15f8afe8f4438c4ec02eff0a117df1b22ea4aad194fe1c104"
sha256: "1398c9f081a753f9226febe8900fce8f7d0a67163334e1c94a2438339d79d635"
url: "https://pub.dev"
source: hosted
version: "1.1.1"
version: "1.2.1"
web:
dependency: transitive
description:
@@ -453,10 +469,10 @@ packages:
dependency: "direct main"
description:
name: xml
sha256: b015a8ad1c488f66851d762d3090a21c600e479dc75e68328c52774040cf9226
sha256: "971043b3a0d3da28727e40ed3e0b5d18b742fa5a68665cca88e74b7876d5e025"
url: "https://pub.dev"
source: hosted
version: "6.5.0"
version: "6.6.1"
yaml:
dependency: transitive
description:
@@ -466,4 +482,4 @@ packages:
source: hosted
version: "3.1.3"
sdks:
dart: ">=3.7.0 <4.0.0"
dart: ">=3.10.1 <4.0.0"

@@ -4,24 +4,31 @@ version: 1.0.0
homepage: https://git.pvv.ntnu.no/oysteikt/jadb
environment:
sdk: '>=3.2.0 <4.0.0'
sdk: '^3.9.0'
dependencies:
args: ^2.7.0
collection: ^1.19.0
csv: ^6.0.0
csv: ^7.1.0
equatable: ^2.0.0
path: ^1.9.1
sqflite_common: ^2.5.0
sqflite_common_ffi: ^2.3.0
sqlite3: ^3.1.6
xml: ^6.5.0
dev_dependencies:
lints: ^5.0.0
lints: ^6.0.0
test: ^1.25.15
executables:
jadb: jadb
hooks:
user_defines:
sqlite3:
source: system
topics:
- database
- dictionary

@@ -0,0 +1,21 @@
import 'package:collection/collection.dart';
import 'package:jadb/const_data/kanji_grades.dart';
import 'package:test/test.dart';
void main() {
test('All constant kanji in jouyouKanjiByGrades are 2136 in total', () {
expect(jouyouKanjiByGrades.values.flattenedToSet.length, 2136);
});
// test('All constant kanji in jouyouKanjiByGrades are present in KANJIDIC2', () {
// });
// test('All constant kanji in jouyouKanjiByGrades have matching grade as in KANJIDIC2', () {
// });
// test('All constant kanji in jouyouKanjiByGradesAndStrokeCount have matching stroke count as in KANJIDIC2', () {
// });
}

@@ -0,0 +1,17 @@
import 'package:collection/collection.dart';
import 'package:jadb/const_data/radicals.dart';
import 'package:test/test.dart';
void main() {
test('All constant radicals are 253 in total', () {
expect(radicals.values.flattenedToSet.length, 253);
});
// test('All constant radicals have at least 1 associated kanji in KANJIDIC2', () {
// });
// test('All constant radicals match the stroke order listed in KANJIDIC2', () {
// });
}

@@ -1,9 +0,0 @@
import 'package:collection/collection.dart';
import 'package:jadb/const_data/kanji_grades.dart';
import 'package:test/test.dart';
void main() {
test("Assert 2136 kanji in jouyou set", () {
expect(JOUYOU_KANJI_BY_GRADES.values.flattenedToSet.length, 2136);
});
}

@@ -1,30 +1,20 @@
import 'dart:ffi';
import 'dart:io';
import 'package:jadb/models/create_empty_db.dart';
import 'package:jadb/search.dart';
import 'package:sqflite_common_ffi/sqflite_ffi.dart';
// import 'package:sqlite3/open.dart';
import 'package:test/test.dart';
import 'package:sqlite3/open.dart';
Future<DatabaseExecutor> setup_inmemory_database() async {
final libsqlitePath = Platform.environment['LIBSQLITE_PATH'];
Future<DatabaseExecutor> setupInMemoryDatabase() async {
final dbConnection = await createDatabaseFactoryFfi().openDatabase(
':memory:',
);
if (libsqlitePath == null) {
throw Exception("LIBSQLITE_PATH is not set");
}
final db_connection = await createDatabaseFactoryFfi(
ffiInit: () =>
open.overrideForAll(() => DynamicLibrary.open(libsqlitePath)),
).openDatabase(':memory:');
return db_connection;
return dbConnection;
}
void main() {
test("Create empty db", () async {
final db = await setup_inmemory_database();
test('Create empty db', () async {
final db = await setupInMemoryDatabase();
await createEmptyDb(db);

@@ -4,29 +4,49 @@ import 'package:test/test.dart';
import 'setup_database_connection.dart';
void main() {
test("Filter kanji", () async {
final connection = await setup_database_connection();
test('Filter kanji', () async {
final connection = await setupDatabaseConnection();
final result = await connection.filterKanji(
[
"a",
"b",
"c",
"",
"",
"",
"",
"",
"",
".",
"!",
"@",
";",
"",
],
deduplicate: false,
);
final result = await connection.filterKanji([
'a',
'b',
'c',
'',
'',
'',
'',
'',
'',
'.',
'!',
'@',
';',
'',
], deduplicate: false);
expect(result.join(), "漢字地字");
expect(result.join(), '漢字地字');
});
test('Filter kanji - deduplicate', () async {
final connection = await setupDatabaseConnection();
final result = await connection.filterKanji([
'a',
'b',
'c',
'',
'',
'',
'',
'',
'',
'.',
'!',
'@',
';',
'',
], deduplicate: true);
expect(result.join(), '漢字地');
});
}

@@ -5,17 +5,17 @@ import 'package:test/test.dart';
import 'setup_database_connection.dart';
void main() {
test("Search a kanji", () async {
final connection = await setup_database_connection();
test('Search a kanji', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchKanji('');
expect(result, isNotNull);
});
group("Search all jouyou kanji", () {
JOUYOU_KANJI_BY_GRADES.forEach((grade, characters) {
test("Search all kanji in grade $grade", () async {
final connection = await setup_database_connection();
group('Search all jouyou kanji', () {
jouyouKanjiByGrades.forEach((grade, characters) {
test('Search all kanji in grade $grade', () async {
final connection = await setupDatabaseConnection();
for (final character in characters) {
final result = await connection.jadbSearchKanji(character);

@@ -0,0 +1,257 @@
import 'package:jadb/models/common/jlpt_level.dart';
import 'package:jadb/models/word_search/word_search_match_span.dart';
import 'package:jadb/models/word_search/word_search_result.dart';
import 'package:jadb/models/word_search/word_search_ruby.dart';
import 'package:jadb/models/word_search/word_search_sense.dart';
import 'package:jadb/models/word_search/word_search_sources.dart';
import 'package:test/test.dart';
void main() {
test('Infer match whole word', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: '仮名')],
senses: [],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('仮名');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kanji,
start: 0,
end: 2,
index: 0,
),
]);
});
test('Infer match part of word', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: '仮名')],
senses: [],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('仮');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kanji,
start: 0,
end: 1,
index: 0,
),
]);
});
test('Infer match in middle of word', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: 'ありがとう')],
senses: [],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('りがと');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kanji,
start: 1,
end: 4,
index: 0,
),
]);
});
test('Infer match in furigana', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: '仮名', furigana: 'かな')],
senses: [],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('かな');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kana,
start: 0,
end: 2,
index: 0,
),
]);
});
test('Infer match in sense', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: '仮名')],
senses: [
WordSearchSense(
antonyms: [],
dialects: [],
englishDefinitions: ['kana'],
fields: [],
info: [],
languageSource: [],
misc: [],
partsOfSpeech: [],
restrictedToKanji: [],
restrictedToReading: [],
seeAlso: [],
),
],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('kana');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.sense,
start: 0,
end: 4,
index: 0,
),
]);
});
test('Infer multiple matches', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: '仮名', furigana: 'かな')],
senses: [
WordSearchSense(
antonyms: [],
dialects: [],
englishDefinitions: ['kana', 'the kana'],
fields: [],
info: [],
languageSource: [],
misc: [],
partsOfSpeech: [],
restrictedToKanji: [],
restrictedToReading: [],
seeAlso: [],
),
],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('kana');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.sense,
start: 0,
end: 4,
index: 0,
),
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.sense,
start: 4,
end: 8,
index: 0,
subIndex: 1,
),
]);
});
test('Infer match with no matches', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: '仮名', furigana: 'かな')],
senses: [
WordSearchSense(
antonyms: [],
dialects: [],
englishDefinitions: ['kana'],
fields: [],
info: [],
languageSource: [],
misc: [],
partsOfSpeech: [],
restrictedToKanji: [],
restrictedToReading: [],
seeAlso: [],
),
],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('xyz');
expect(wordSearchResult.matchSpans, isEmpty);
});
test('Infer multiple matches of same substring', () {
final wordSearchResult = WordSearchResult(
entryId: 0,
score: 0,
isCommon: false,
jlptLevel: JlptLevel.none,
kanjiInfo: {},
readingInfo: {},
japanese: [WordSearchRuby(base: 'ああ')],
senses: [],
sources: WordSearchSources.empty(),
);
wordSearchResult.inferMatchSpans('あ');
expect(wordSearchResult.matchSpans, [
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kanji,
start: 0,
end: 1,
index: 0,
),
WordSearchMatchSpan(
spanType: WordSearchMatchSpanType.kanji,
start: 1,
end: 2,
index: 0,
),
]);
});
}

@@ -3,22 +3,22 @@ import 'dart:io';
import 'package:jadb/_data_ingestion/open_local_db.dart';
import 'package:sqflite_common/sqlite_api.dart';
Future<Database> setup_database_connection() async {
final lib_sqlite_path = Platform.environment['LIBSQLITE_PATH'];
final jadb_path = Platform.environment['JADB_PATH'];
Future<Database> setupDatabaseConnection() async {
final libSqlitePath = Platform.environment['LIBSQLITE_PATH'];
final jadbPath = Platform.environment['JADB_PATH'];
if (lib_sqlite_path == null) {
throw Exception("LIBSQLITE_PATH is not set");
if (libSqlitePath == null) {
throw Exception('LIBSQLITE_PATH is not set');
}
if (jadb_path == null) {
throw Exception("JADB_PATH is not set");
if (jadbPath == null) {
throw Exception('JADB_PATH is not set');
}
final db_connection = await openLocalDb(
libsqlitePath: lib_sqlite_path,
jadbPath: jadb_path,
final dbConnection = await openLocalDb(
libsqlitePath: libSqlitePath,
jadbPath: jadbPath,
);
return db_connection;
return dbConnection;
}

@@ -4,29 +4,59 @@ import 'package:test/test.dart';
import 'setup_database_connection.dart';
void main() {
test("Search a word", () async {
final connection = await setup_database_connection();
final result = await connection.jadbSearchWord("kana");
test('Search a word - english - auto', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchWord('kana');
expect(result, isNotNull);
});
test("Get a word by id", () async {
final connection = await setup_database_connection();
test('Get word search count - english - auto', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchWordCount('kana');
expect(result, isNotNull);
});
test('Search a word - japanese kana - auto', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchWord('かな');
expect(result, isNotNull);
});
test('Get word search count - japanese kana - auto', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchWordCount('かな');
expect(result, isNotNull);
});
test('Search a word - japanese kanji - auto', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchWord('仮名');
expect(result, isNotNull);
});
test('Get word search count - japanese kanji - auto', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbSearchWordCount('仮名');
expect(result, isNotNull);
});
test('Get a word by id', () async {
final connection = await setupDatabaseConnection();
final result = await connection.jadbGetWordById(1577090);
expect(result, isNotNull);
});
test(
"Serialize all words",
'Serialize all words',
() async {
final connection = await setup_database_connection();
final connection = await setupDatabaseConnection();
// Test serializing all words
for (final letter in "aiueoksthnmyrw".split("")) {
for (final letter in 'aiueoksthnmyrw'.split('')) {
await connection.jadbSearchWord(letter);
}
},
timeout: Timeout.factor(100),
skip: "Very slow test",
skip: 'Very slow test',
);
}

@@ -0,0 +1,51 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
import 'package:jadb/util/lemmatizer/rules/godan_verbs.dart';
import 'package:jadb/util/lemmatizer/rules/ichidan_verbs.dart';
import 'package:test/test.dart';
const List<String> ichidanVerbs = [
'食べる',
'食べた',
'食べさせられた',
'食べたい',
'食べたくない',
'食べたくなかった',
];
const List<String> godanVerbs = [
'泳ぐ',
'泳いだ',
'泳げる',
// '泳げれた',
];
bool findRuleRecursively(Lemmatized result, LemmatizationRule expectedRule) {
if (result.rule == expectedRule) {
return true;
}
for (final c in result.children) {
if (findRuleRecursively(c, expectedRule)) {
return true;
}
}
return false;
}
void main() {
group('Lemmatize Ichidan Verbs', () {
for (final v in ichidanVerbs) {
test('Lemmatize Ichidan Verb $v', () {
expect(findRuleRecursively(lemmatize(v), ichidanVerbBase), true);
});
}
});
group('Lemmatize Godan Verbs', () {
for (final v in godanVerbs) {
test('Lemmatize Godan Verb $v', () {
expect(findRuleRecursively(lemmatize(v), godanVerbBase), true);
});
}
});
}

@@ -0,0 +1,14 @@
import 'package:jadb/util/lemmatizer/rules/godan_verbs.dart';
import 'package:test/test.dart';
void main() {
test('Test Godan Verb Base Rule', () {
expect(godanVerbBase.matches('泳ぐ'), true);
expect(godanVerbBase.apply('泳ぐ'), ['泳ぐ']);
});
test('Test Godan Verb Negative Rule', () {
expect(godanVerbNegative.matches('泳がない'), true);
expect(godanVerbNegative.apply('泳がない'), ['泳ぐ']);
});
}

@@ -0,0 +1,15 @@
import 'package:jadb/util/lemmatizer/rules/i_adjectives.dart';
import 'package:test/test.dart';
void main() {
test('Test i-adjective Base Rule', () {
expect(iAdjectiveBase.matches('怪しい'), true);
expect(iAdjectiveBase.apply('怪しい'), ['怪しい']);
});
test('Test i-adjective Negative Rule', () {
expect(iAdjectiveNegative.matches('怪しくない'), true);
expect(iAdjectiveNegative.apply('怪しくない'), ['怪しい']);
});
}

@@ -0,0 +1,14 @@
import 'package:jadb/util/lemmatizer/rules/ichidan_verbs.dart';
import 'package:test/test.dart';
void main() {
test('Test Ichidan Verb Base Rule', () {
expect(ichidanVerbBase.matches('食べる'), true);
expect(ichidanVerbBase.apply('食べる'), ['食べる']);
});
test('Test Ichidan Verb Negative Rule', () {
expect(ichidanVerbNegative.matches('食べない'), true);
expect(ichidanVerbNegative.apply('食べない'), ['食べる']);
});
}

@@ -0,0 +1,15 @@
import 'package:jadb/util/lemmatizer/lemmatizer.dart';
import 'package:jadb/util/lemmatizer/rules.dart';
import 'package:test/test.dart';
void main() {
test('Assert lemmatizerRulesByWordClass is correct', () {
for (final entry in lemmatizationRulesByWordClass.entries) {
final WordClass wordClass = entry.key;
final List<LemmatizationRule> rules = entry.value;
for (final LemmatizationRule rule in rules) {
expect(wordClass, rule.wordClass);
}
}
});
}

@@ -2,65 +2,121 @@ import 'package:jadb/util/romaji_transliteration.dart';
import 'package:test/test.dart';
void main() {
group("Romaji -> Hiragana", () {
test("Basic test", () {
final result = transliterateLatinToHiragana("katamari");
expect(result, "かたまり");
group('Romaji -> Hiragana', () {
test('Basic test', () {
final result = transliterateLatinToHiragana('katamari');
expect(result, 'かたまり');
});
test("Basic test with diacritics", () {
final result = transliterateLatinToHiragana("gadamari");
expect(result, "がだまり");
test('Basic test with diacritics', () {
final result = transliterateLatinToHiragana('gadamari');
expect(result, 'がだまり');
});
test("wi and we", () {
final result = transliterateLatinToHiragana("wiwe");
expect(result, "うぃうぇ");
test('wi and we', () {
final result = transliterateLatinToHiragana('wiwe');
expect(result, 'うぃうぇ');
});
test("nb = mb", () {
final result = transliterateLatinToHiragana("kanpai");
expect(result, "かんぱい");
test('nb = mb', () {
final result = transliterateLatinToHiragana('kanpai');
expect(result, 'かんぱい');
final result2 = transliterateLatinToHiragana("kampai");
expect(result2, "かんぱい");
final result2 = transliterateLatinToHiragana('kampai');
expect(result2, 'かんぱい');
});
test("Double n", () {
final result = transliterateLatinToHiragana("konnichiha");
expect(result, "こんにちは");
test('Double n', () {
final result = transliterateLatinToHiragana('konnichiha');
expect(result, 'こんにちは');
});
test("Double consonant", () {
final result = transliterateLatinToHiragana("kappa");
expect(result, "かっぱ");
test('Double consonant', () {
final result = transliterateLatinToHiragana('kappa');
expect(result, 'かっぱ');
});
});
group("Hiragana -> Romaji", () {
test("Basic test", () {
final result = transliterateHiraganaToLatin("かたまり");
expect(result, "katamari");
group('Romaji -> Hiragana Spans', () {
void Function() expectSpans(String input, List<String> expected) => () {
final result = transliterateLatinToHiraganaSpan(input);
final trans = transliterateLatinToHiragana(input);
for (int i = 0; i < result.length; i++) {
expect(
trans.substring(
result[i].$2,
i == result.length - 1 ? trans.length : result[i + 1].$2,
),
expected[i],
);
}
};
test('Basic test', expectSpans('katamari', ['か', 'た', 'ま', 'り']));
test(
'Basic test with diacritics',
expectSpans('gadamari', ['が', 'だ', 'ま', 'り']),
);
test('wi and we', expectSpans('wiwe', ['うぃ', 'うぇ']));
test('nb = mb', expectSpans('kanpai', ['か', 'ん', 'ぱ', 'い']));
test('nb = mb', expectSpans('kampai', ['か', 'ん', 'ぱ', 'い']));
test('Double n', expectSpans('konnichiha', ['こ', 'ん', 'に', 'ち', 'は']));
// TODO: fix the implementation
// test('Double consonant', expectSpans('kappa', ['か', 'っぱ']));
});
group('Hiragana -> Romaji', () {
test('Basic test', () {
final result = transliterateHiraganaToLatin('かたまり');
expect(result, 'katamari');
});
test("Basic test with diacritics", () {
final result = transliterateHiraganaToLatin("がだまり");
expect(result, "gadamari");
test('Basic test with diacritics', () {
final result = transliterateHiraganaToLatin('がだまり');
expect(result, 'gadamari');
});
test("whi and whe", () {
final result = transliterateHiraganaToLatin("うぃうぇ");
expect(result, "whiwhe");
test('whi and whe', () {
final result = transliterateHiraganaToLatin('うぃうぇ');
expect(result, 'whiwhe');
});
test("Double n", () {
final result = transliterateHiraganaToLatin("こんにちは");
expect(result, "konnichiha");
test('Double n', () {
final result = transliterateHiraganaToLatin('こんにちは');
expect(result, 'konnichiha');
});
test("Double consonant", () {
final result = transliterateHiraganaToLatin("かっぱ");
expect(result, "kappa");
test('Double consonant', () {
final result = transliterateHiraganaToLatin('かっぱ');
expect(result, 'kappa');
});
});
group('Hiragana -> Romaji Spans', () {
void Function() expectSpans(String input, List<String> expected) => () {
final result = transliterateHiraganaToLatinSpan(input);
final trans = transliterateHiraganaToLatin(input);
for (int i = 0; i < result.length; i++) {
expect(
trans.substring(
result[i].$2,
i == result.length - 1 ? trans.length : result[i + 1].$2,
),
expected[i],
);
}
};
test('Basic test', expectSpans('かたまり', ['ka', 'ta', 'ma', 'ri']));
test(
'Basic test with diacritics',
expectSpans('がだまり', ['ga', 'da', 'ma', 'ri']),
);
test('wi and we', expectSpans('うぃうぇ', ['whi', 'whe']));
test('Double n', expectSpans('こんにちは', ['ko', 'n', 'ni', 'chi', 'ha']));
// TODO: fix the implementation
// test('Double consonant', expectSpans('かっぱ', ['ka', 'ppa']));
});
}