9 Commits

Author SHA1 Message Date
7bacfc39a8 WIP (Build and test: failing after 17m17s) 2026-04-06 12:55:40 +09:00
c74a5f5cb6 flake.nix: build sqlite with ICU extension 2026-04-06 10:21:59 +09:00
4fbed59143 flake.nix: split off sqlite debugging tools into separate devshell 2026-04-06 10:21:11 +09:00
61ac226fc3 word_search_result: add getter for unusual kanji flag (Build and test: successful in 11m48s) 2026-04-02 15:53:39 +09:00
ede57a7a00 docs: init (Build and test: successful in 11m51s) 2026-04-01 16:48:40 +09:00
2ad1e038f1 tanos-jlpt: remove flatten from xml stream (Build and test: successful in 13m41s) 2026-04-01 16:04:44 +09:00
    This was previously used to compensate for a double-nesting bug, which has
    been fixed in the latest version of the xml package.
f40825de65 jmdict: skip inserting duplicate xrefs 2026-04-01 16:03:56 +09:00
5aa068eaec flake.nix: add sqldiff to devshell (Build and test: failing after 12m0s) 2026-04-01 15:27:25 +09:00
170c3a853e flake.lock: bump, pubspec.lock: update inputs (Build and test: failing after 10m4s) 2026-03-26 22:18:10 +09:00
15 changed files with 278 additions and 211 deletions


@@ -18,26 +18,4 @@ Note that while the license for the code is MIT, the data has various licenses.
| **Tanos JLPT levels:** | https://www.tanos.co.uk/jlpt/ |
| **Kangxi Radicals:** | https://ctext.org/kangxi-zidian |
## Implementation details
### Word search
The word search procedure is currently split into three parts:
1. **Entry ID query**:
Runs a complex query with various scoring factors to get a list of database IDs
pointing at dictionary entries, sorted by how likely each entry is to be the
word the caller is looking for. The output here is a `List<int>`.
2. **Data query**:
Takes the entry ID list from the previous step and performs all queries needed
to retrieve the full dictionary data for those IDs. The result is a struct with
a number of flattened lists containing data for all the dictionary entries.
These lists are sorted in the order the IDs were provided.
3. **Regrouping**:
Takes the flattened data and regroups the items into structs with a more
"hierarchical" structure. All data tagged with the same ID ends up in the same
struct. Returns a list of these structs.
See [docs/overview.md](./docs/overview.md) for notes and implementation details.

docs/lemmatizer.md (new file, +13)

@@ -0,0 +1,13 @@
# Lemmatizer
The lemmatizer is still quite experimental, but will play a more important role in the project in the future.
It is a manual implementation of a [Finite State Transducer](https://en.wikipedia.org/wiki/Morphological_dictionary#Finite_State_Transducers) for morphological parsing. The FST is used to recursively remove affixes from a word until it (hopefully) deconjugates into its dictionary form. The resulting deconjugation tree is then combined with queries into the dictionary data to determine whether a deconjugation leads to a real, known word.
Each rule is a separate static object declared in `lib/util/lemmatizer/rules`.
There is a CLI subcommand for testing the tool interactively:
```bash
dart run jadb lemmatize -w '食べさせられない'
```
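A toy sketch of the recursive affix stripping, with made-up rules and a one-word stand-in dictionary (the real rule objects in `lib/util/lemmatizer/rules` encode an FST and are far more involved; everything below is illustrative only):

```python
# Each toy rule maps a conjugated suffix to the forms it may derive from.
# Applying rules recursively yields a tree of candidate forms, which is
# then filtered against a (stand-in) dictionary.
RULES = [
    ("られない", ["られる"]),  # negative -> potential/passive
    ("られる", ["る"]),        # potential/passive -> plain
    ("させる", ["る", "す"]),  # causative -> plain
]

DICTIONARY = {"たべる"}  # stand-in for the real JMdict lookup

def deconjugate(word, seen=None):
    """Recursively strip affixes, returning every reachable candidate form."""
    seen = seen or set()
    if word in seen:
        return set()
    seen.add(word)
    candidates = {word}
    for suffix, replacements in RULES:
        if word.endswith(suffix):
            stem = word[: -len(suffix)]
            for repl in replacements:
                candidates |= deconjugate(stem + repl, seen)
    return candidates

def lemmatize(word):
    """Keep only candidates that are real dictionary words."""
    return sorted(c for c in deconjugate(word) if c in DICTIONARY)
```

With these toy rules, `lemmatize('食べさせられない')`-style input written in kana (`たべさせられない`) resolves to `たべる`.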

docs/overview.md (new file, +27)

@@ -0,0 +1,27 @@
# Overview
This is the documentation for `jadb`. Since I'm currently the only one working on it, the documentation is more or less just notes to myself, to ensure I remember how and why I implemented certain features the way I did a few months down the road. It is not comprehensive, formal documentation for downstream use, either for developers or for end-users.
- [Word Search](./word-search.md)
- [Lemmatizer](./lemmatizer.md)
## Project structure
- `lib/_data_ingestion` contains all the code for reading data sources, transforming them and compiling them into an SQLite database. This is for the most part isolated from the rest of the codebase, and should not be depended on by any code used for querying the database.
- `lib/cli` contains code for cli tooling (e.g. argument parsing, subcommand handling, etc.)
- `lib/const_data` contains database data that is small enough to warrant being hardcoded as dart constants.
- `lib/models` contains all the code for representing the database schema as Dart classes, and for converting between those classes and the actual database.
- `lib/search` contains all the code for searching the database.
- `lib/util/lemmatizer` contains the code for lemmatization, which will be used by the search code in the future.
- `migrations` contains raw SQL files for creating the database schema.
## SQLite naming conventions
> [!WARNING]
> None of these conventions are enforced yet; this will be fixed at some point.
- Indices are prefixed with `IDX__`
- Crossref tables are prefixed with `XREF__`
- Trigger names are prefixed with `TRG__`
- Views are prefixed with `VW__`
- All data sources should have a `<datasource>_Version` table, which contains a single row with the version of the data source used to generate the database.
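Since the conventions are not enforced yet, one possible enforcement path is a check against `sqlite_master`. A sketch using the prefixes listed above, run here on a made-up example schema (the function and schema names are hypothetical, not part of the project):

```python
import sqlite3

# Prefix conventions from the list above.
PREFIXES = {"index": "IDX__", "trigger": "TRG__", "view": "VW__"}

def naming_violations(db):
    """Return (type, name) pairs for schema objects missing their prefix."""
    rows = db.execute(
        "SELECT type, name FROM sqlite_master WHERE type IN ('index', 'trigger', 'view')"
    )
    return [
        (kind, name)
        for kind, name in rows
        # SQLite's own internal objects (sqlite_autoindex_...) are exempt.
        if not name.startswith(PREFIXES[kind]) and not name.startswith("sqlite_")
    ]

# Made-up schema to exercise the check:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Example (id INTEGER PRIMARY KEY, reading TEXT)")
db.execute("CREATE INDEX IDX__Example_reading ON Example(reading)")
db.execute("CREATE INDEX missing_prefix ON Example(id)")
```

Here `naming_violations(db)` reports only the second index, since it lacks the `IDX__` prefix.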

docs/word-search.md (new file, +21)

@@ -0,0 +1,21 @@
# Word search
The word search procedure is currently split into three parts:
1. **Entry ID query**:
Runs a complex query with various scoring factors to get a list of database IDs
pointing at dictionary entries, sorted by how likely each entry is to be the
word the caller is looking for. The output here is a `List<int>`.
2. **Data query**:
Takes the entry ID list from the previous step and performs all queries needed
to retrieve the full dictionary data for those IDs. The result is a struct with
a number of flattened lists containing data for all the dictionary entries.
These lists are sorted in the order the IDs were provided.
3. **Regrouping**:
Takes the flattened data and regroups the items into structs with a more
"hierarchical" structure. All data tagged with the same ID ends up in the same
struct. Returns a list of these structs.
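A minimal sketch of the three phases, assuming in-memory lists in place of the real SQL queries (the project itself is Dart; the function names and the scoring heuristic below are illustrative, not the actual implementation):

```python
from collections import defaultdict

# Phase 1 (sketch): score candidate entries, return their IDs best-first.
# The toy scoring stands in for the real multi-factor SQL query.
def query_entry_ids(word, entries):
    scored = [(e["id"], score(word, e)) for e in entries]
    scored = [(i, s) for i, s in scored if s > 0]
    scored.sort(key=lambda pair: -pair[1])
    return [i for i, _ in scored]  # the List<int> of the real code

def score(word, entry):
    if word in entry["readings"]:
        return 2  # exact reading match
    return 1 if any(word in r for r in entry["readings"]) else 0  # substring

# Phase 2 (sketch): fetch flattened rows for the IDs, preserving ID order.
def data_query(ids, senses_table):
    order = {entry_id: n for n, entry_id in enumerate(ids)}
    rows = [r for r in senses_table if r["entryId"] in order]
    rows.sort(key=lambda r: order[r["entryId"]])
    return {"senses": rows}  # the real struct has many such flattened lists

# Phase 3 (sketch): regroup flattened rows into one struct per entry ID.
def regroup(ids, flat):
    by_id = defaultdict(list)
    for row in flat["senses"]:
        by_id[row["entryId"]].append(row["gloss"])
    return [{"entryId": i, "glosses": by_id[i]} for i in ids]
```

The point of the split is that phase 1 can stay a single heavily tuned query, while phases 2 and 3 remain mechanical transformations keyed on the ID list.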

flake.lock (generated, 12 lines changed)

@@ -3,7 +3,7 @@
"jmdict-src": {
"flake": false,
"locked": {
"narHash": "sha256-lh46uougUzBrRhhwa7cOb32j5Jt9/RjBUhlVjwVzsII=",
"narHash": "sha256-eOc3a/AYNRFF3w6lWhyf0Sh92xeXS7+9Qvn0tvvH6Ys=",
"type": "file",
"url": "http://ftp.edrdg.org/pub/Nihongo/JMdict_e.gz"
},
@@ -15,7 +15,7 @@
"jmdict-with-examples-src": {
"flake": false,
"locked": {
"narHash": "sha256-5oS2xDyetbuSM6ax3LUjYA3N60x+D3Hg41HEXGFMqLQ=",
"narHash": "sha256-nx+WMkscWvA/XImKM7NESYVmICwSgXWOO1KPXasHY94=",
"type": "file",
"url": "http://ftp.edrdg.org/pub/Nihongo/JMdict_e_examp.gz"
},
@@ -27,7 +27,7 @@
"kanjidic2-src": {
"flake": false,
"locked": {
"narHash": "sha256-orSeQqSxhn9TtX3anYtbiMEm7nFkuomGnIKoVIUR2CM=",
"narHash": "sha256-2T/cAS/kZmVMURStgHVhz524+J9+v5onKs8eEYf2fY0=",
"type": "file",
"url": "https://www.edrdg.org/kanjidic/kanjidic2.xml.gz"
},
@@ -38,11 +38,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1771848320,
"narHash": "sha256-0MAd+0mun3K/Ns8JATeHT1sX28faLII5hVLq0L3BdZU=",
"lastModified": 1774386573,
"narHash": "sha256-4hAV26quOxdC6iyG7kYaZcM3VOskcPUrdCQd/nx8obc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2fc6539b481e1d2569f25f8799236694180c0993",
"rev": "46db2e09e1d3f113a13c0d7b81e2f221c63b8ce9",
"type": "github"
},
"original": {


@@ -43,7 +43,12 @@
"armv7l-linux"
];
forAllSystems = f: lib.genAttrs systems (system: f system nixpkgs.legacyPackages.${system});
forAllSystems = f: lib.genAttrs systems (system: let
pkgs = import nixpkgs {
inherit system;
overlays = [ self.overlays.sqlite-icu-ext ];
};
in f system pkgs);
in {
apps = forAllSystems (system: pkgs: {
default = {
@@ -77,15 +82,12 @@
devShells = forAllSystems (system: pkgs: {
default = pkgs.mkShell {
buildInputs = with pkgs; [
packages = with pkgs; [
dart
gnumake
lcov
sqlite-analyzer
sqlite-interactive
sqlite-web
# sqlint
sqlfluff
sqldiff
sqlite-interactive-icu-ext
];
env = {
LIBSQLITE_PATH = "${pkgs.sqlite.out}/lib/libsqlite3.so";
@@ -93,8 +95,34 @@
LD_LIBRARY_PATH = lib.makeLibraryPath [ pkgs.sqlite ];
};
};
sqlite-debugging = pkgs.mkShell {
packages = with pkgs; [
sqlite-interactive-icu-ext
sqlite-analyzer
sqlite-web
sqlint
sqlfluff
];
};
});
overlays.sqlite-icu-ext = final: prev: let
overrideArgs = prev': {
configureFlags = prev'.configureFlags ++ [
"--with-icu-config=${lib.getExe' prev.icu.dev "icu-config"}"
"--enable-icu-collations"
];
buildInputs = prev'.buildInputs ++ [
prev.icu
];
};
in {
sqlite-icu-ext = prev.sqlite.overrideAttrs overrideArgs;
sqlite-interactive-icu-ext = prev.sqlite-interactive.overrideAttrs overrideArgs;
};
packages = let
edrdgMetadata = {
license = [{
@@ -128,6 +156,8 @@
ln -s ${src} $out
'';
inherit (pkgs) sqlite-icu-ext sqlite-interactive-icu-ext;
jmdict = pkgs.callPackage ./nix/jmdict.nix {
inherit jmdict-src jmdict-with-examples-src edrdgMetadata;
};
@@ -142,17 +172,20 @@
database-tool = pkgs.callPackage ./nix/database_tool.nix {
inherit src;
sqlite = pkgs.sqlite-icu-ext;
};
database = pkgs.callPackage ./nix/database.nix {
inherit (self.packages.${system}) database-tool jmdict radkfile kanjidic2;
inherit src;
sqlite = pkgs.sqlite-icu-ext;
};
database-wal = pkgs.callPackage ./nix/database.nix {
inherit (self.packages.${system}) database-tool jmdict radkfile kanjidic2;
inherit src;
wal = true;
sqlite = pkgs.sqlite-icu-ext;
};
docs = pkgs.callPackage ./nix/docs.nix {


@@ -5,13 +5,23 @@ import 'package:jadb/_data_ingestion/jmdict/objects.dart';
import 'package:jadb/table_names/jmdict.dart';
import 'package:sqflite_common/sqlite_api.dart';
/// A wrapper for the result of resolving an xref, which includes the resolved entry and a flag
/// indicating whether the xref was ambiguous (i.e. could refer to multiple entries).
class ResolvedXref {
Entry entry;
bool ambiguous;
int? senseOrderNum;
ResolvedXref(this.entry, this.ambiguous);
ResolvedXref(this.entry, this.ambiguous, this.senseOrderNum);
}
/// Resolves an xref (pair of kanji, optionally reading, and optionally sense number) to a specific
/// JMdict entry, if possible.
///
/// If the xref is ambiguous (i.e. it could refer to multiple entries), the
/// first entry is returned, and the returned value is marked as ambiguous.
///
/// If the xref cannot be resolved to any entry at all, an exception is thrown.
ResolvedXref resolveXref(
SplayTreeMap<String, Set<Entry>> entriesByKanji,
SplayTreeMap<String, Set<Entry>> entriesByReading,
@@ -65,9 +75,10 @@ ResolvedXref resolveXref(
'kanjiRef: ${xref.kanjiRef}, readingRef: ${xref.readingRef}, '
'senseOrderNum: ${xref.senseOrderNum}',
);
return ResolvedXref(candidateEntries.first, true);
return ResolvedXref(candidateEntries.first, true, xref.senseOrderNum);
} else {
return ResolvedXref(candidateEntries.first, false);
return ResolvedXref(candidateEntries.first, false, xref.senseOrderNum);
}
}
@@ -152,14 +163,14 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
b.insert(JMdictTableNames.senseRestrictedToKanji, {
'entryId': e.entryId,
'senseId': s.senseId,
'kanji': rk,
'kanjiOrderNum': e.kanji.indexWhere((k) => k.reading == rk) + 1,
});
}
for (final rr in s.restrictedToReading) {
b.insert(JMdictTableNames.senseRestrictedToReading, {
'entryId': e.entryId,
'senseId': s.senseId,
'reading': rr,
'readingOrderNum': e.readings.indexWhere((r) => r.reading == rr) + 1,
});
}
for (final ls in s.languageSource) {
@@ -181,24 +192,17 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
print(' [JMdict] Building xref trees');
final SplayTreeMap<String, Set<Entry>> entriesByKanji = SplayTreeMap();
final SplayTreeMap<String, Set<Entry>> entriesByReading = SplayTreeMap();
for (final entry in entries) {
for (final kanji in entry.kanji) {
if (entriesByKanji.containsKey(kanji.reading)) {
entriesByKanji.update(kanji.reading, (list) => list..add(entry));
} else {
entriesByKanji.putIfAbsent(kanji.reading, () => {entry});
}
entriesByKanji.putIfAbsent(kanji.reading, () => {});
entriesByKanji.update(kanji.reading, (set) => set..add(entry));
}
}
final SplayTreeMap<String, Set<Entry>> entriesByReading = SplayTreeMap();
for (final entry in entries) {
for (final reading in entry.readings) {
if (entriesByReading.containsKey(reading.reading)) {
entriesByReading.update(reading.reading, (list) => list..add(entry));
} else {
entriesByReading.putIfAbsent(reading.reading, () => {entry});
}
entriesByReading.putIfAbsent(reading.reading, () => {});
entriesByReading.update(reading.reading, (set) => set..add(entry));
}
}
@@ -207,6 +211,7 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
for (final e in entries) {
for (final s in e.senses) {
final seenSeeAlsoXrefs = <int>{};
for (final xref in s.seeAlso) {
final resolvedEntry = resolveXref(
entriesByKanji,
@@ -214,16 +219,24 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
xref,
);
if (seenSeeAlsoXrefs.contains(resolvedEntry.entry.entryId)) {
print(
'WARNING: Skipping duplicate seeAlso xref from sense ${s.senseId} to entry ${resolvedEntry.entry.entryId}\n'
' (kanjiRef: ${xref.kanjiRef}, readingRef: ${xref.readingRef}, senseOrderNum: ${xref.senseOrderNum})',
);
continue;
}
seenSeeAlsoXrefs.add(resolvedEntry.entry.entryId);
b.insert(JMdictTableNames.senseSeeAlso, {
'senseId': s.senseId,
'xrefEntryId': resolvedEntry.entry.entryId,
'seeAlsoKanji': xref.kanjiRef,
'seeAlsoReading': xref.readingRef,
'seeAlsoSense': xref.senseOrderNum,
'xrefSenseOrderNum': resolvedEntry.senseOrderNum,
'ambiguous': resolvedEntry.ambiguous,
});
}
final seenAntonymXrefs = <int>{};
for (final ant in s.antonyms) {
final resolvedEntry = resolveXref(
entriesByKanji,
@@ -231,12 +244,18 @@ Future<void> seedJMDictData(List<Entry> entries, Database db) async {
ant,
);
if (seenAntonymXrefs.contains(resolvedEntry.entry.entryId)) {
print(
'WARNING: Skipping duplicate antonym xref from sense ${s.senseId} to entry ${resolvedEntry.entry.entryId}\n'
' (kanjiRef: ${ant.kanjiRef}, readingRef: ${ant.readingRef}, senseOrderNum: ${ant.senseOrderNum})',
);
continue;
}
seenAntonymXrefs.add(resolvedEntry.entry.entryId);
b.insert(JMdictTableNames.senseAntonyms, {
'senseId': s.senseId,
'xrefEntryId': resolvedEntry.entry.entryId,
'antonymKanji': ant.kanjiRef,
'antonymReading': ant.readingRef,
'antonymSense': ant.senseOrderNum,
'ambiguous': resolvedEntry.ambiguous,
});
}


@@ -27,6 +27,7 @@ Future<Database> openLocalDb({
await db.execute('PRAGMA journal_mode = WAL');
}
await db.execute('PRAGMA foreign_keys = ON');
await db.execute("SELECT icu_load_collation('ja_JP', 'japanese')");
},
readOnly: !readWrite,
),
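The `icu_load_collation` call above assumes an SQLite build with the ICU extension (which is what the flake changes provide). The underlying mechanism — a named collation that `COLLATE` and `ORDER BY` can reference — can be illustrated with Python's stdlib `sqlite3`, which registers the collation from the host language instead of loading it into SQLite. The length-based ordering below is purely illustrative, not ICU's Japanese collation:

```python
import sqlite3

# A custom collation: order by string length, then by codepoint. The real
# project instead loads ICU's collation into SQLite itself via
# SELECT icu_load_collation('ja_JP', 'japanese').
def by_length(a, b):
    key_a, key_b = (len(a), a), (len(b), b)
    return (key_a > key_b) - (key_a < key_b)  # must return -1/0/1

db = sqlite3.connect(":memory:")
db.create_collation("by_length", by_length)
# A column-level COLLATE makes ORDER BY on that column use the collation.
db.execute("CREATE TABLE words (reading TEXT COLLATE by_length)")
db.executemany("INSERT INTO words VALUES (?)", [("たべもの",), ("たべる",), ("た",)])
rows = [r[0] for r in db.execute("SELECT reading FROM words ORDER BY reading")]
# rows == ["た", "たべる", "たべもの"]
```

Declaring the collation on the column (as the migration does with `COLLATE japanese`) means every comparison and sort on that column picks it up without per-query `COLLATE` clauses.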


@@ -10,11 +10,12 @@ Future<List<JLPTRankedWord>> parseJLPTRankedWords(
) async {
final List<JLPTRankedWord> result = [];
final codec = CsvCodec(
final codec = Csv(
fieldDelimiter: ',',
lineDelimiter: '\n',
quoteMode: QuoteMode.strings,
escapeCharacter: '\\',
parseHeaders: false,
);
for (final entry in files.entries) {
@@ -29,7 +30,6 @@ Future<List<JLPTRankedWord>> parseJLPTRankedWords(
.openRead()
.transform(utf8.decoder)
.transform(codec.decoder)
.flatten()
.map((row) {
if (row.length != 3) {
throw Exception('Invalid line in $jlptLevel: $row');


@@ -1,5 +1,6 @@
import 'package:jadb/models/common/jlpt_level.dart';
import 'package:jadb/models/jmdict/jmdict_kanji_info.dart';
import 'package:jadb/models/jmdict/jmdict_misc.dart';
import 'package:jadb/models/jmdict/jmdict_reading_info.dart';
import 'package:jadb/models/word_search/word_search_match_span.dart';
import 'package:jadb/models/word_search/word_search_ruby.dart';
@@ -45,6 +46,13 @@ class WordSearchResult {
/// the original searchword.
List<WordSearchMatchSpan>? matchSpans;
/// Whether the first item in [japanese] contains kanji that are likely rare.
bool get hasUnusualKanji =>
(japanese.first.furigana != null &&
kanjiInfo[japanese.first.base] == JMdictKanjiInfo.rK) ||
senses.where((sense) => sense.misc.contains(JMdictMisc.onlyKana)).length >
(senses.length / 2);
/// All contents of [japanese], transliterated to romaji
List<String> get romaji => japanese
.map((word) => transliterateKanaToLatin(word.furigana ?? word.base))


@@ -21,83 +21,49 @@ List<WordSearchResult> regroupWordSearchResults({
}) {
final List<WordSearchResult> results = [];
final Set<int> commonEntryIds = linearWordQueryData.commonEntries
final commonEntryIds = linearWordQueryData.commonEntries
.map((entry) => entry['entryId'] as int)
.toSet();
final Map<int, List<Map<String, Object?>>> entryReadingElementsByEntryId =
linearWordQueryData.readingElements.groupListsBy(
(element) => element['entryId'] as int,
);
final Map<int, List<Map<String, Object?>>> entryKanjiElementsByEntryId =
linearWordQueryData.kanjiElements.groupListsBy(
(element) => element['entryId'] as int,
);
final Map<int, int> elementIdToEntryId = {
for (final element in linearWordQueryData.readingElements)
element['elementId'] as int: element['entryId'] as int,
for (final element in linearWordQueryData.kanjiElements)
element['elementId'] as int: element['entryId'] as int,
};
final Map<int, List<Map<String, Object?>>> entryReadingElementInfosByEntryId =
linearWordQueryData.readingElementInfos.groupListsBy(
(element) => elementIdToEntryId[element['elementId'] as int]!,
);
final Map<int, List<Map<String, Object?>>> entryKanjiElementInfosByEntryId =
linearWordQueryData.kanjiElementInfos.groupListsBy(
(element) => elementIdToEntryId[element['elementId'] as int]!,
);
final Map<int, List<Map<String, Object?>>>
entryReadingElementRestrictionsByEntryId = linearWordQueryData
.readingElementRestrictions
.groupListsBy(
(element) => elementIdToEntryId[element['elementId'] as int]!,
);
final Map<int, JlptLevel> entryJlptTagsByEntryId = linearWordQueryData
.jlptTags
.groupSetsBy((element) => element['entryId'] as int)
.map(
(final key, final value) => MapEntry(
key,
value.map((e) => JlptLevel.fromString(e['jlptLevel'] as String?)).min,
),
);
final Map<int, List<Map<String, Object?>>> entrySensesByEntryId =
linearWordQueryData.senses.groupListsBy(
(element) => element['entryId'] as int,
);
for (final scoredEntryId in entryIds) {
final List<Map<String, Object?>> entryReadingElements =
entryReadingElementsByEntryId[scoredEntryId.entryId] ?? const [];
final List<Map<String, Object?>> entryKanjiElements =
entryKanjiElementsByEntryId[scoredEntryId.entryId] ?? const [];
final List<Map<String, Object?>> entryReadingElementInfos =
entryReadingElementInfosByEntryId[scoredEntryId.entryId] ?? const [];
final List<Map<String, Object?>> entryKanjiElementInfos =
entryKanjiElementInfosByEntryId[scoredEntryId.entryId] ?? const [];
final List<Map<String, Object?>> entryReadingElementRestrictions =
entryReadingElementRestrictionsByEntryId[scoredEntryId.entryId] ??
const [];
final List<Map<String, Object?>> entryReadingElements = linearWordQueryData
.readingElements
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final List<Map<String, Object?>> entryKanjiElements = linearWordQueryData
.kanjiElements
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final List<Map<String, Object?>> entryJlptTags = linearWordQueryData
.jlptTags
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final jlptLevel =
entryJlptTags
.map((e) => JlptLevel.fromString(e['jlptLevel'] as String?))
.sorted((a, b) => b.compareTo(a))
.firstOrNull ??
JlptLevel.none;
final isCommon = commonEntryIds.contains(scoredEntryId.entryId);
final List<Map<String, Object?>> entrySenses = linearWordQueryData.senses
.where((element) => element['entryId'] == scoredEntryId.entryId)
.toList();
final GroupedWordResult entryReadingElementsGrouped = _regroupWords(
kanjiElements: entryKanjiElements,
kanjiElementInfos: entryKanjiElementInfos,
entryId: scoredEntryId.entryId,
readingElements: entryReadingElements,
readingElementInfos: entryReadingElementInfos,
readingElementRestrictions: entryReadingElementRestrictions,
kanjiElements: entryKanjiElements,
readingElementInfos: linearWordQueryData.readingElementInfos,
readingElementRestrictions:
linearWordQueryData.readingElementRestrictions,
kanjiElementInfos: linearWordQueryData.kanjiElementInfos,
);
final List<Map<String, Object?>> entrySenses =
entrySensesByEntryId[scoredEntryId.entryId] ?? const [];
final List<WordSearchSense> entrySensesGrouped = _regroupSenses(
senses: entrySenses,
senseAntonyms: linearWordQueryData.senseAntonyms,
@@ -116,10 +82,6 @@ List<WordSearchResult> regroupWordSearchResults({
senseAntonymsXrefData: linearWordQueryData.senseAntonymData,
);
final bool isCommon = commonEntryIds.contains(scoredEntryId.entryId);
final JlptLevel jlptLevel =
entryJlptTagsByEntryId[scoredEntryId.entryId] ?? JlptLevel.none;
results.add(
WordSearchResult(
score: scoredEntryId.score,
@@ -151,6 +113,7 @@ class GroupedWordResult {
}
GroupedWordResult _regroupWords({
required int entryId,
required List<Map<String, Object?>> kanjiElements,
required List<Map<String, Object?>> kanjiElementInfos,
required List<Map<String, Object?>> readingElements,
@@ -159,34 +122,36 @@ GroupedWordResult _regroupWords({
}) {
final List<WordSearchRuby> rubys = [];
final Map<int, Set<String>> readingElementRestrictionsSet =
readingElementRestrictions
.groupSetsBy((element) => element['elementId'] as int)
.map(
(key, value) => MapEntry(
key,
value.map((e) => e['restriction'] as String).toSet(),
),
);
final kanjiElements_ = kanjiElements
.where((element) => element['entryId'] == entryId)
.toList();
// Construct a cartesian product of kanji + readings, with exceptions made for items marked in `restrictions`.
for (final readingElement in readingElements) {
if (readingElement['doesNotMatchKanji'] == 1 || kanjiElements.isEmpty) {
final readingElements_ = readingElements
.where((element) => element['entryId'] == entryId)
.toList();
final readingElementRestrictions_ = readingElementRestrictions
.where((element) => element['entryId'] == entryId)
.toList();
for (final readingElement in readingElements_) {
if (readingElement['doesNotMatchKanji'] == 1 || kanjiElements_.isEmpty) {
final ruby = WordSearchRuby(base: readingElement['reading'] as String);
rubys.add(ruby);
continue;
}
for (final kanjiElement in kanjiElements) {
for (final kanjiElement in kanjiElements_) {
final kanji = kanjiElement['reading'] as String;
final reading = readingElement['reading'] as String;
// The 'restrictions' act as an allowlist, meaning that non-matching kanji elements should be ignored.
final restrictions =
readingElementRestrictionsSet[readingElement['elementId'] as int] ??
{};
if (restrictions.isNotEmpty && !restrictions.contains(kanji)) {
final restrictions = readingElementRestrictions_
.where((element) => element['reading'] == reading)
.toList();
if (restrictions.isNotEmpty &&
!restrictions.any((element) => element['restriction'] == kanji)) {
continue;
}
@@ -195,30 +160,35 @@ GroupedWordResult _regroupWords({
}
}
assert(
rubys.isNotEmpty,
'No readings found for entryId: ${kanjiElements.firstOrNull?['entryId'] ?? readingElements.firstOrNull?['entryId'] ?? '???'}',
);
assert(rubys.isNotEmpty, 'No readings found for entryId: $entryId');
final Map<int, String> readingElementIdsToReading = {
for (final element in readingElements)
for (final element in readingElements_)
element['elementId'] as int: element['reading'] as String,
};
final Map<int, String> kanjiElementIdsToReading = {
for (final element in kanjiElements)
for (final element in kanjiElements_)
element['elementId'] as int: element['reading'] as String,
};
final readingElementInfos_ = readingElementInfos
.where((element) => element['entryId'] == entryId)
.toList();
final kanjiElementInfos_ = kanjiElementInfos
.where((element) => element['entryId'] == entryId)
.toList();
return GroupedWordResult(
rubys: rubys,
readingInfos: {
for (final rei in readingElementInfos)
for (final rei in readingElementInfos_)
readingElementIdsToReading[rei['elementId'] as int]!:
JMdictReadingInfo.fromId(rei['info'] as String),
},
kanjiInfos: {
for (final kei in kanjiElementInfos)
for (final kei in kanjiElementInfos_)
kanjiElementIdsToReading[kei['elementId'] as int]!:
JMdictKanjiInfo.fromId(kei['info'] as String),
},


@@ -1,3 +1,5 @@
SELECT icu_load_collation('ja_JP', 'japanese');
CREATE TABLE "JMdict_Version" (
"version" VARCHAR(10) PRIMARY KEY NOT NULL,
"date" DATE NOT NULL,
@@ -55,13 +57,13 @@ CREATE TABLE "JMdict_KanjiElement" (
"elementId" INTEGER PRIMARY KEY,
"entryId" INTEGER NOT NULL REFERENCES "JMdict_Entry"("entryId"),
"orderNum" INTEGER NOT NULL,
"reading" TEXT NOT NULL,
"reading" TEXT NOT NULL COLLATE japanese,
"news" INTEGER CHECK ("news" BETWEEN 1 AND 2),
"ichi" INTEGER CHECK ("ichi" BETWEEN 1 AND 2),
"spec" INTEGER CHECK ("spec" BETWEEN 1 AND 2),
"gai" INTEGER CHECK ("gai" BETWEEN 1 AND 2),
"nf" INTEGER CHECK ("nf" BETWEEN 1 AND 48),
UNIQUE("entryId", "reading"),
-- UNIQUE("entryId", "reading"),
UNIQUE("entryId", "orderNum")
) WITHOUT ROWID;
@@ -80,14 +82,14 @@ CREATE TABLE "JMdict_ReadingElement" (
"elementId" INTEGER PRIMARY KEY,
"entryId" INTEGER NOT NULL REFERENCES "JMdict_Entry"("entryId"),
"orderNum" INTEGER NOT NULL,
"reading" TEXT NOT NULL,
"reading" TEXT NOT NULL COLLATE japanese,
"readingDoesNotMatchKanji" BOOLEAN NOT NULL DEFAULT FALSE,
"news" INTEGER CHECK ("news" BETWEEN 1 AND 2),
"ichi" INTEGER CHECK ("ichi" BETWEEN 1 AND 2),
"spec" INTEGER CHECK ("spec" BETWEEN 1 AND 2),
"gai" INTEGER CHECK ("gai" BETWEEN 1 AND 2),
"nf" INTEGER CHECK ("nf" BETWEEN 1 AND 48),
UNIQUE("entryId", "reading"),
-- UNIQUE("entryId", "reading"),
UNIQUE("entryId", "orderNum")
) WITHOUT ROWID;
@@ -120,17 +122,17 @@ CREATE INDEX "JMdict_Sense_byEntryId_byOrderNum" ON "JMdict_Sense"("entryId", "o
CREATE TABLE "JMdict_SenseRestrictedToKanji" (
"entryId" INTEGER NOT NULL,
"senseId" INTEGER NOT NULL REFERENCES "JMdict_Sense"("senseId"),
"kanji" TEXT NOT NULL,
FOREIGN KEY ("entryId", "kanji") REFERENCES "JMdict_KanjiElement"("entryId", "reading"),
PRIMARY KEY ("entryId", "senseId", "kanji")
"kanjiOrderNum" INTEGER NOT NULL CHECK ("kanjiOrderNum" > 0),
FOREIGN KEY ("entryId", "kanjiOrderNum") REFERENCES "JMdict_KanjiElement"("entryId", "orderNum"),
PRIMARY KEY ("entryId", "senseId", "kanjiOrderNum")
) WITHOUT ROWID;
CREATE TABLE "JMdict_SenseRestrictedToReading" (
"entryId" INTEGER NOT NULL,
"senseId" INTEGER NOT NULL REFERENCES "JMdict_Sense"("senseId"),
"reading" TEXT NOT NULL,
FOREIGN KEY ("entryId", "reading") REFERENCES "JMdict_ReadingElement"("entryId", "reading"),
PRIMARY KEY ("entryId", "senseId", "reading")
"readingOrderNum" INTEGER NOT NULL CHECK ("readingOrderNum" > 0),
FOREIGN KEY ("entryId", "readingOrderNum") REFERENCES "JMdict_ReadingElement"("entryId", "orderNum"),
PRIMARY KEY ("entryId", "senseId", "readingOrderNum")
) WITHOUT ROWID;
-- In order to add xrefs, you will need to have added the entry to xref to.
@@ -145,32 +147,23 @@ CREATE TABLE "JMdict_SenseRestrictedToReading" (
CREATE TABLE "JMdict_SenseSeeAlso" (
"senseId" INTEGER NOT NULL REFERENCES "JMdict_Sense"("senseId"),
"xrefEntryId" INTEGER NOT NULL,
"seeAlsoReading" TEXT,
"seeAlsoKanji" TEXT,
"seeAlsoSense" INTEGER,
"xrefEntryId" INTEGER NOT NULL REFERENCES "JMdict_Entry"("entryId"),
-- Sometimes the cross reference is to a specific sense
"xrefSenseOrderNum" INTEGER,
-- For some entries, the cross reference is ambiguous. This means that while the ingestion
-- has determined some xrefEntryId, it is not guaranteed to be the correct one.
"ambiguous" BOOLEAN NOT NULL DEFAULT FALSE,
FOREIGN KEY ("xrefEntryId", "seeAlsoKanji") REFERENCES "JMdict_KanjiElement"("entryId", "reading"),
FOREIGN KEY ("xrefEntryId", "seeAlsoReading") REFERENCES "JMdict_ReadingElement"("entryId", "reading"),
FOREIGN KEY ("xrefEntryId", "seeAlsoSense") REFERENCES "JMdict_Sense"("entryId", "orderNum"),
UNIQUE("senseId", "xrefEntryId", "seeAlsoReading", "seeAlsoKanji", "seeAlsoSense")
FOREIGN KEY ("xrefEntryId", "xrefSenseOrderNum") REFERENCES "JMdict_Sense"("entryId", "orderNum"),
UNIQUE("senseId", "xrefEntryId", "xrefSenseOrderNum")
);
CREATE TABLE "JMdict_SenseAntonym" (
"senseId" INTEGER NOT NULL REFERENCES "JMdict_Sense"("senseId"),
"xrefEntryId" INTEGER NOT NULL,
"antonymReading" TEXT,
"antonymKanji" TEXT,
"antonymSense" INTEGER,
"xrefEntryId" INTEGER NOT NULL REFERENCES "JMdict_Entry"("entryId"),
-- For some entries, the cross reference is ambiguous. This means that while the ingestion
-- has determined some xrefEntryId, it is not guaranteed to be the correct one.
"ambiguous" BOOLEAN NOT NULL DEFAULT FALSE,
FOREIGN KEY ("xrefEntryId", "antonymKanji") REFERENCES "JMdict_KanjiElement"("entryId", "reading"),
FOREIGN KEY ("xrefEntryId", "antonymReading") REFERENCES "JMdict_ReadingElement"("entryId", "reading"),
FOREIGN KEY ("xrefEntryId", "antonymSense") REFERENCES "JMdict_Sense"("entryId", "orderNum"),
UNIQUE("senseId", "xrefEntryId", "antonymReading", "antonymKanji", "antonymSense")
UNIQUE("senseId", "xrefEntryId")
);
-- These cross references are going to be mostly accessed from a sense


@@ -1,6 +1,7 @@
{
src,
buildDartApplication,
sqlite,
}:
buildDartApplication {
pname = "jadb-database-tool";
@@ -9,6 +10,9 @@ buildDartApplication {
dartEntryPoints."bin/jadb" = "bin/jadb.dart";
# NOTE: here we are overriding the implicitly added runtimeDependency from the package fixup in pub2nix.
runtimeDependencies = [ sqlite ];
# NOTE: the default dart hooks use `dart compile`, which is not able to call the
# new dart build hooks required to use package:sqlite3 >= 3.0.0, so we override
# these phases to use `dart build` instead.


@@ -5,18 +5,18 @@ packages:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: "3b19a47f6ea7c2632760777c78174f47f6aec1e05f0cd611380d4593b8af1dbc"
sha256: "8d718c5c58904f9937290fd5dbf2d6a0e02456867706bfb6cd7b81d394e738d5"
url: "https://pub.dev"
source: hosted
version: "96.0.0"
version: "98.0.0"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: "0c516bc4ad36a1a75759e54d5047cb9d15cded4459df01aa35a0b5ec7db2c2a0"
sha256: "6141ad5d092d1e1d13929c0504658bbeccc1703505830d7c26e859908f5efc88"
url: "https://pub.dev"
source: hosted
version: "10.2.0"
version: "12.0.0"
args:
dependency: "direct main"
description:
@@ -29,10 +29,10 @@ packages:
dependency: transitive
description:
name: async
sha256: "758e6d74e971c3e5aceb4110bfd6698efc7f501675bcfe0c775459a8140750eb"
sha256: e2eb0491ba5ddb6177742d2da23904574082139b07c1e33b8503b9f46f3e1a37
url: "https://pub.dev"
source: hosted
version: "2.13.0"
version: "2.13.1"
benchmark_harness:
dependency: "direct dev"
description:
@@ -101,10 +101,10 @@ packages:
dependency: "direct main"
description:
name: csv
sha256: bef2950f7a753eb82f894a2eabc3072e73cf21c17096296a5a992797e50b1d0d
sha256: "2e0a52fb729f2faacd19c9c0c954ff450bba37aa8ab999410309e2342e7013a2"
url: "https://pub.dev"
source: hosted
version: "7.1.0"
version: "8.0.0"
equatable:
dependency: "direct main"
description:
@@ -149,10 +149,10 @@ packages:
dependency: transitive
description:
name: hooks
sha256: "7a08a0d684cb3b8fb604b78455d5d352f502b68079f7b80b831c62220ab0a4f6"
sha256: e79ed1e8e1929bc6ecb6ec85f0cb519c887aa5b423705ded0d0f2d9226def388
url: "https://pub.dev"
source: hosted
version: "1.0.1"
version: "1.0.2"
http_multi_server:
dependency: transitive
description:
@@ -197,18 +197,18 @@ packages:
dependency: transitive
description:
name: matcher
sha256: "12956d0ad8390bbcc63ca2e1469c0619946ccb52809807067a7020d57e647aa6"
sha256: dc0b7dc7651697ea4ff3e69ef44b0407ea32c487a39fff6a4004fa585e901861
url: "https://pub.dev"
source: hosted
version: "0.12.18"
version: "0.12.19"
meta:
dependency: transitive
description:
name: meta
sha256: "9f29b9bcc8ee287b1a31e0d01be0eae99a930dbffdaecf04b3f3d82a969f296f"
sha256: df0c643f44ad098eb37988027a8e2b2b5a031fd3977f06bbfd3a76637e8df739
url: "https://pub.dev"
source: hosted
version: "1.18.1"
version: "1.18.2"
mime:
dependency: transitive
description:
@@ -221,10 +221,10 @@ packages:
dependency: transitive
description:
name: native_toolchain_c
sha256: "89e83885ba09da5fdf2cdacc8002a712ca238c28b7f717910b34bcd27b0d03ac"
sha256: "6ba77bb18063eebe9de401f5e6437e95e1438af0a87a3a39084fbd37c90df572"
url: "https://pub.dev"
source: hosted
version: "0.17.4"
version: "0.17.6"
node_preamble:
dependency: transitive
description:
@@ -349,10 +349,10 @@ packages:
dependency: "direct main"
description:
name: sqlite3
sha256: b7cf6b37667f6a921281797d2499ffc60fb878b161058d422064f0ddc78f6aa6
sha256: caa693ad15a587a2b4fde093b728131a1827903872171089dedb16f7665d3a91
url: "https://pub.dev"
source: hosted
version: "3.1.6"
version: "3.2.0"
stack_trace:
dependency: transitive
description:
@@ -397,26 +397,26 @@ packages:
dependency: "direct dev"
description:
name: test
sha256: "54c516bbb7cee2754d327ad4fca637f78abfc3cbcc5ace83b3eda117e42cd71a"
sha256: "8d9ceddbab833f180fbefed08afa76d7c03513dfdba87ffcec2718b02bbcbf20"
url: "https://pub.dev"
source: hosted
version: "1.29.0"
version: "1.31.0"
test_api:
dependency: transitive
description:
name: test_api
sha256: "93167629bfc610f71560ab9312acdda4959de4df6fac7492c89ff0d3886f6636"
sha256: "949a932224383300f01be9221c39180316445ecb8e7547f70a41a35bf421fb9e"
url: "https://pub.dev"
source: hosted
version: "0.7.9"
version: "0.7.11"
test_core:
dependency: transitive
description:
name: test_core
sha256: "394f07d21f0f2255ec9e3989f21e54d3c7dc0e6e9dbce160e5a9c1a6be0e2943"
sha256: "1991d4cfe85d5043241acac92962c3977c8d2f2add1ee73130c7b286417d1d34"
url: "https://pub.dev"
source: hosted
version: "0.6.15"
version: "0.6.17"
typed_data:
dependency: transitive
description:


@@ -9,7 +9,7 @@ environment:
dependencies:
args: ^2.7.0
collection: ^1.19.0
csv: ^7.1.0
csv: ^8.0.0
equatable: ^2.0.0
path: ^1.9.1
sqflite_common: ^2.5.0