With support for more file types, there must be a way to extract text from all of them. It is better to extract text from the source files than from the converted PDF file.
There are multiple options and multiple file types. As before, the highest priority is to use a Java/Scala library in order to reduce external dependencies.
For the Microsoft Office formats there is only one library I know of: Apache POI. It supports the common Office file types (doc/docx, xls/xlsx, ppt/pptx). However, it doesn't support the OpenDocument format (odt and ods).
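As a sketch, extracting text from a docx file with POI could look like the following. The file name is a placeholder; `XWPFWordExtractor` is POI's extractor class for the docx format (the older doc format has a separate extractor).

```java
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.poi.xwpf.extractor.XWPFWordExtractor;
import org.apache.poi.xwpf.usermodel.XWPFDocument;

public class PoiExtract {
    public static void main(String[] args) throws Exception {
        // "letter.docx" is a placeholder; any docx file works here.
        try (InputStream in = new FileInputStream("letter.docx");
             XWPFDocument doc = new XWPFDocument(in);
             XWPFWordExtractor extractor = new XWPFWordExtractor(doc)) {
            System.out.println(extractor.getText());
        }
    }
}
```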
For the OpenDocument format, there are two libraries:
Tika: The tika-parsers package contains an OpenDocument parser for extracting text. But it has a huge dependency tree, since it is a super-package containing parsers for almost every common file type.
ODF Toolkit: This depends on Apache Jena and also pulls in quite a few dependencies (though not as many as tika-parsers). That is not unreasonable, since it is a library for manipulating OpenDocument files; but all I need is to extract text. I created tests that extracted text from my odt/ods files. It worked at first sight, but running the tests in a loop resulted in strange NullPointerExceptions (it only worked on the first run).
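For comparison, a minimal sketch using the Tika facade, which detects the file type and selects a parser automatically (at the cost of pulling in the whole tika-parsers dependency tree; the file name is a placeholder):

```java
import java.io.File;

import org.apache.tika.Tika;

public class TikaExtract {
    public static void main(String[] args) throws Exception {
        // Tika detects the file type and picks a matching parser itself.
        Tika tika = new Tika();
        String text = tika.parseToString(new File("notes.odt"));
        System.out.println(text);
    }
}
```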
Richtext (rtf) is supported by the JDK (using javax.swing.text.rtf.RTFEditorKit).
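Since RTFEditorKit ships with the JDK, rtf extraction needs no extra dependency. A small self-contained example:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.swing.text.DefaultStyledDocument;
import javax.swing.text.Document;
import javax.swing.text.rtf.RTFEditorKit;

public class RtfExtract {
    // Reads RTF bytes into a styled document and returns its plain text.
    public static String extract(byte[] rtf) throws Exception {
        RTFEditorKit kit = new RTFEditorKit();
        Document doc = new DefaultStyledDocument();
        kit.read(new ByteArrayInputStream(rtf), doc, 0);
        return doc.getText(0, doc.getLength());
    }

    public static void main(String[] args) throws Exception {
        // A tiny inline RTF document, so the example runs without a file.
        String rtf = "{\\rtf1\\ansi Hello {\\b World}!}";
        System.out.println(extract(rtf.getBytes(StandardCharsets.US_ASCII)).trim());
    }
}
```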
For "image" PDF files, tesseract is used. For "text" PDF files, the Apache PDFBox library can be used.
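A minimal sketch of text extraction with PDFBox (assuming the 2.x API; the file name is a placeholder, and this only works for PDFs that actually contain a text layer):

```java
import java.io.File;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class PdfExtract {
    public static void main(String[] args) throws Exception {
        // "invoice.pdf" is a placeholder for any "text" PDF file.
        try (PDDocument doc = PDDocument.load(new File("invoice.pdf"))) {
            String text = new PDFTextStripper().getText(doc);
            System.out.println(text);
        }
    }
}
```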
There is also iText, which is available under an AGPL license.
For images and "image" PDF files, there is already tesseract in place.
HTML must be converted into a PDF file before text can be extracted.
Plain text files can be used as-is, obviously.