GNverifier release v0.6.2, Advanced Search

GNverifier can help to answer the following questions:

  • Is a name-string a real name?
  • Is it spelled correctly, and if not, what might be the correct spelling?
  • Is the name currently in use?
  • If it is a synonym, what do data sources consider to be the currently accepted name?
  • Is a name a homonym?
  • What taxon does the name point to, and where is it placed in various classifications?

The biannual GNverifier database update has been completed for 14 datasets; new datasets are pending addition.

The v0.6.2 version of GNverifier is out. It brings new features and some changes in the API and in the input and output formats. The main change is the ability to perform a search by name details, such as an abbreviated genus, author, or year.

For example, g:M. sp:galloprovincialis au:Oliv. y:-1800 will search for a name with a genus starting with M, galloprovincialis as the specific epithet, an author starting with Oliv, and a year earlier than or equal to 1800.
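Such a query can be passed to the command line app in place of a regular name-string. The exact invocation below is an assumption based on the syntax described above; check gnverifier -h for the authoritative form:

gnverifier "g:M. sp:galloprovincialis au:Oliv. y:-1800"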

Both name-verification and search return results in the same format. Because of that, we needed to rename some fields in the output so their meaning would correspond to both verification and search outputs.

  • inputId changed to id
  • input changed to name
  • preferredResults changed to results
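For example, a result that used the old field names now has this shape (a minimal sketch with illustrative values, not a complete output):

{
  "id": "b20a7c40-f593-5a68-a048-0a24742b4283",
  "name": "Drosophila melanogaster",
  "results": [ ... ]
}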

GNverifier’s API v1 can still be used (it did not undergo any format changes), but the web-page and the command line app gnverifier moved to the new API v0. When the new API stabilizes, it will be renamed to API v2.

Install GNverifier with Homebrew

brew tap gnames/gn
brew install gnverifier

or

brew upgrade gnverifier

Changes in functionality since v0.3.0

  • the web-page shows the date when a name was imported into the GNverifier database.
  • to make it easier to cite GNverifier, there is DOI information on its GitHub page.
  • tab-delimited values (TSV) format is now supported.
  • AlgaeBase was added to data-sources.
  • there are now options to return all matched results (use with caution, as the output might be excessively big).
  • score details are shown in the output and on the web-page.
  • advanced search was added to both the command line and web-based user interfaces.

Deprecation of services

GNI is the oldest version of the GN name-verification algorithms. Most of its functionality now exists in GNverifier, so the GNI web-site is going to be removed at the beginning of 2022.

The Scala version of GNI will also be scheduled for removal.

GNresolver will continue to run the longest. It will not be deprecated until GNverifier API v2 is released.

If you use the old systems, consider switching to GNverifier, because the older systems will eventually be deprecated and stopped.

GNparser (Go language) release 1.5.0

GNparser v1.5.0 is out. The following changes happened since 1.3.3:

v1.5.0

Courtesy of Toby Marsden (@tobymarsden), GNparser in ‘cultivars mode’ is able to parse graft-chimaeras. An example: “Cytisus purpureus + Laburnum anagyroides”. Note that cultivar-specific names are not recognized outside of the cultivars mode.

v1.4.2

Added support for authors with the prefix ‘ver’. An example: “Cryptopleura farlowiana (J.Agardh) ver Steeg & Jossly”.

v1.4.1

Fixed parsing of multinomials where the authorship is not separated by a space. An example: “Paeonia daurica coriifolia(Rupr.) D.Y.Hong”.

v1.4.0

Support for output in tab-separated values (TSV) format. Quite often the TSV format is much easier to parse than CSV: the tab character is much less common inside scientific names than the comma, so simply splitting by \t breaks a row into its components in most cases. It is still recommended to use CSV libraries for any given language to avoid unexpected problems.
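As an illustration, here is a minimal Go sketch of that splitting approach. The column layout is an assumption; check the header line of the actual output:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Read gnparser TSV output from stdin and print the first two columns.
func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fields := strings.Split(scanner.Text(), "\t")
		if len(fields) >= 2 {
			fmt.Println(fields[0], fields[1])
		}
	}
}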

Authors that contain prefixes do and de los are now parsed correctly. An example: “… de Cássia Silva do Nascimento …”

Authors with the suffix ter are parsed correctly. An example: “Dematiocladium celtidicola Crous, M.J. Wingf. & Y. Zhang ter”.

Added support for non-ASCII apostrophes in authors’ names. An example: “Galega officinalis (L.) L`Hèr.”.

New GNparser C-binding package for Node.js by Toby Marsden

Toby Marsden also created a GNparser wrapper for Node.js.

Update of C-binding for Ruby-based parser

The new v5.3.4 biodiversity Ruby gem is released using C-binding to GNparser v1.4.2.

GNparser (Go language) release 1.3.3

GNparser v1.3.3 is out. The following changes happened since 1.3.0:

  • GNparser received a citable DOI (v1.3.1)

  • Names-exceptions that are hard to parse because they use nomenclatural or biochemical terms as specific epithets are now covered. Some examples:

Navicula bacterium
Xestia cfuscum
Bolivina prion
Bembidion satellites
Acrostichum nudum
Gnathopleustes den

  • 2-letter generic names are appended with 3 more genera (Do, Oo, Nu):

Do holotrichius (beetle)
Oo spinosum (arachnid)
Nu aakhu (annelid)

  • Known prefixes in authorships are appended with 3 more prefixes, adding support for authors like:

delle Chiaje
dos Santos
ten Broeke
ten Hove

  • Parsing of names with ms in, like Crisia eburneodenticulata Smitt ms in Busk, 1875, is supported (normalized to Crisia eburneodenticulata Smitt ex Busk, 1875).

  • More annotation ‘stop’ words are added, fixing parsing for names like:

Crisina excavata (d'Orbigny, 1853) non (d'Orbigny, 1853)
Eulima excellens Verkrüzen fide Paetel, 1887
Porina reussi Meneghini in De Amicis, 1885 vide Neviani (1900)

Many thanks to @diatomsRcool, @KatjaSchulz and @joelnitta for feature requests and bug reports!

Clib libraries are now provided with each new release

GNparser can be incorporated via C-binding into many other languages. To make such incorporation easier, the clib files for MacOS, Linux, and MS Windows are now provided with every new release.

macos-latest-clib.zip
ubuntu-latest-clib.zip
windows-latest-clib.zip

Update of C-binding for Ruby-based parser

The new v5.3.3 biodiversity Ruby gem is released using C-binding to GNparser v1.3.3.

GNparser for JavaScript

@tobymarsden incorporated the GNparser C-binding into a Node.js package. He plans to release the new package on NPM.

Why Go is a great language for biodiversity informatics

We, as a biodiversity informatics community, use quite a few programming languages. Python, Java, R, and Ruby are probably the most popular ones. After working with the Go language for the last few years, I think it would be very beneficial to add it to the biodiversity informatics toolset.

When I started writing in Go, the language seemed a bit clunky and did not have many of the cool, shiny constructs that other modern languages have. I felt there were none of the “cute” syntax gold nuggets that are common in languages like Ruby or Python. However, I now think Go is amazing for solving many biodiversity informatics problems.

I have tried about 10 different languages over the last 12 years of working on the Encyclopedia of Life and Global Names projects, and for the last 5 years Go has been the language of choice for the vast majority of projects I make.

Let's look at the key features of the language that influenced my choice.

Feature 1: Amazing code readability

Biodiversity informatics runs mostly in academia. In academia, projects blossom and wither depending on the availability of funds. Funds often appear and disappear, students and postdocs come and go, and quite often people have to continue the development of projects written by someone else.

Therefore, the ability to understand code written by another person is crucial for the longevity of academic projects. Many important and interesting developments became stale or died out only because their code was too hard for newcomers to understand.

I found that code written in Go is among the easiest to understand. Simplicity of syntax was one of the major design goals of Go, and most common programming tasks can be solved in only one way. Go's designers decided on a brutally minimalistic approach; as a result, Go has very little bloat and, in the vast majority of cases, no duplication of features between its syntactic constructs.

As a result, the programming styles of novice and experienced programmers do not differ that much. This is great for supporting projects written by others, and for learning from Go code itself how to solve common problems.

Feature 2: Go is easy to learn

This minimalistic approach to language design makes it possible to learn Go with 5-20 times less effort than other languages. The specification of the language is tiny, and just going through the ‘official’ Go tutorial is enough to become productive after a few hours.

Feature 3: Go is easy to maintain

The developers of the language released Go v1.0 in 2012 and plan to support backward compatibility for many years to come. Starting with Go v1.11, there is also support for versioning of community packages used as dependencies. As a result, it is possible to write a program or library and use it without any changes, in spite of new versions of Go appearing regularly. If a library or a program depends on Open Source packages, the specific version of each package can be set in a go.mod file.

New versions of Go appear regularly, and to use recently added features a developer can specify the Go version in go.mod.
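A minimal go.mod might look like this (the module path and versions are illustrative):

module github.com/example/nametool

go 1.17

require github.com/gnames/gnparser v1.5.0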

The Go team is very careful when considering new features, and such features are implemented only after much thought and discussion. As a result, even massive new functionalities do not create backward incompatibilities.

Feature 4: Go is fast

Go is much faster than languages like Python, Ruby, Perl, or R. In addition, concurrency and parallel execution of code are core concepts of Go. Writing concurrent code in Go is orders of magnitude easier than in C and, after some practice, becomes second nature for a developer.

The combination of the language's speed and parallel execution of code on multi-core CPUs makes Go programs up to 100 times faster than programs written in interpreted languages. Go is somewhat slower than C, but Go programs are often faster than analogous C programs because of the ease of developing concurrency and parallelism in Go.
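As an illustration (not code from any GN project), here is a minimal sketch of the goroutine pattern that makes such parallelism cheap; parse is a stand-in for real work:

package main

import (
	"fmt"
	"sync"
)

func parse(name string) string { return "parsed: " + name }

func main() {
	names := []string{"Homo sapiens", "Drosophila melanogaster", "Rosa multiflora"}
	results := make([]string, len(names))
	var wg sync.WaitGroup
	for i, n := range names {
		wg.Add(1)
		// each name is processed concurrently in its own goroutine
		go func(i int, n string) {
			defer wg.Done()
			results[i] = parse(n)
		}(i, n)
	}
	wg.Wait()
	fmt.Println(results)
}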

Feature 5: Speed of programming is great

Go is quite expressive and, in my experience, the speed of developing in Go is comparable with the speed of development in Python or Ruby.

Also, running tests in Go is very fast; even big programs can be tested without a long wait. This makes it practical to run tests often, or even on each save.

Feature 6: Very fast compilation

Go is a compiled language and requires compilation of its code before execution. Compilation is usually almost instantaneous, and the language developers spend a lot of effort to keep it this way. Compiling and then running Go code is only slightly slower than executing Python or Ruby code.

Feature 7: Convenient executable files

As a rule, Go compiles a program into a single self-sufficient executable file. Most Go programs have no external dependencies and, as a result, are very easy to distribute and install: downloading one file and running it is all that is required. Go also supports cross-compilation, meaning it can create executable files for any supported OS on one computer. For example, a computer running Linux can create executables for MS Windows, Mac OS, and Linux in one go.
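Cross-compilation is controlled by the standard GOOS and GOARCH environment variables of the Go toolchain; for example, building a hypothetical mytool for all three systems from one machine:

GOOS=linux GOARCH=amd64 go build -o mytool .
GOOS=windows GOARCH=amd64 go build -o mytool.exe .
GOOS=darwin GOARCH=amd64 go build -o mytool-mac .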

The size of the executable files is comparatively tiny. For example, the deployed size of a name-finding project written in Ruby (GNRD) is about a gigabyte, while the analogous code in Go is only 50 megabytes.

The small size of executables and the lack of external dependencies make Go fantastic for publishing projects as Docker images, or for deploying such images on Kubernetes.

Feature 8: Go is great for remote APIs and web-applications

Writing an extremely fast web-server in Go is a trivial task. Developing web-applications is quite easy with the provided template packages. Distributing web-applications is also easy, because all the static files are usually included in a single binary. Besides a traditional REST approach to APIs, there is a very fast streaming gRPC approach.
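For example, a complete, if minimal, web-server needs nothing beyond the standard net/http package:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// respond to every request with a plain-text greeting
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Go")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}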

Feature 9: Rich ecosystem of Open Source libraries

Despite being a relatively young language (v1.0 was released in 2012), Go has a very active community and a large number of libraries for many development needs. Specific libraries for biodiversity informatics are scarce so far, but they are starting to appear. For example, Global Names provides libraries for finding, parsing, and verifying scientific names.

Feature 10: Go code can be used in many other languages via C-bindings

It is possible to compile Go into a C library and use it via C-binding with many other languages (C, R, Ruby, Python, and Java, for example).
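Here is a minimal sketch of how such a C-shared library is made with the standard cgo toolchain. The exported function is hypothetical, not GNparser's actual interface:

package main

import "C"

// ParseName is exported to C. A real library would call a parser here;
// the caller is responsible for freeing the returned C string.
//export ParseName
func ParseName(name *C.char) *C.char {
	return C.CString("parsed: " + C.GoString(name))
}

// main is required by the c-shared build mode but is never called.
func main() {}

Building it with go build -buildmode=c-shared -o libparser.so produces both the shared library and a C header file for binding.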

It is also possible to incorporate C libraries into Go; however, most of the functionality usually provided by C libraries is already implemented in pure Go, and most of the time introducing such dependencies is not required. For example, there are fantastic pure-Go drivers for the most popular databases.

Feature 11: There are very good tools for Go developers

A lot of tools exist that simplify Go development. Go's creators pioneered the idea of auto-formatting code. There are fantastic plugins for Go development in VS Code, Vim, Emacs, etc., and JetBrains releases a standalone Go development platform. Go plugins incorporate linting, formatting, debugging, and refactoring tools. Most of these tools can also be used as stand-alone command line applications. Go has powerful profiling and tracing tools as well.

Conclusion

I hope this post gave you an idea of why Go is good for biodiversity informatics, and that you will try solving problems that appear in your work using the Go language. I would suggest picking a small, well-defined task that requires fast execution, reading the Tour of Go tutorial, installing Go and its tools for your favorite editor, and starting to hack on the code!

GNparser (Go language) release 1.3.0

GNparser v1.3.0 is out. The major new functionality is the ability to recognize and parse botanical cultivar names. This ability was added to GNparser by Toby Marsden; thanks, Toby, for a great patch!

In addition to the ICN nomenclatural code for botanical scientific names, there is the ICNCP nomenclatural code for cultivated plants. ICNCP supports names like:

Dahlia ‘Doris Day’
Fragaria 'Cambridge Favourite'
Rosa multiflora cv. 'Crimson Rambler'

Now, if these names are parsed as cultivars, the cultivar epithet is included in the canonical form, as in Rosa multiflora ‘Crimson Rambler’. However, such an addition would create problems for users who are more interested in the canonical form according to the ICN: Rosa multiflora. Therefore, by default GNparser processes such names according to the ICN code, providing a warning:

{
  "quality": 2,
  "warning": "Cultivar epithet"
}

If a user does need to treat such names as cultivars, there is a flag in the command line app: gnparser "Rosa multiflora cv. 'Crimson Rambler'" -C. When parsed with this flag, the warning disappears and the canonical forms include the cultivar information. The GNparser web-interface now has a “cultivar checkbox”, and there is a “cultivar” option in the GNparser RESTful API.

Parsed detailed data for cultivars:

gnparser "Rosa multiflora cv. 'Crimson Rambler'" -C -d -f pretty
{
  "parsed": true,
  "quality": 1,
  "verbatim": "Rosa multiflora cv. 'Crimson Rambler'",
  "normalized": "Rosa multiflora ‘Crimson Rambler’",
  "canonical": {
    "stemmed": "Rosa multiflor ‘Crimson Rambler’",
    "simple": "Rosa multiflora ‘Crimson Rambler’",
    "full": "Rosa multiflora ‘Crimson Rambler’"
  },
  "cardinality": 3,
  "details": {
    "species": {
      "genus": "Rosa",
      "species": "multiflora",
      "cultivar": "‘Crimson Rambler’"
    }
  },
  "words": [
    {
      "verbatim": "Rosa",
      "normalized": "Rosa",
      "wordType": "GENUS",
      "start": 0,
      "end": 4
    },
    {
      "verbatim": "multiflora",
      "normalized": "multiflora",
      "wordType": "SPECIES",
      "start": 5,
      "end": 15
    },
    {
      "verbatim": "Crimson Rambler",
      "normalized": "‘Crimson Rambler’",
      "wordType": "CULTIVAR",
      "start": 21,
      "end": 36
    }
  ],
  "id": "38ff69c4-7e1a-5a26-bfc4-ee641fed6ba7",
  "parserVersion": "nightly"
}

In addition, Toby found and helped to fix problems with the stemming of hybrid formulas and with providing correct output for hybrid signs in the “details:words” section. Again, thanks for this contribution, Toby Marsden!

You can grab the GNparser v1.3.0 binaries and follow the installation instructions, or use Homebrew to install it on operating systems that support it:

brew tap gnames/gn
brew install gnparser

GNfinder release v0.14.1

Find scientific names in plain texts, PDF files, MS documents etc.

GNfinder is a program for finding scientific names in texts. GNfinder has existed for several years and is responsible for creating the name indices for the Biodiversity Heritage Library (BHL) and the HathiTrust Digital Library. The program is fast enough to process 5 million pages of BHL in just a couple of hours.

The new GNfinder v0.14.1 can find names not only in plain UTF8-encoded texts, but also in a large variety of files, including PDF, MS Word, MS Excel, and images. In this blog post we describe how it can be used.

The GNfinder code follows Semantic Versioning practices, so users need to be aware that for versions 0.x.x backward-incompatible changes might happen.

Summary

GNfinder is a command line application that can also be used as a RESTful service. In the near future it will also have a web-based user interface and will run at https://finder.globalnames.org

The program uses heuristic and Natural Language Processing (NLP) algorithms for name finding.

Performance

For a test we used a 4 MB PDF file that contains ~2,000 unique names. These names are mentioned in the text ~13,000 times.

Time for conversion to UTF8-encoded plain text: 2.5 sec

Time for name-finding: 0.4 sec

Time for name-verification of 2000 uniquely found names: 2.5 sec

Installation

The program consists of one stand-alone file, so it is easy to install. The binaries for MS Windows, Mac OS, or Linux can be downloaded from GitHub. In addition, GNfinder can be installed using the Homebrew package manager with the following terminal commands:

brew tap gnames/gn
brew install gnfinder

For more detailed installation instructions see the documentation on GitHub.

Usage

GNfinder is a command line application. It requires an internet connection for converting files to UTF8-encoded text and for name-verification.

To get help use:

gnfinder -h

To get names from a UTF8-encoded text file (with the -U flag, no internet connection is required):

gnfinder -U file-with-names.txt

To get names from any other kind of file:

gnfinder file.pdf

To find names, verify them, and output the results in JSON format:

gnfinder file.pdf -v -f pretty > file-names.json

To convert a PDF file into text:

gnfinder -I file.pdf > file.txt

Tutorial

I wrote a tutorial on how to extract scientific names in parallel from a large number of PDF files.

For more information read the documentation on GitHub.

RESTful API

You can run GNfinder as a RESTful API service as well. For now the API works only with UTF-8 encoded texts, but other file formats will become available via the API after the completion of a web-based user interface.

GNverifier release v0.3.0

A very fast scientific name checker is out.

There are millions of checklists in use by scientists and nature enthusiasts. Very often, such lists contain misspellings or outdated names. To help researchers clean up their checklists and monitor their quality, we are releasing GNverifier v0.3.0, written in the Go language.

The GNverifier code follows Semantic Versioning practices, so users need to be aware that for versions 0.x.x backward-incompatible changes might happen.

We released several implementations of name-verification (reconciliation/resolution) before, but none of them had enough speed for verifying massive lists of scientific names. This release provides 10x to 100x throughput improvement compared to the older implementations.

Summary

GNverifier can help to answer the following questions:

  • Is a name-string a real name?
  • Is it spelled correctly, and if not, what might be the correct spelling?
  • Is the name currently in use?
  • If it is a synonym, what data sources consider to be currently accepted names?
  • Is a name a homonym?
  • What is a taxon that the name points to, and where is it placed in various classifications?

Name verification and reconciliation involve several steps (a code sketch follows the list).

  • Exact match: the input name matches a canonical form located in one or more data-sources.
  • Fuzzy match: if no exact match is found, fuzzy matching of canonical forms is tried.
  • Partial exact match: if the previous two steps failed, we remove words from the end or from the middle of a name and try to match what is left, until we end up with a bare genus.
  • Partial fuzzy match: if the partial exact match did not work and the remaining name is not a uninomial, we apply fuzzy matching algorithms.
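Below is a minimal Go sketch of this cascade. All helper functions are invented stand-ins for real data-source lookups, not GNverifier's actual code:

package main

import "fmt"

// Match is a toy result type used only in this sketch.
type Match struct{ Name, Kind string }

// Stand-ins for real data-source lookups.
func matchExact(name string) (Match, bool) { return Match{}, false }
func matchFuzzy(name string) (Match, bool) { return Match{name, "Fuzzy"}, true }
func partialNames(name string) []string    { return nil }
func isUninomial(name string) bool         { return false }

// verify shows the order of the four matching steps described above.
func verify(name string) (Match, bool) {
	if m, ok := matchExact(name); ok { // 1. exact match
		return m, true
	}
	if m, ok := matchFuzzy(name); ok { // 2. fuzzy match
		return m, true
	}
	for _, p := range partialNames(name) { // 3. partial exact match
		if m, ok := matchExact(p); ok {
			return m, true
		}
		if !isUninomial(p) { // 4. partial fuzzy match
			if m, ok := matchFuzzy(p); ok {
				return m, true
			}
		}
	}
	return Match{}, false
}

func main() { fmt.Println(verify("Drosophila melanogaster")) }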

A scoring algorithm then sorts matched results. The “About” page contains more detailed information about matching and scoring.

Performance

We observe speeds of ~2,500 names per second for checklists that come from an optical character recognition process and contain many misspellings.

Usage

The simplest way to use GNverifier is via its web-interface. The online application emits results in HTML, CSV, and JSON formats and can process up to 5,000 names per request.

For larger datasets, and as an alternative, there is a command line application that can be downloaded for Windows, Mac, and Linux.

gnverifier file-with-names.txt

This version adds an option, -c or --capitalize, to fix the capitalization of name-strings before verification. It is beneficial for the web-interface, as it allows users “to be lazy” when they try to match names.

$ gnverifier "drsophila melanogaster" -c -f pretty
INFO[0000] Using config file: /home/dimus/.config/gnverifier.yaml.
{
  "inputId": "b20a7c40-f593-5a68-a048-0a24742b4283",
  "input": "drsophila melanogaster",
  "inputCapitalized": true,
  "matchType": "Fuzzy",
  "bestResult": {
    "dataSourceId": 1,
    "dataSourceTitleShort": "Catalogue of Life",
    "curation": "Curated",
    "recordId": "2586298",
    "localId": "69bbaee49e7c2f749ee7712f3f168920",
    "outlink": "http://www.catalogueoflife.org/annual-checklist/2019/details/species/id/69bbaee49e7c2f749ee7712f3f168920",
    "entryDate": "2020-06-15",
    "matchedName": "Drosophila melanogaster Meigen, 1830",
    "matchedCardinality": 2,
    "matchedCanonicalSimple": "Drosophila melanogaster",
    "matchedCanonicalFull": "Drosophila melanogaster",
    "currentRecordId": "2586298",
    "currentName": "Drosophila melanogaster Meigen, 1830",
    "currentCardinality": 2,
    "currentCanonicalSimple": "Drosophila melanogaster",
    "currentCanonicalFull": "Drosophila melanogaster",
    "isSynonym": false,
    "classificationPath": "Animalia|Arthropoda|Insecta|Diptera|Drosophilidae|Drosophila|Drosophila melanogaster",
    "classificationRanks": "kingdom|phylum|class|order|family|genus|species",
    "classificationIds": "3939792|3940206|3940214|3946159|3946225|4031785|2586298",
    "editDistance": 1,
    "stemEditDistance": 1,
    "matchType": "Fuzzy"
  },
  "dataSourcesNum": 28,
  "curation": "Curated"
}

It is possible to map a checklist to one of 100+ data sources aggregated in GNverifier.

The following command will match all names from file-with-names.txt against the Catalogue of Life.

gnverifier file-with-names.txt -s 1 -o -f pretty

It is also possible to run the web-interface locally:

gnverifier -p 4000

After running the command above, the interface can be accessed in a browser via the http://localhost:4000 URL.

One can find a complete list of gnverifier options by running:

gnverifier -h

Application Programming Interface (API)

GNverifier does not keep all the data needed for processing name-strings locally. It uses a remote API located at https://verifier.globalnames.org/api/v1.

The RESTful API is public. It has an OpenAPI description and is available for external scripts.
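For example, a single name can be verified with a plain GET request. The path below is an assumption based on the API location above; the OpenAPI description is authoritative:

curl https://verifier.globalnames.org/api/v1/verifications/Drosophila%20melanogaster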

Deprecation of old systems

There are several older approaches to solving the same problem. If you use any of these, consider switching to GNverifier, because the older systems will eventually be deprecated and stopped.

GNparser (Go language) release 1.2.0

Version 1.2.0 of GNparser is out. It adds an option to parse lowercase names, in case a checklist does not follow nomenclatural standards.

$ gnparser "plantago major" --capitalize
Id,Verbatim,Cardinality,CanonicalStem,CanonicalSimple,CanonicalFull,Authorship,Year,Quality
085e38af-e19b-56e5-9fec-5d81a467a656,plantago major,2,Plantago maior,Plantago major,Plantago major,,,4

Capitalization is not applied to named hybrids:

$ gnparser "xAus bus" -c
Id,Verbatim,Cardinality,CanonicalStem,CanonicalSimple,CanonicalFull,Authorship,Year,Quality
9b24b828-88a6-58b7-ac76-1342c8ac135d,xAus bus,2,Aus bus,Aus bus,× Aus bus,,,3

When capitalization is applied, GNparser assigns Quality=4 (the worst) and issues a warning.

$ gnparser "plantago major" -c -f pretty
{
  "parsed": true,
  "quality": 4,
  "qualityWarnings": [
    {
      "quality": 4,
      "warning": "Name starts with low-case character"
    }
  ],
  "verbatim": "plantago major",
  "normalized": "Plantago major",
  "canonical": {
    "stemmed": "Plantago maior",
    "simple": "Plantago major",
    "full": "Plantago major"
  },
  "cardinality": 2,
  "id": "085e38af-e19b-56e5-9fec-5d81a467a656",
  "parserVersion": "nightly"
}

GNparser (Go language) release 1.1.0

Scientific name parsing makes it possible to determine a canonical form and the authorship of a name, and to extract other meta-information. Canonical forms are crucial for comparing names from different data sources.

We are releasing GNparser v1.1.0, written in the Go language. We support Semantic Versioning, so this is a stable version: the output format, functions, and settings are going to be backward compatible for many years (until v2).

This is the 3rd implementation of name-parsing for the Global Names Architecture project. The first one, the Ruby biodiversity gem, now uses the Go code of GNparser. The second one, written in Scala, is archived and awaits a new maintainer.

Summary

GNparser is sophisticated software that is able to parse the most complex scientific names. It is also very fast, able to parse more than 200 million names in an hour. The parser is a core component of many other Global Names Architecture projects.

It can be used in several ways. We also provide C-binding to its code; this approach allows incorporating GNparser natively into all languages that support C-binding (such as Java, Python, Ruby, etc.).

Improvements since the last Scala-based release of GNparser

  • Speed — about 2 times faster than Scala-based version for CSV output, and about 8 times faster for JSON output.

  • Issue #27 — support for agamosp. agamossp. agamovar. ranks.
  • Issue #28 — support for non-ASCII apostrophes.
  • Issue #36 — support _ as a space for files in Newick format.
  • Issue #40 — support names where one of parentheses is missing.
  • Issue #43 — support for notho- (hybrid) ranks.
  • Issue #45 — support for natio rank.
  • Issue #46 — support for subg. rank.
  • Issue #48 — improve transliteration of diacritical characters.
  • Issue #49 — support for outdated names with several hyphens in specific epithet.
  • Issue #51 — distinguish between Aus (Bus) cus in botany and zoology (author or subgenus).
  • Issue #52 — support hyphen in outdated genus names.
  • Issue #57 — warn when f. might mean either filius or forma.
  • Issue #58 — distinguish between Aus (Bus) in ICN and ICZN (author or subgenus).
  • Issue #63 — normalize format to f. instead of fm..
  • Issue #60 — allow outdated ranks in form of Greek letters.
  • Issue #61 — support authors’ names with bis suffix.
  • Issue #66 — remove HTML tags from names, unless asked otherwise.
  • Issue #67 — add name’s authorship to the “root” of JSON structure.
  • Issue #68 — provide stemmed canonical form.
  • Issue #69 — provide shared C library to bind GNparser to other languages.
  • Issue #72 — parse surrogate names from BOLD project.
  • Issue #75 — normalize subspecies to subsp.
  • Issue #74 — support CSV output.
  • Issue #78 — parse virus-like non-virus names correctly.
  • Issue #79 — make CSV as a default output.
  • Issue #80 — add cardinality to output.
  • Issue #81 — support year ranges like ‘1778/79’.
  • Issue #82 — parse authors with prefix zu.
  • Issue #89 — allow subspec. as a rank.
  • Issue #90 — allow ß in names.
  • Issue #93 — parse y from Spanish papers as an author separator.
  • Issue #127 — release a stable 1.0.0 version.
  • Issue #162 — support bacterial Candidatus names.

gnfinder release 0.9.1 -- bug fixes

We are releasing a bug-fixing version (v0.9.1) of gnfinder, a project written in Go that provides the ability to search for scientific names in plain UTF-8 encoded texts. It now has a better version and build-timestamp report, its black-list dictionary was expanded by 5 more words, and a bug was fixed that broke client programs when name-verification returned an error instead of a result. We also started experimenting with making gnfinder available to other languages through a C-shared library.

More can be found in gnfinder’s CHANGELOG file.

The gnfinder code is used to create scientific name indices in the Biodiversity Heritage Library and the HathiTrust Digital Library, and serves as an engine for GNRD.

GNRD release (v0.9.0) switched to gnfinder from TaxonFinder/NetiNeti and became 25 times faster.

A big change came to GNRD, a program David Shorthouse and Dmitry Mozzherin released back in 2012. GNRD is a web application that is able to find scientific names in UTF-8 encoded plain texts, PDFs, MS Word and MS Excel documents, and even images.

For a long time it used two name-finding libraries: TaxonFinder (developed by Patrick Leary) and NetiNeti (developed by Lakshmi Manohar Akella). Both projects served us well all these years, using complementary heuristic and natural language processing algorithms. Biodiversity Heritage Library, BioStor, and many others used GNRD for the detection of scientific names for many years with success. However, its speed for large-scale name-finding was not satisfactory. To make large-scale name-detection possible, we developed gnfinder, which also uses both heuristic and NLP algorithms. With this new release of GNRD, we substitute the TaxonFinder and NetiNeti engines with gnfinder.

We tried hard to keep the API as close as possible to how it was before; however, there are a few changes, especially in the name-verification (reconciliation and resolution) part. The change made both name-finding and name-verification much faster, with increased quality. For example, it used to take 15 seconds to find names in a 1000-page biological book; now it takes only 0.5 seconds. GNRD tries to catch names with OCR errors as well, so you might get false positives as a result. We recommend using the name-verification option to weed out such false results.

If you need to cite GNRD in a paper, v0.9.0 has a DOI attached: 10.5281/zenodo.3569619

Outcomes draft (Final Report for NSF award 1645959)

We developed the ability to discover scientific names in millions of books in a matter of hours. We are now able to finish name-finding for the Biodiversity Heritage Library (200,000 volumes) in 12 hours on a single laptop, and for the HathiTrust Digital Library (18 million volumes, an estimated 10% of all published books) in 9 hours using 50 servers. Scientific name-strings are often represented differently in different sources. For example, Homo sapiens can be written as Homo sapiens Linnaeus, 1758, Homo sapiens sapiens, Homo sapiens Linn., etc. To understand that all these variations refer to the same species, the names need to be normalized. We built a tool that normalizes 300 million name-strings per hour on a single computer. We also created a service that is able to verify scientific names in all existing variations; it validates names against well-known curated data sources at a rate of 800,000 names per hour. As a result of all these new developments, we have an opportunity to create a truly global index of scientific names and mine biological information using scientific names as anchors.

Modern biology depends heavily on scientific names. They were introduced in the 18th century by Carl Linnaeus, and for more than a quarter of a millennium they have been the basis of sharing information about biological taxa. Scientific names are heavily used by researchers, physicians, students, governmental agencies, citizen scientists, etc. Scientific names create a common basis for biology and give us the ability to connect millions of medical and biological facts into a graph of knowledge.

However, scientific names change quite often. According to our observations, there are about 3 different scientific names per species on average. On top of that, scientific names are usually not rendered exactly the same way in different publications, as we showed above, and such lexical variants of names have to be reconciled. To collect a complete history of studies about a species, we have to be able to take these differences between name variations into account. To provide access to globally accumulated biodiversity knowledge, we have to be able to find scientific names in the literature published during the last 250 years.

Our group has been participating in the Global Names Architecture project since 2007 and has tried to find solutions to three major questions:

  • How to find scientific names in texts?
  • How to “normalize” lexical and spelling inconsistencies in names belonging to the same lexical group?
  • How to connect a given name to all names ever given to its taxon, and find which one of these names is currently accepted as valid?

A former ABI Innovation grant from NSF in 2011 (#1062387) gave us an opportunity to develop functional prototypes of programs that answered these questions. However, these prototypes were too slow to be applied to the whole corpus of biological and medical human knowledge.

We understood that for such programs to be useful on a global scale, they had to be dramatically improved in speed, quality, and scalability. They had to be able to use all resources of a multi-processor computer and scale to multiple machines with ease. The current ABI Development funding allowed us to achieve these speed and scalability goals.

We developed 3 major software tools:

Parser of Scientific Names: gnparser

This tool makes it possible to dissect the vast majority of scientific names into their semantic elements. For example, it is able to find out that “Carex scirpoidea ssp. convoluta (Kük.) Dunlop” and “Carex scirpoidea Michaux var. convoluta Kükenthal” both have the same conservative element “Carex scirpoidea convoluta”, but vary in rank (variety vs subspecies): “Carex scirpoidea var. convoluta” and “Carex scirpoidea subsp. convoluta”. Such matching is very important for the aggregation of biodiversity data from a variety of sources. The gnparser program is designed to be small, easy to install, and blazingly fast. It is currently used by several prominent biodiversity aggregators, such as Encyclopedia of Life and Catalogue of Life, as well as the Global Names Index. It is able to process 300,000,000 names an hour on one computer.

Global Names Index: gnindex

The gnindex service aggregates scientific names from a large number of biodiversity data sources and uses them to verify incoming names. A user is able to send a list of name-strings and find out if they are known names, if they contain spelling mistakes, if they are currently accepted as valid names, etc. Such a service is important for many use cases: for example, as a verification tool for species lists collected by citizen scientists, as a curation tool for museum collections, or for mapping the country species lists of government agencies to a standardized data set such as the Catalogue of Life. This tool relies heavily on gnparser to align data from many sources. It is also indispensable for verifying (reconciling/resolving) name candidates found in texts by our algorithms. This service currently has a throughput of 800,000 names per hour.

Global Names Finder: gnfinder

The gnfinder tool uses heuristic and natural language processing algorithms to detect scientific names in texts. In machine learning terms, it belongs to the named entity recognition class of tools. It marks parts of texts that can potentially be scientific names and then sends them for remote verification to gnindex. The average book in digital libraries is about 300 pages long; on a single machine we are able to reach a name-detection rate of 45,000 books an hour.

To summarize, these new tools give us the ability to create indexes of scientific names in a matter of hours for gigantic collections of literature, and make a global index of all digitized literature possible.

Biodiversity Heritage Library (BHL) link to Catalogue of Life Plus (CoL+)

Peter Schalk and Olaf Bánki (GBIF and CoL+) came to our group today. We had a short meeting to start connecting names from Catalogue of Life to original nomenclatural events collected in Biodiversity Heritage Library. We had a remote connection to folks from BHL (Joel Richard and Mike Lichtenberg), and CoL+ (Dave Remsen, Markus Doering).

Markus said that he will create a list of references for proto/basionyms in CoL+, and starting in December I will begin trying to connect the names from BHL to these references. If things work well, it will enhance the usability of CoL/CoL+ and BHL as well.

Markus created a gitter account today that we can use for this project.

Olaf will set up weekly meetings through December-January to keep us all in sync on this project. It will be interesting to learn from IPNI about their approach to doing pretty much the same task.

We also had a good conversation with Peter, Ed Devalt, and Olaf about the importance of different biodiversity projects working together instead of trying to re-implement each other's ‘wheels’. I would say there is definite progress in cooperation between different initiatives, and that is good news for biodiversity informatics.

gnparser release (v0.12.0) can be used in most modern languages

A few days ago we released v0.12.0 of gnparser (the Go version). This version made it possible to compile gnparser algorithms into a C-compatible library. Such a library makes it possible to use gnparser at its native speed in any language that supports binding to C. Such languages include Python, Ruby, Java (via JNI), Rust, C, C++, and many others.

We have already updated Ruby's biodiversity parser gem to benefit from the dramatic speed increase and parsing quality of gnparser.

Here are quick benchmarks comparing how biodiversity performed before and now:

Program       Version  Full/Simple  Names/min
gnparser      0.12.0   Simple       3,000,000
biodiversity  4.0.1    Simple       2,000,000
biodiversity  4.0.1    Full JSON    800,000
biodiversity  3.5.1    n/a          40,000

With this improved speed, Encyclopedia of Life, which is written in Ruby, can process all their names using Ruby in less than 15 minutes.

The README file of gnparser contains instructions on how to build such a C-shared library, and the biodiversity code is a good example of connecting the library to other languages.

Biodiversity-Next Conference

For the last couple of years I have been working on making scientific-name detection possible on a massive scale. The result of this work was the creation of several tools that increased the speed of name-finding dramatically.

As a result, we are able to scan large corpora of biodiversity literature, such as the Biodiversity Heritage Library — BHL (200,000 volumes) and the HathiTrust Digital Library (16,000,000 volumes), in a matter of hours.

Recently, we presented our achievements at the Biodiversity Next conference, an expanded version of the traditional yearly TDWG meeting. I gave a talk at the symposium Improving access to hidden scientific data in the Biodiversity Heritage Library. You can read about it in more detail in a BHL blog post.

Such a significant collection of biodiversity literature as BHL gains dramatically in usability from data mining efforts. For many years it has used the scientific names index generated by our project, Global Names Architecture. Several years ago, generating such an index was a very slow and laborious task that could not be repeated easily. With our recent developments, we are able to index BHL repeatedly with ease. This gives us an opportunity to listen to feedback from BHL users, make incremental improvements in our algorithms, and increase the quality of the scientific names index continuously.

We are very interested in working with other people who are trying to enhance the usability of literature aggregators like BHL by developing natural language processing and machine learning algorithms. During the conference, several researchers in the field agreed to participate in a brainstorming workshop at the Illinois Natural History Survey, Champaign/Urbana, to develop new technologies for data-mining in BHL and other similar corpora. We are planning to organize such a workshop in April/May 2020.

Power down

On May 20th all our servers are going to be down because of power maintenance at our computer center. Sorry for the inconvenience! We will be back as soon as possible.

Names in November meeting

Global Names developers Dmitry Mozzherin and Richard Pyle were invited to attend a workshop called “Names in November”, organized by the Catalog of Life and GBIF and hosted in Leiden. The three-day meeting involved more than twenty people from key taxonomic and nomenclatural organizations, and focused on discussing ways that a global information system of taxonomy, including both names data and accepted species information, could be designed to more seamlessly interconnect biodiversity data through organism names. Although the theme of the meeting was certainly not new (many participants in this meeting had attended similar meetings going back decades discussing essentially the same idea), the tone of the discussion was refreshing in that it focused comparatively little on politics and technical details, and instead concentrated on identifying whether such a shared taxonomic infrastructure was even possible (given the political, financial, and technical circumstances currently existing within the main likely partners), and what conditions would need to be met.

Many of the points that participants agreed on in terms of needs and services very closely matched the fundamental goals and infrastructure we have developed (and continue to develop) within the context of Global Names. Now that GN is much more closely coordinating with the Catalog of Life, GN data indexes and services will likely play an important role in implementing the shared global taxonomy resource envisioned during the meeting. Following this meeting, we have a renewed sense of focus within GN development to finish harmonizing integration of GNI and GNUB services, and especially to rapidly increase the effort to bulk-populate GNUB from existing data.

gnparser release v0.3.3

We are happy to announce the release of gnparser. Changes in the v0.3.3 release:

  • add optional display of the canonical name UUID:
scala> fromString("Homo sapiens").render(compact=false, showCanonicalUuid=true)
res0: String = 
// ...
  "canonical_name" : {
    "id" : "16f235a0-e4a3-529c-9b83-bd15fe722110",
    "value" : "Homo sapiens"
  },
// ...
  • add the year’s range to the AST node, encoded in the Year field as rangeEnd: Option[CapturePosition]

  • parse names ending with a hybrid sign (#88)

  • support hybrid abbreviation expansions (#310)

  • support raw hybrid formula parsing (#311)

  • continuous build is moved to CircleCI

  • and many structural changes, bug-fixes, and quality improvements; they are described in the release documentation.

Scala-based gnparser v.0.3.0

To avoid confusion – gnparser is a new project, different from the formerly released biodiversity parser.

We are happy to announce the second public release of the Scala-based Global Names Parser, or gnparser. There are many significant changes in the v0.3.0 release.

  • Speed improvements. The parser is about 50% faster than the already quite fast 0.2.0 version. We were able to parse 30 million names per CPU per hour with this release.

  • Compatibility with Scala 2.10.6: it was important for us to make the parser backward compatible with this older version of Scala, because we wanted to support the Spark project.

  • Compatibility with Spark v1.6.1. Now the parser can be used in BigData projects running on Spark, massively parallelizing the parsing process on the Spark platform. We added documentation describing how to use the parser with either Scala or Python natively on Spark.

  • Simplified parsing output in addition to the “Standard output”: it analyzes a name-string and returns its id, canonical form, canonical form with infraspecific ranks, authorship, and year.

  • Improved and stabilized JSON fields. You can find a complete description of the parser's JSON output in its JSON schema. We based the names of fields on TDWG's Taxon Concept Schema, and we intend to keep the JSON format stable from now on.

  • There were many structural changes, bug-fixes, and quality improvements. They are described in the release documentation.

uBio and Nomenclator Zoologicus are back

uBio and Nomenclator Zoologicus online experienced difficulties this year and have been down a lot lately, mostly because there is no system administrator to look after them at the Marine Biological Laboratory anymore.

I moved uBio from old hardware at the Marine Biological Laboratory to Google Container Engine, and it is running again. Some functionality is not back yet, mostly due to some hard-coded configuration parameters in files. I hope the problems with the code will be fixed eventually by interested parties (I do not plan to rewrite the code). I will coordinate my efforts with Dave Remsen and Patrick Leary, and hopefully together we will preserve uBio for the community.

Please note that being a system administrator for uBio is not part of my job. I like the project, I consider it a ‘precursor’ of GN, and I will try my best to keep it running in my spare time. The MBL/WHOI library pays for the cloud.

Docker containers to run uBio are located at dockerhub. We use Docker and Kubernetes at Google Container Engine to keep it alive.

Scala-based gnparser v.0.2.0

(Please note that gnparser is a new project, different from the formerly released biodiversity parser.)

We are happy to announce a public release of the Global Names Parser, or gnparser – the first project that marks the transition of Global Names reconciliation and resolution services from “prototype” to “production”. The gnparser project is developed by @alexander-myltsev and @dimus in the Scala language. gnparser can be used as a library, a command line tool, a socket server, a web-program, and a RESTful service. It is easiest to try it at parser.globalnames.org

Scientific names might be expressed by quite different name-strings. Sometimes the difference is just one comma, sometimes authors are included or excluded, sometimes ranks are omitted. With all this variability “in the wild”, we need to figure out how to group all these different spelling variants. Name parsing is an unexpectedly complex and absolutely necessary step for connecting biological information via scientific names.

In 2008 Global Names released the biodiversity gem – a scientific name parser written in Ruby for these purposes. The library, in its 3 variants, enjoyed significant success – about 150,000 downloads and a reputation as the most popular bio-library for the Ruby language. It could parse about 2-3 million names an hour and had been the basis of name reconciliation for many projects from the moment of its publication.

gnparser is a direct descendant of the biodiversity gem. It serves the same purpose, and the input/output formats of both projects are similar. It also marks the eventual discontinuation of the biodiversity gem project and the migration of all Global Names code to the new gnparser library.

Why did we go through the pain of making a completely new parser from scratch? The short answer is scalability and portability. We want to remove the parsing step as a bottleneck for any number of names thrown at resolution services. For example, finding all names in the Biodiversity Heritage Library took us 43 days 3 years ago, and the parsing step alone took more than 1 day. If we want to improve the algorithms for finding names in BHL, we cannot wait 40 days. We want to be able to do it within one day and improve the whole BHL index every time our algorithms are enhanced significantly.

We have an ambitious goal: the time spent sending names to resolution services over the internet, and then transferring the answers back, should become the bottleneck of our name matching services. For such speeds we need very fast parsing. Scala allows us to dramatically improve the speed and scalability of the parsing step.

Having a parser running in the Java Virtual Machine environment allows us to give the biodiversity community a much more portable parsing tool. Out of the box, the parser library will work with Scala, Java, R, Jython, and JRuby directly. We hope that it will speed up and simplify many biodiversity projects.

This is the first public release of the library. Please download it, run it, test it, and give us your feedback so we can improve it further. Happy parsing :smile:

WARNING: JSON output format might change slightly or dramatically, as we are in the process of refining it. The JSON format should be finalized for version 0.3.0

Writing Papers as Open Source -- Solutions

I decided to figure out how I can write scientific papers in a truly Open Source fashion. Here are the practical decisions that allowed me to do it:

Criteria

  • Paper draft is under true revision control system
  • Open access from the very beginning
  • Open tools/standards

Solutions

Revision control

To use the full power of a revision control system, a project should be mostly in a text format of some sort. We currently keep practically all our code on GitHub, so Git was a natural choice.

Open tools/formats

I decided to go with LaTeX, as it is a tried and powerful markup language, very well suited for scientific writing. It allows working with plain text, so we can easily keep revisions in Git.

Vim is my editor of choice, but nothing prevents me or my co-authors from using any other modern text editor for LaTeX.

Open access

With LaTeX and Git it is easy to provide early access to work in progress, especially with 2 commercial products that give free access to open projects – GitHub and Overleaf. Overleaf supports Git, although not as well as GitHub does, so currently it is better to have GitHub as the main repository and keep Overleaf as a glorified viewer and secondary repository. Another useful tool is Mendeley, for finding and organizing bibliography.

Final Result

I am still learning the ropes, but I am excited about the progress. The paper about the Global Names Parser is now on GitHub and Overleaf! Overleaf allows anybody interested to see the paper in a user-friendly PDF format. It also simplifies the submission of papers to a large variety of open access journals.

I created a post on my personal blog describing how I set up my system with LaTeX, Vim, and tmux.

Writing Papers as Open Source

Does a culture exist out there which considers a process of writing scientific papers to be akin to writing an open source code?

For the last 8 years I have been blessed to be paid for doing open source development. It means that for that long, almost everything I do is nearly instantly available publicly. This model fits my way of thinking and my values, and I see an advantage in making all I do available for the public to see, comment on, and enhance.

Now I am writing a paper, and I feel thrown into the “dark ages”. The whole paper-writing and publishing culture was one of the reasons I left molecular biology and went into programming. I assume the following is usually true when people write a scientific paper:

  • People normally do not share publicly what they work on until it is published.
  • People normally use proprietary software to write papers.
  • People often lose their copyright or the ability to share their work when their paper is accepted by a journal.

Obviously there is progress on the last point, but what about the other two?

  • Can I use public revision control system when I am writing a paper?
  • Can I publish using a revision control system from the very first paragraph for all to see?
  • What open standards/tools can I use (LaTeX, or even Markdown?) for writing a paper?
  • Can I consider publishing paper to be a ‘release’, like for a program?
  • Should the electronic version be frozen? Can it evolve after publication?

Honolulu Workshop

Last week (October 5-9, 2015) @deepreef, @dimus, and @alexander-myltsev had a workshop in Honolulu at the Bishop Museum to sync ideas, learn more about each other's work, and design a new generation of services. The meeting was productive, and I think in the end our two GN groups got integrated. We are moving all our code under one roof at GitHub now.

We had an interesting meeting with @jar398 from Open Tree of Life, trying to figure out how we can connect OTOL with all the other resources on the web, and @deepreef suggested using his BioGuid project for these purposes. We moved BioGuid to GitHub and added all the GitHubbish bells and whistles, like a blog and gitter, to the project. I think it is pretty cool that we will have a downloadable CSV file of all the IDs @deepreef collected, which can be used by all other projects in new exciting ways.

Another interesting conversation was with the Phylotastic project. We worked on an idea for an application that would convert pictures of scientific names, taken at museums or from pages of research papers, into texts, extract the names that appear there, and build phylo-trees from these names using Open Tree. The app would also show pictures from Encyclopedia of Life and pages from Wikipedia. Such an app would mash up the interfaces of Global Names to find and reconcile names, Open Tree to build trees, and EOL to get information about species.

BioGUID Wins Award!

As we announced previously, BioGUID.org has been incorporated into the Global Names suite of indexes and services. Within two days of this happening, we received some wonderful news: BioGUID.org won second place in the GBIF Ebbe Nielsen Challenge! We are very excited about this recognition, and it reinforces our decision to incorporate BioGUID into the GNA system. You can follow continuing developments on the new BioGUID Blog.

BioGUID merges with GN

BioGUID.org, an indexing service that cross-links identifiers assigned to data objects in the biodiversity information universe, has now been incorporated into the Global Names suite of indexes and services. BioGUID.org represents the third major data component of Global Names (alongside GNI and GNUB), and replaces a less robust identifier linking service that had previously been included within GNUB. In addition to the crucial role of cross-linking identifiers within the general GN architecture itself, the broader function of BioGUID.org falls within the scope of Global Names in the sense that identifiers can be thought of as names, and names play the same functional role as identifiers.

We are currently in the process of porting BioGUID into the GN GitHub, and you can follow developments on the new BioGUID Blog.

GNUB proposal submitted to NSF

Bishop Museum, in partnership with the Catalogue of Life, iDigBio, GBIF, WoRMS, PLAZI, BHL, the International Congresses of Dipterology, and Pensoft Publishers, submitted a proposal to the U.S. National Science Foundation's Advances in Biological Informatics (ABI) program to develop the Global Names Usage Bank. This proposed project would dramatically improve the core infrastructure behind GNUB in particular, and Global Names in general.

Global Names Gains Stable Funding

Moving Global Names to stable ground… On October 1st I signed a job offer from the Species File Group, and I am going to start at the new position on November 16th. This position is supported by a fund created by David Eades (thank you, David) and allows us to think of grants not as a vehicle for GN's survival, but as a means of further enhancement of the system. I am honored and touched that David and the Species File Group created this amazing opportunity for Global Names. This move should also lead to a tighter integration between the Catalogue of Life and Global Names, which in my view is a win/win situation for both projects and for biodiversity.

Our next steps are releasing the JVM-based scientific name parser, building a scalable, reliable, and fast name resolution service, and integrating name resolution with the key Global Names project developed by Rich Pyle and Rob Whitton – the Global Names Usage Bank – for which Rich just submitted a grant proposal.

Crossmapping Names by IDs

Yesterday @hyanwong and @jhpoelen mentioned on the EOL and Global Names gitter chats that it would be great to be able to crossmap names from different sources by IDs.

What would be the use of such a crossmap? It would allow us to quickly mash up data from various projects in interesting ways, for example to show images of species from the EOL API on phylo-trees generated using the Open Tree of Life API.

(image: TreeOfLife)

OpenTree Taxonomy has a mapping of OTT IDs to NCBI IDs. Encyclopedia of Life also has a mapping of NCBI IDs to EOL IDs. So if someone wants to map NCBI names to EOL using the same algorithm EOL used, they only need to query data about IDs. Even better, it would create a very fast connection from one aggregator (Open Tree) to another (EOL) through the IDs of other sources, without doing explicit name resolution.

Such queries would be much faster, as they would just compare indexed columns in a table. However, the quality of the results of such an approach would depend on the quality of name resolution used by the aggregators.

I am thinking about trying just that. As a pilot, we can generate Darwin Core Archive files from OTT and EOL which would contain information about IDs from other sources. Then we will need to add an API that makes it possible to run queries over such information.

Another good suggestion from @jhpoelen is to publish data about this kind of crossmap as a CSV file that can easily be put into some kind of a database and used separately on its own.
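To make the idea more concrete, here is a minimal Ruby sketch of such an ID join (the file names and column headers are hypothetical):

require "csv"

# Hypothetical inputs:
#   ott_ncbi.csv: ott_id,ncbi_id   (from OpenTree Taxonomy)
#   eol_ncbi.csv: eol_id,ncbi_id   (from Encyclopedia of Life)

# Index EOL IDs by their NCBI ID.
eol_by_ncbi = {}
CSV.foreach("eol_ncbi.csv", headers: true) do |row|
  eol_by_ncbi[row["ncbi_id"]] = row["eol_id"]
end

# Walk the OTT mapping and join the two files on the shared NCBI ID.
CSV.open("ott_eol.csv", "w") do |out|
  out << %w[ott_id eol_id]
  CSV.foreach("ott_ncbi.csv", headers: true) do |row|
    eol_id = eol_by_ncbi[row["ncbi_id"]]
    out << [row["ott_id"], eol_id] if eol_id
  end
end

No name resolution happens here at all – the join is a plain comparison of IDs, which is why it is so fast.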

Alexander Myltsev

Global Names is happy to welcome a new member – Alexander Myltsev (@alexander-myltsev on GitHub). Alexander is of parboiled2 fame. Parboiled2 is a Parsing Expression Grammar parser for Scala, and it originated from code Alexander wrote as a Google Summer of Code participant in 2013.


Alexander lives in Moscow and currently works on a port of the biodiversity parser to Scala; the project is called gnparser. The new parser is compatible with Java, JRuby, Jython, and everything else written for the Java virtual machine environment. When the parser is ready it will be the basis of a new Scala-based collection of GN tools.

Alexander has been working with us for a few months now, but I was waiting with the announcement until the major paperwork hurdles were solved.

Sysopia release v0.5.0

Last week was the end of the Google Summer of Code season. Of the two projects we mentored, one was not really about biology. It was a project for system administrators: a visualization tool for statistics about CPU usage, memory, disk space, etc.

(image: Sysopia)

Everybody who runs complex biodiversity informatics projects knows how important it is to monitor your systems. There are several open source tools for that – Nagios, Sensu, Graphite, Systemd, Collectd…

Our monitoring system of choice is Sensu. It is a very flexible and powerful tool, well designed and suitable for a large number of tasks. One of these is collecting statistics from computers and storing them in almost any kind of database. As a result, Sensu can be used both for monitoring critical events and for collecting data about systems. The question, however, is how to visualize all the collected data.

We designed Sysopia to do exactly that. During the summer @vpowerrc expanded the original prototype and created a powerful and flexible visualization tool which is capable of giving a system administrator an understanding of what is happening with 2-20 computers at a glance, receiving live updates, and comparing today’s statistics with up to one year of data. We already use Sysopia in production, and we are going to deploy it for Global Names as soon as our new computers are in place.

You can read more about Sysopia on its help page.

Site globalnames.org merged with GN blog

For quite a while we had a Drupal-based site for GlobalNames. As we now have a Jekyll-based blog, it was logical to move our static site as well. And now it has happened – both of them are accessible via globalnames.org

This new site will continue to be the ‘official’ blog for news about GNA; we will publish information about new releases of software, documents, and discussions about scientific names here.

One great thing about this move is that anybody with a GitHub account can participate – if you want to add a document or a blog post, just fork the repository, add a post to the _posts directory, and send a pull request. At some point we will add detailed instructions on how to do that.

GN Parser v.3.4.1

New version 3.4.1 of the GlobalNames Parser gem biodiversity is out. It adds the ability to parse author names starting with d', like Cirsium creticum d'Urv. subsp. creticum, which is now parsed correctly.
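A minimal sketch of parsing such a name with the gem (the canonical output shown is what I expect the default parser to return):

require "biodiversity"
parser = ScientificNameParser.new
parsed = parser.parse("Cirsium creticum d'Urv. subsp. creticum")
parsed[:scientificName][:canonical]
#output: Cirsium creticum creticum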

GN Parser v.3.4.0

New version 3.4.0 of the GlobalNames Parser gem biodiversity is out. It adds a new method that allows adding infraspecific ranks to canonical forms after the fact.

It was already possible to include ranks in canonical forms using the following code:

require "biodiversity"
parser = ScientificNameParser.new(canonical_with_rank: true)
parsed = parser.parse("Carex scirpoidea subsp. convoluta (Kük.)")
parsed[:scientificName][:canonical]
#output: Carex scirpoidea subsp. convoluta

Now it is also possible to add ranks to canonical forms after the fact, using the static method ScientificNameParser.add_rank_to_canonical

require "biodiversity"
parser = ScientificNameParser.new
parsed = parser.parse("Carex scirpoidea subsp. convoluta (Kük.)")
parsed[:scientificName][:canonical]
#output: Carex scirpoidea convoluta
ScientificNameParser.add_rank_to_canonical(parsed)
parsed[:scientificName][:canonical]
#output: Carex scirpoidea subsp. convoluta

Catalogue of Life Meeting in Champaign/Urbana

There was a short meeting about future directions of the Catalogue of Life, organized at the Species File Group in Champaign/Urbana. Concerning Global Names, it was a very productive meeting. It was great to understand the current state of the Catalogue of Life and to see that CoL is not losing momentum in spite of the financial problems of biodiversity informatics in general. There was definitely interest in creating more bridges between various projects.

Yuri Roskov presented a ‘pilot’ project of cooperation between the Encyclopedia of Life species pages group and CoL. Data about ~2000 species of scorpions had been harvested from an HTML-based site to be used in both projects. I think it was a great exercise, and I hope it will be just the first example of such cooperation.

From the point of view of Global Names there was good news too. I think it was everybody’s feeling that Global Names resolution is an important complementary service for the Catalogue of Life. Cooperation between various biodiversity projects was brought up again and again, as was the idea of organizing biodiversity infrastructure as a mix of several projects where GBIF, EOL, CoL, GN, etc. work as modules of a bigger puzzle, complementing and enhancing each other.

One thing that was brought up is the lack of a nomenclatural component in GN. I talked about our plans to integrate GN Usage Bank and GN Resolver and demonstrate the flow of nomenclatural data into the resolution/reconciliation process. We will try to make such a connection by November and demonstrate the workflow at the upcoming GBIF/CoL workshop.

Global Names Parser v3.2.1

A new version of the Scientific Name Parser is out.

This release introduces some backward-incompatible changes in the output.

Field “verbatim” is not preprocessed in any way

In previous versions we stripped spaces and newline characters around the name to generate the “verbatim” field. Now the name stays exactly the way it was entered into the parser.

Old behavior:

“Homo sapiens “ -> …“verbatim”: “Homo sapiens”

“Homo sapiens\r\n” -> …“verbatim”: “Homo sapiens”

New behavior:

“Homo sapiens “ -> …“verbatim”: “Homo sapiens “

“Homo sapiens\r\n” -> …“verbatim”: “Homo sapiens\r\n”

Global Names UUID v5 is added to the output as “id” field

{
    "scientificName": {
        "id": "16f235a0-e4a3-529c-9b83-bd15fe722110",
        "parsed": true,
        "parser_version": "3.2.1",
        "verbatim": "Homo sapiens",
        "normalized": "Homo sapiens",
        "canonical": "Homo sapiens",
        "hybrid": false,
        "details": [{
            "genus": {
                "string": "Homo"
            },
            "species": {
                "string": "sapiens"
            }
        }],
        "parser_run": 1,
        "positions": {
            "0": ["genus", 4],
            "5": ["species", 12]
        }
    }
}

Read more about UUID v5 in another blog post

Names with underscores instead of spaces are supported

Such names are often used in representations of phylo-trees. The parser now substitutes underscores with spaces during the normalization phase.

Normalized canonical forms do not have apostrophes anymore

I am removing the behavior introduced in v3.1.10 which preserved apostrophes in the normalized version of names like “Arca m’coyi Tenison-Woods”. Apostrophes are not compliant with the nomenclatural codes.

New behavior:

{
    "scientificName": {
        "id": "b3a9b1a3-f73c-5333-8194-a84c6583d130",
        "parsed": true,
        "parser_version": "3.2.1",
        "verbatim": "Arca m'coyi Tenison-Woods",
        "normalized": "Arca mcoyi Tenison-Woods",
        "canonical": "Arca mcoyi",
        "hybrid": false,
        "details": [{
            "genus": {
                "string": "Arca"
            },
            "species": {
                "string": "mcoyi",
                "authorship": "Tenison-Woods",
                "basionymAuthorTeam": {
                    "authorTeam": "Tenison-Woods",
                    "author": ["Tenison-Woods"]
                }
            }
        }],
        "parser_run": 1,
        "positions": {
            "0": ["genus", 4],
            "5": ["species", 11],
            "12": ["author_word", 25]
        }
    }
}

New UUID v5 Generation Tool -- gn_uuid v0.5.0

We are releasing a new tool – gn_uuid – to simplify the creation of UUID version 5 identifiers for scientific name strings. UUID v5 has features which are particularly useful for the biodiversity community.

UUID version 5: Description

Universally unique identifiers are very popular because for all practical purposes they guarantee globally unique IDs without any negotiation between different entities. There are several ways UUIDs can be created:

| UUID version | Uniqueness is achieved by |
|--------------|---------------------------|
| version 1 | using the computer’s MAC address and time |
| version 2 | like v1, plus adding info about the user and local domain |
| version 3 | using an MD5 hash of a string in combination with a name space |
| version 4 | using pseudo-random algorithms |
| version 5 | using a SHA1 hash of a string in combination with a name space |

UUID v5 is generated using information from a string, so everyone who uses this method will generate exactly the same ID out of the same string. Interested parties do need to agree on the generation of a name space, and after that, no matter which programming language they use, they will be able to exchange data about a string using their identifiers.

This gem already has the DNS domain “globalnames.org” defined as a name space, so generation of the UUID v5 becomes simpler.
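Under the hood this is the standard two-step derivation. Here is a sketch of the algorithm using the generic uuidtools gem, assuming gn_uuid derives its name space from the DNS domain exactly as described above:

require "uuidtools"

# Step 1: derive the globalnames.org name space from the standard DNS name space.
gn_namespace = UUIDTools::UUID.sha1_create(UUIDTools::UUID_DNS_NAMESPACE,
                                           "globalnames.org")

# Step 2: hash the name string within that name space.
UUIDTools::UUID.sha1_create(gn_namespace, "Homo sapiens").to_s
#output: 16f235a0-e4a3-529c-9b83-bd15fe722110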

I believe UUID v5 creates very exciting opportunities for the biodiversity community. For example, if one expert annotates a string or attaches data to it, this information can be linked globally and then harvested by anybody, without any preliminary negotiation.

Quite often researchers make an argument that a scientific name is an identifier on its own and there is no need for another level of indirection like a UUID. They are right that a scientific name string can be an identifier; however, scientific names have severe shortcomings in such a role.

Why Scientific Names are Bad Identifiers for Computers

Scientific name strings have different lengths

More often than not, identifiers end up in databases and are used as a primary index to sort, connect, and search data. Scientific name strings vary from 2 bytes to more than 500 bytes in length. So if they are used as keys in a database they are inefficient: they waste a lot of space and become less efficient for finding or sorting information, because an index key size is usually determined by the largest key.

UUIDs always have the same, rather small size – 16 bytes. Even when UUIDs are used in their “standard” string representation, they are still reasonably small – 36 characters. Storing them in a database as a number is obviously more efficient.
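For example, the 36-character string representation collapses back into the 16 raw bytes (a quick Ruby illustration):

hex = "16f235a0-e4a3-529c-9b83-bd15fe722110".delete("-") # 32 hex digits
[hex].pack("H*").bytesize
#output: 16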

It is hard to spot differences in name strings

It is very hard for the human eye to spot the difference between strings like these:

  • Corchoropsis tomentosa var. psilocarpa (Harms & Loes.) C.Y.Wu & Y.Tang

  • Corchoropsis tomentosa var. psilocanpa (Harms & Loes.) C.Y.Wu & Y.Tang

It is much easier with their corresponding UUIDs:

  • 5edecb2b-903f-54f1-a087-b47b3b021fcd

  • 833c664b-7d00-5c3b-97a4-98b0ab7d0a9a

Name strings come in different encodings

Currently Latin1, UTF-8, and UTF-16 are the most popular encodings used in biodiversity. If the authorship or the name itself has characters outside of the 128 characters of the ASCII code, identical-looking names will be quite different for computers.

Name strings are less stable because of their encoding

When names are moved from one database to another, or from one paper to another, sometimes they do not survive the trip. If you have spent any time looking at scientific names in electronic form, you have seen something like this:

  • Acacia ampliceps ? Acacia bivenosa

  • Absidia macrospora V�nov� 1968

  • Absidia sphaerosporangioides Man<acute>ka & Truszk., 1958

  • Cnemisus kaszabi Endr?di 1964

Usually names like these had been submitted in a “wrong” encoding and some characters in them were misinterpreted. A UUID, on the other hand, is just a hexadecimal number, which can transition between various encodings more safely.

Name strings might look the same in print or on screen, but be different

  • Homo sapiens

  • Homo sаpiens

These two strings might look exactly the same on a screen or printed on paper, but in reality they are different. Here are their UUIDs:

  • 16f235a0-e4a3-529c-9b83-bd15fe722110

  • 093dc7f7-5915-56a5-87de-033e20310b14

The difference is that the second name has a Cyrillic а character, which in most cases will look exactly the same as the Latin a character. And when the names are printed on paper there is absolutely no way to tell the difference. The UUIDs tell us that these two name strings are not the same.
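A quick Ruby check exposes the impostor by comparing code points (the second string below contains the Cyrillic а):

"Homo sapiens".codepoints.last(7)
#output: [115, 97, 112, 105, 101, 110, 115]
"Homo sаpiens".codepoints.last(7)
#output: [115, 1072, 112, 105, 101, 110, 115]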

Nothing prevents us from continuing to use name strings for human interaction

One argument that people often give is that it is much easier for users to type

http://biosite.org/Parus_major

than

http://biosite.org/47d61c81-5a0f-5448-964a-34bbfb54ce8b

For most of us this is definitely true, and nothing prevents developers from creating links of the first type while still using UUIDs behind the scenes.

Why UUID v5 is better than any other UUID as a scientific name identifier

  • They can be generated independently by anybody and still be the same for the same name string

  • They use the SHA1 algorithm, which does not have the (extremely rare) collisions found for the MD5 algorithm

  • The same ID can be generated in any popular language following a well-defined algorithm

Crossmap Tool v0.1.5

A new version of the gn_crossmap tool is out.

The Global Names Crossmap tool allows matching names from a spreadsheet to names from any data source in the Global Names Resolver.

The main change in this version: the output file with crossmap data now contains all fields from the original input document, which allows filtering and sorting the data using any field from the input.

Other changes are:

  • @dimus - #5 - All original fields are now preserved in the output file.

  • @dimus - #3 - If the ingest has more than 10K rows, the user will see logging events.

  • @dimus - #4 Bug - Added error messages for when headers don’t have the necessary fields.

  • @dimus - #2 - Header fields are now allowed to have trailing spaces.

  • @dimus - #7 Bug - An empty rank does not break crossmapping anymore.

  • @dimus - #1 Bug - Added the missing rest-client gem.

iDigBio API Client v0.1.1

In a few weeks there will be an iDigBio API hackathon. As I mentioned earlier, we decided to add another API client, written in Ruby, before the hackathon starts. So Greg Traub and I are releasing an iDigBio API client written in Ruby today.

Greg started to make a Ruby client at iDigBio. I took his code and refactored it into a Ruby gem. So now instead of 0 we have 2 Ruby clients :smile:

This is the very first release, so if you start using it and find something wrong or missing, please submit an issue. The gem uses the beta API, so sometimes it might get ‘stuck’. This problem will go away when the beta API moves to production.

Scientific Name Parser v3.1.10

A new version of the Scientific Name Parser is out.

Addressing most of Issue #7

Do not parse non-virus names containing RNA

If a name was not detected as a virus but contains the word RNA, it will not be parsed anymore. This is a problem for some surrogate names, like Candida albicans RNA_CTR0-3, but they are very rare.

| Name | Action |
|------|--------|
| Candida albicans RNA_CTR0-3 | Not parsed |
| Alpha proteobacterium RNA12 | Not parsed |
| Ustilaginoidea virens RNA virus | Not parsed, marked as virus |
| Calathus (Lindrothius) KURNAKOV 1961 | Parsed as before |

Better detection of virus names

Names containing virophage, *NPV, *satellite, or *particle are marked as ‘viruses’ and not parsed:

Gossypium mustilinum symptomless alphasatellite
Okra leaf curl Mali alphasatellites-Cameroon
Bemisia betasatellite LW-2014
Tomato leaf curl Bangladesh betasatellites [India/Patna/Chilli/2008]
Intracisternal A-particles
Saccharomyces cerevisiae killer particle M1
Uranotaenia sapphirina NPV
Spodoptera exigua nuclear polyhedrosis virus SeMNPV
Spodoptera frugiperda MNPV
Rachiplusia ou MNPV (strain R1)
Orgyia pseudotsugata nuclear polyhedrosis virus OpMNPV
Mamestra configurata NPV-A
Helicoverpa armigera SNPV NNg1
Zamilon virophage
Sputnik virophage 3

Better handling of species/infraspecies epithets with apostrophe

Names like the ones below are now parsed correctly. Their normalized/canonical forms preserve the apostrophe:

Junellia o'donelli Moldenke, 1946
Trophon d'orbignyi Carcelles, 1946
Arca m'coyi Tenison-Woods, 1878
Nucula m'andrewii Hanley, 1860
Eristalis l'herminierii Macquart
Odynerus o'neili Cameron
Serjania meridionalis Cambess. var. o'donelli F.A. Barkley

Global Names Grant Goes to Illinois

I did some soul-searching, advice-gathering, thinking, planning, and crystal ball gazing. And it seems that moving the Global Names grant and myself to the Species File Group is the right decision. Why? Because the Marine Biological Laboratory is a hard-core research institute which completely depends on grants, and as such is not well suited for infrastructure projects. Global Names is definitely an infrastructure project, and I know very well how bad it is to be responsible for an infrastructure project and not be able to work on it. It is just not a good way to do business.

It is my 8th year at MBL. I enjoy MBL, and I love living on Cape Cod. I love the immense energy of the MBL collective mind. I met really amazing people, amazing scientists here. I worked with great people on the Encyclopedia of Life project. And yet I was never sure if I would be there next year, or sometimes next month. I had weeks and months when I had no ability to move forward with Global Names, because it had no financial support at the time.

The Species File Group has long-term financing, allowing a long-term commitment. They are interested in Global Names; they want me to continue to develop it and integrate it with the Catalogue of Life. These are my goals too. David Eades understands that Global Names will need a long-term investment in hardware, and he provides a generous annual fund for that. It means no more 7-year-old computers running Global Names services. I also hope it will help to integrate the Global Names Usage Bank, a crucial GN component developed by Rich Pyle and Rob Whitton.

Another big factor is the ability to work closely with the programmers and taxonomists of the SFG. At MBL I am now the only one on the EOL project (Jeremy is remote), and I feel I am getting stale without nomenclators/taxonomists around.

Of course we need to figure out how to move the current GN computers without shutting down services for a few weeks. I imagine I will have to rent an expensive cloud setup for a month or two and run GN from there while the machines are in transit. We will have to figure out how to transfer the grant, make a new hire for the project, etc. But all of these are good problems to solve. I believe GN suddenly got a brighter future ahead.

New tool to crossmap checklists

Yesterday I released a new command line tool for name resolution called gn_crossmap. It is designed for people who work with checklists of scientific names using spreadsheet software (MS Excel, Apple Numbers, Open Office, Libre Office, Google Sheets, etc.) and want to compare the names they have with another reference source. The program takes a spreadsheet saved as a csv file as input and generates another csv-based spreadsheet with resolution data. Examples of input and output are included with the code. The README file describes how to use the project from the command line or as a Ruby library.

This program requires an internet connection and Ruby >= 2.1 installed on the machine.

Basic usage is:

$ gem install gn_crossmap
$ crossmap -i input.csv -o output.csv -d 1

where:

| short | long | description |
|-------|------|-------------|
| -i | --input | the checklist’s spreadsheet saved as a csv file |
| -o | --output | path to the output file; default is output.csv in the current directory |
| -d | --data-source-id | ID of one of the GN Resolver data sources; Catalogue of Life id (1) is the default |
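For use as a Ruby library, the call presumably mirrors the CLI flags above. This is a hypothetical sketch – the method name and signature are my assumption, so check the README for the exact interface:

require "gn_crossmap"

# Hypothetical call: input csv path, output csv path, data source id.
GnCrossmap.run("input.csv", "output.csv", 1)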

A web interface to this program is also in the works.

This project started at the Catalogue of Life workshop in Leiden, which happened in March 2015. The main focus of the hackathon was to figure out how to help national checklist teams create, maintain, and compare the data in their checklists. We determined 3 main approaches:

  1. Crossmapping checklists against other checklists and/or reference sources
  2. Annotation of crossmapped data – ability to share metadata, report mistakes
  3. Distribution of species – how to fix occurrence errors for a country

A hackathon group which worked on crossmapping produced code that would compare checklists against the Catalogue of Life. The gn_crossmap program I am releasing is based heavily on what we learned during the hackathon. The crossmapping code is mostly based on use cases from Rui Figueira and Wouter Koch. During the hackathon we also determined ways to further improve the quality of name resolution by:

  • Using the infraspecies rank (var., f., subsp., etc.) in the matching and penalizing the score if ranks are different
  • Taking into account whether matching authors are basionym or combination authors
  • Using meta-information attached to names via sensu…, not … etc. to distinguish name usages

Kickoff Meeting for Disseminating Phylogenetic Knowledge Project

Yesterday Arlin Stoltzfus organized a kickoff meeting for the project that got funded by NSF this year – “Collaborative Research: ABI Development: An open infrastructure to disseminate phylogenetic knowledge”. Global Names is participating in the project, and I believe it will be an interesting ride.

The idea behind it is pretty cool. Imagine that someone works on a group of organisms. They submit names of the organisms to a service, and the service builds a phylogenetic tree out of the names. When the tree is created it will start its own life, similar to a repo on GitHub. People will be able to reuse it, annotate it for their own purposes, and create derivative trees. It would be a pretty nice feature for the Encyclopedia of Life to see how species belonging to a particular clade are related to each other through phylogeny. One problem with the creation of such trees is name normalization. Scientific names can have many alternative spellings, so to find phylo-information we will need to be able to map names from a user’s list to names which are recognized by the service.

I suspect that the crossmapping tool I am working on this week might be adjusted for this particular task, but as usual the devil is in the details, and we will find out the requirements during the design process.

New Higher Level Classification for Catalogue of Life

Bob Corrigan sent around an email pointing at a paper in PLOS which describes the new classification adopted by the Catalogue of Life. After looking through the paper, my understanding is that it is a step forward and at the same time business as usual for CoL.

The Catalogue of Life needs a solid managerial classification for its data, and according to the article the goal is achieved:

Our goal, therefore, is to provide a hierarchical classification for the CoL and its contributors that (a) is ranked to encompass ordinal-level taxa to facilitate a seamless import of contributing databases; (b) serves the needs of the diverse public-domain user community, most of whom are familiar with the Linnaean conceptual system of ordering taxon relationships; and (c) is likely to be more or less stable for the next five years. Such a modern comprehensive hierarchy did not previously exist at this level of specificity.

Classifications are a dirty business, so as usual –

These actual complexities of phylogenetic history emphasize that classification is a practical human enterprise where compromises must be made

Altogether it looks like CoL gets a new hierarchical face.

Starting GSOC 2015 Sysopia Project

Our Google Summer of Code student, Viduranga Wijesooriya, and I had our first meeting today to start the Google Summer of Code project Sysopia. The project is not about names; however, I consider it important for EOL and for GN, as it allows us to spend less time on administration of computers and more time on writing code.

The idea behind the project is to create a dashboard that allows us to see what is going on with all the computers in a system at one glance. The system shows several metrics graphs, each of which shows information about all machines at the same time. By default it shows data for 24 hours, so if everything works well it is enough for a sysadmin to check out Sysopia once a day to have a very good idea of what has been happening with the system from the moment Sysopia was installed. We installed it for EOL, and I find it very useful.

(screenshot: Sysopia)

Not much functionality is there yet, but the graphs render well, and it is possible to get point data by hovering over a line, and to highlight a particular machine by hovering over the machine name in the dialog box.

Currently the only backend for Sysopia is Sensu, but we are going to expand it to other backends after we nail down the user interface.

Species File Group -- new home for GN?

Trying to find a permanent home for GN, I travelled to Champaign-Urbana on Thursday-Friday to visit the Species File Group at the University of Illinois. I know this group rather well, as Lisa Walley and I went there more than a year ago for a hackathon organized by Matt Yoder.

I am quite impressed with the work this group does, and when Matt suggested that I join them, my first thought was: this might be a way to make the Global Names financial situation more reliable!

Currently Global Names completely depends on grants, and grants come and go. It is a pretty bad way to finance an infrastructure project: you do not want roads or electricity to depend on unstable funding. We always want to be able to drive to a store or a concert, and we always want to have lights in our homes. The same goes for projects like GN. If people start using them, they start to depend on them, and it is a really bad situation when funding dries up and the service deteriorates as a result.

The visit went well. I had a great opportunity to talk to David Eades, Matt, Dmitry Dmitriev, and Yuri Roskov. It was especially great to talk to Yuri, as he is the main person behind the Catalogue of Life content. I consider the Catalogue of Life to be one of the most important use cases and partners for GN, and talking to Yuri for an extended amount of time was extremely helpful.

Originally the position was about helping Yuri automate his workflow; however, when Matt and I talked on Skype the emphasis started to shift towards supporting Global Names. David Eades, Matt, and Yuri all believe that GN is a missing link in the Catalogue of Life functionality, and as such, working closely with Yuri and figuring out what GN can do for the Catalogue of Life actually does help to automate some of the hard parts of Yuri’s work.

The meeting was very encouraging and inspiring. Now I have to think hard and make a decision. I would love to be able to keep my house on Cape Cod, and I would love to be able to come in summer and work with MBL and RISD. And it seems nothing prevents me from spending 3 months on the Cape in summer if I move. On Monday I am going to talk about my trip at my work at MBL.

Good names, bad names?

I just had a great conversation with Jorrit Poelen from GloBI. Jorrit uses GN Resolver to clean up names for GloBI, and on top of knowing that a name exists in GN Resolver he also needs to know if the name is actually valid.

The Resolver by design contains ‘good’ and ‘bad’ names. We do need to know what kinds of misspellings exist in the wild and map information associated with them to good names. These misspellings and outright wrong names make Jorrit’s life much harder, as we do not have a tool that clearly marks ‘good’ names as good. There are ways to be more or less sure that a name is good:

  • If names belong to a specific clade, use only highly curated sources
  • Count how many sources know about a name
  • See if a name appears in at least one curated source
  • Check if a name got parsed

But all these approaches are not universal and do not give a clear answer. So what would be a solution?

It seems that a good solution would be to write a classifier which takes into account all relevant features and meta-features of a name, considers them, and then puts the name into the ‘good’ or ‘bad’ bucket. Every name has several features associated with it, and we can train a Bayes classifier to decide if a name is ‘good’ or ‘bad’ using these features. When it is done running through our ~20 million names, each of them will be marked as trusted or not.
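As a toy illustration of that decision step in Ruby – the features, priors, and likelihoods below are invented for the example and would really come from training data:

FEATURES = %i[parsed in_curated_source known_by_many_sources]

# Invented numbers standing in for trained probabilities.
PRIOR = { good: 0.8, bad: 0.2 }
LIKELIHOOD = {
  good: { parsed: 0.99, in_curated_source: 0.9, known_by_many_sources: 0.8 },
  bad:  { parsed: 0.7,  in_curated_source: 0.1, known_by_many_sources: 0.2 }
}

# Pick the class with the highest log-probability given the name's features.
def classify(name_features)
  LIKELIHOOD.map do |klass, probs|
    score = Math.log(PRIOR[klass])
    FEATURES.each do |f|
      p = probs[f]
      score += Math.log(name_features[f] ? p : 1 - p)
    end
    [klass, score]
  end.max_by { |_, score| score }.first
end

classify(parsed: true, in_curated_source: true, known_by_many_sources: false)
#output: :good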

I am pretty sure that such a classifier, especially in its first iteration, will make mistakes. How can we deal with them? Here is an idea: when the API returns data to a user, the data will have two new fields – ‘trusted’ as yes/no – and a URL to complain about this decision, something like:

http://resolver.globalnames.org/trusted?name_id=123&wrong_value=1

People can just copy and paste this URL into a browser, or set it as a “Report a mistake” button for every name in their results HTML. If this button is pushed, GN Resolver will register a human curation event, and data from this event will be used to improve the performance of the classifier algorithm. Human curations will trump the computer algorithm, and they can be collected in a new data source for feedback…

Details of the interface can be decided later, when we build the classifier. I know that the problem of separating trusted names from untrusted ones is a task that just about everybody who uses the Resolver actively has asked me about at one time or another. So who can build it, and when? I am now thinking that our Google Summer of Code student might be interested in making it happen instead of improving NetiNeti. I personally think automatic curation of names is more important.

Jorrit submitted an issue about this idea: GloBI issue

iDigBio hackathon preparation

On June 3rd I am going to the iDigBio hackathon meeting, which will be about finding ways to enhance their API. Today there was a pre-hackathon meeting where the iDigBio folks explained how they implemented their API, its backend, and how they use their own API for their GUIs.

I was very impressed with what they have done. The backend is based on Elasticsearch; the API is RESTful and JSON-based. What was a surprise for me is that the API calls often take pure JSON as arguments. It was also great to see how they simplified Elasticsearch queries for the API, keeping the API queries simple and powerful at the same time.

They also made Python and R clients for the API. So I will try to make a Ruby version of the API client before the hackathon.

Launching NetiNeti Google Summer of Code Project

Today we had our first meeting to start the NetiNeti enhancement project funded by Google Summer of Code. The student who was selected to do the job is Wencan Luo, a 4th-year graduate from the University of Philadelphia.

The purpose of the project is to improve the performance of our NLP-based scientific name finding tool – NetiNeti – developed 5 years ago by [Lakshmi Manohar Acella][lakshmi]. Let’s see how it goes…

Official coding time starts on May 27th. For now we are going through a design phase: figuring out who the users of the application are; then we will try to do an idealized design of its features, find implementation paths and the existing limitations for implementing the features, and then Wencan is going to explore the features.

For the process we are going to use ZenHub to manage issues, obviously GitHub for the code, and GitHub with Jekyll for the ability to have a project-related blog.

Google Summer of Code 2015 starts

Today is the official start of the Google Summer of Code work. Up to now, organizations submitted their ideas, organizations were chosen by Google, students decided which ideas and organizations they liked and submitted their proposals, Google decided how many projects from each organization they were willing to fund, and finally the best proposals from students were matched to the funded ideas. Encyclopedia of Life and Global Names submitted 4 proposals and we got 3 of them funded, so congratulations to us and to the students :)

Student: Avinash Daiict
Mentor: Amr Morad

DevOps Dashboard Sysopia

Student: Viduranga Wijesooriya
Mentor: dimus

Finding Scientific Names

Student: Wencan Luo
Mentor: dimus

I was very happy to see how many people were interested in the idea of finding scientific names in texts – we had 12 proposals, so the competition was fierce this year! I think we got great students, and I am looking forward to Google Summer of Code 2015.

Gracious gift from Encyclopedia of Life

The Encyclopedia of Life folks made a truly glorious present to Global Names, consisting of several Dell 710 servers which were used for running EOL at Harvard. Now that the site has moved to the Smithsonian, EOL is donating some of these computers to GN, and 10 of them are already at the Marine Biological Laboratory, waiting to be plugged into internet and electricity. Another truly amazing gift from EOL is 14 100GB hard drives which will run the GN databases.

I feel happy, warm, and fuzzy – thank you, EOL! I hope that with this new hardware I will be able to increase GN capacity about 5x using the current code! The next step is installing Chef, Docker, and GN applications to serve the biodiversity community.

GNA work continues

Last year we got the second round of funding for Global Names Architecture development. Our first grant had been about exploring how to find scientific names in texts, how to crossmap different spelling variants of the same names to each other, how to connect names to literature collected at the Biodiversity Heritage Library, how to organize scientific name usages, and how to register new zoological scientific names electronically. Several interesting projects spun out of this effort, and you can read about them at the Global Names site.

It was a hard year for the Encyclopedia of Life, where I work, and for Global Names. I had to spend most of the first 8 months after our second NSF grant was funded helping EOL with system administration support and transferring the EOL site from Harvard to the Smithsonian Museum of Natural History. That is done now, and I am happy to be able to work on the Global Names project again!

What kind of resources do we have now? 2 months of Paddy’s (David Patterson) time, 2 months of Rich Pyle’s time, about 1.5 years of mine, and 1 year of another developer. We also got 2 excellent participants for Google Summer of Code this year, so there are 6 months of their time as well. And the quest for further funding continues as I write.

The Encyclopedia of Life kindly donated a lot of hardware, and the Marine Biological Laboratory provides us with a whole rack of space and a fast internet connection. So we are set for an exciting year ahead!

What are the plans? This grant covers work on name finding and name resolution. We will try to find major use cases (Arctos, EOL, iDigBio, Catalogue of Life, GBIF) and satisfy their needs. We expect this will cover the needs of 90% of other users, and the remaining 10% of functionality will trickle in by means of going through GitHub issues, fixing bugs, adding features, and thinking about new ideas.