Job Opportunities

Submit job offers, open positions, internships and volunteer work. Find the PhD candidates, developers and other DBpedians for the work you need.

Hi Sandra,

I am still looking for suitable volunteer work, and will have plenty of time starting four weeks from now. What do you suggest that could fit my capabilities? (Not too much programming, but data structures are OK.)

Hi Gerard,

Thank you for your engagement :slight_smile: That is much appreciated. First of all, you could oversee the Forum as a moderator and help us keep the topics answered and the community happy.

If you would like to moderate a specific category, that’s fine with us. Just let us know.

Additionally, I bet @kurzum has things on his mind that are very much suitable for you.

All the best


Hi @gerardkuys, I was actually counting on you. Let me sketch what kind of work you might like:

Lessons learned (Growth and maintainability)

As you know, it worked quite well at the beginning, but growing it further and maintaining it is hard, to the point where it becomes infeasible to progress. The reason is that the details in Wikipedia get too heterogeneous after a while, and you have to invest more and more work while getting fewer results. So we tried to focus on property mappings to existing open RDF datasets from LOD instead, which seemed quite effective. DNB use case: mapping gndo:dateOfBirth to dbo:birthDate brings millions of triples into DBpedia; there are also a lot of sameAs links already. See here:
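To make the idea concrete, here is a minimal sketch of what such a property mapping does, with plain Python tuples standing in for RDF triples (the subject URI and date are made up for illustration):

```python
# Sketch of the DNB-style property mapping: every gndo:dateOfBirth triple
# yields a corresponding dbo:birthDate triple. Triples are plain
# (subject, predicate, object) tuples; "gnd:123456789" is a hypothetical ID.
GNDO_DOB = "gndo:dateOfBirth"
DBO_DOB = "dbo:birthDate"

triples = {("gnd:123456789", GNDO_DOB, "1749-08-28")}

# Apply the mapping: copy values from the GND property to the DBpedia one.
triples |= {(s, DBO_DOB, o) for s, p, o in triples if p == GNDO_DOB}

assert ("gnd:123456789", DBO_DOB, "1749-08-28") in triples
```

One such rule per mapped property is all it takes to pull a source dataset's values into the DBpedia namespace.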

Lessons Learned (Tangibility/Ontograte)

Regarding the ontology, we focused a lot on the class hierarchy in the past, but after all these years of discussion there was hardly any consensus to move forward. It seems that the best way is to keep the core flat and minimal as it is and make some improvements. If somebody needs a richer ontology, they can just fork it or create an additional one. DBpedia has 6-8 different taxonomies already, so it wouldn’t harm us to have 20 more. That is also a possibility to turn your contribution from back then into an add-on rather than changing the core. The thing that seemed feasible in the past was integrating and mapping the properties as described above, so we can just focus on this.
The main lesson learned regarding property mappings is that it is easy for some and hard for others. We are trying to separate these with the Ontograte methodology.
Birth date is an easy example. Persons are well defined (tangible) and can be sameAs-linked. A birth date should also be unique and stable over time. I am sure you can find a million arguments where there is uncertainty, but this uncertainty concerns perhaps 0.01% of persons in historical contexts; for the remaining 99.9% the mapping should work.
Another example is books vs. papers. For books there are several abstract identities and translations, but normally you know that researcher X has published Y papers. So one is infeasible to map, while the other could yield a well-curatable space for references, just by adding some sameAs links and equivalentProperty statements.
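A toy sketch of how those two statement types work together (made-up URIs, plain tuples instead of RDF, and naive forward-chaining rather than a real reasoner):

```python
# How a sameAs link plus an equivalentProperty statement lets a curated fact
# flow between datasets. All identifiers below are hypothetical.
SAME_AS, EQ_PROP = "owl:sameAs", "owl:equivalentProperty"

triples = {
    ("dbr:Author_X", SAME_AS, "viaf:123"),            # entity link
    ("gndo:dateOfBirth", EQ_PROP, "dbo:birthDate"),   # property mapping
    ("viaf:123", "gndo:dateOfBirth", "1920-05-01"),   # fact in the source dataset
}

def step(ts):
    """One naive forward-chaining step over sameAs / equivalentProperty."""
    same = {(a, b) for a, p, b in ts if p == SAME_AS}
    same |= {(b, a) for a, b in same}                 # sameAs is symmetric
    eq = {(a, b) for a, p, b in ts if p == EQ_PROP}
    eq |= {(b, a) for a, b in eq}                     # so is equivalentProperty
    out = set(ts)
    for s, p, o in ts:
        out |= {(s, q, o) for a, q in eq if a == p}   # rename the property
        out |= {(t, p, o) for a, t in same if a == s} # carry over to the twin
    return out

inferred = step(step(triples))
assert ("dbr:Author_X", "dbo:birthDate", "1920-05-01") in inferred
```

Two short statements are enough to give the DBpedia resource a birth date it never had locally; that is the leverage of the property-mapping approach.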

Future platform
gave us their mapping. We used it to build the new DBpedia knowledge graph including DNB, MusicBrainz and Geonames, some [stats]()

So what we would like to do is create an effective tool to map KB data, and that of other national libraries, to create an ultimate graph about European or even worldwide authors.
This ultimate graph should be centrally curated, but used to improve KB, DNB, Wikidata and Wikipedia, simply by helping the sources to collaborate.
We are also building this for other domains such as energy and power plants or medicine. We would also use these mappings to make the Dutch DBpedia richer.
Would this interest you?

Hi Sebastian and Sandra,

Yes, it would! Let me have a closer look at the details first and then I’ll let you know how I feel about it. But by the looks of it, be sure to count me in. I shall have a look at the moderator request as well.

Right now I am busy wrapping up at the Dutch National Police, so please do not expect too much at short notice.

All the best,


Excellent, so the first batch will be something like persons from GND, KB, lobid, ORCID, VIAF, swissbib, reroid. First the mapping, and then we will try to debug the links based on discrepancies such as disparate birth dates. We will also need four weeks to get the debug stats and curation interface ready.
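The discrepancy-debugging step can be sketched in a few lines: once the mapping gives us the same person's birth date from several linked sources, any record where the sources disagree is a candidate for a bad link or a data error. Source names, IDs and dates below are all made up:

```python
# Hypothetical sketch of "debug the links via discrepancies": collect the
# birth date each linked source reports for a person, then flag persons
# where the reported values disagree.
records = {
    "person/1": {"GND": "1920-05-01", "VIAF": "1920-05-01", "KB": "1920-05-01"},
    "person/2": {"GND": "1875-03-12", "VIAF": "1875-03-21"},  # typo or bad link?
}

def find_discrepancies(records):
    """Return only the persons whose sources report more than one value."""
    return {pid: vals for pid, vals in records.items()
            if len(set(vals.values())) > 1}

suspicious = find_discrepancies(records)
assert list(suspicious) == ["person/2"]
```

The suspicious records would then feed the curation interface, where a human decides whether the link or one of the source values is wrong.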