Description
AI has come to dominate the research landscape, unleashing the fourth industrial revolution.
This work consists of using Neural Concept [1] Discovery to learn new vocabulary from the Linked Open Data (LOD) cloud and thereby improve the accuracy of the Neural SPARQL Machine, as well as of any other model that relies on DBpedia for natural language processing (NLP).
[1] Google defines a concept as "an idea or mental image which corresponds to some distinct entity or class of entities, or to its essential features, or determines the application of a term (especially a predicate), and thus plays a part in the use of reason or language."
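As a rough illustration (not part of the proposal itself), candidate vocabulary can be harvested from DBpedia by querying its public SPARQL endpoint for labels of concepts linked to a seed category. The sketch below assumes the standard DBpedia endpoint and SKOS `broader` links; the helper names are hypothetical.

```python
import urllib.parse
import urllib.request

def build_concept_query(seed_uri: str, limit: int = 50) -> str:
    """Build a SPARQL query fetching English labels of concepts
    whose skos:broader link points at the given seed category."""
    return f"""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?concept ?label WHERE {{
        ?concept skos:broader <{seed_uri}> ;
                 rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }} LIMIT {limit}
    """

def fetch_concepts(seed_uri: str,
                   endpoint: str = "https://dbpedia.org/sparql") -> bytes:
    # Submit the query over HTTP GET and return the raw JSON result bytes.
    params = urllib.parse.urlencode({
        "query": build_concept_query(seed_uri),
        "format": "application/sparql-results+json",
    })
    with urllib.request.urlopen(f"{endpoint}?{params}") as resp:
        return resp.read()

# Example seed: the Machine learning category on DBpedia.
query = build_concept_query(
    "http://dbpedia.org/resource/Category:Machine_learning")
```

Each returned label is a candidate term to add to the Neural SPARQL Machine's vocabulary; in practice one would also follow other LOD predicates (e.g. `dct:subject`) and filter noisy labels.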
Goal
Create a Neural Concept Expansion Dataset
Impact
Enable better Translation, Search, QA and Link Discovery through better concept mappings.
Hi there! I'm a GSoC student (an undergraduate at UMass Amherst) who's quite interested in this idea as well. Is this geared more towards Query Expansion (i.e., the IR problem) or Ontological Concept Expansion (using logic programming like Prolog and predicate learning)? The papers seem closer to the former, but I'd be excited to work on either if accepted; I'm also familiar with the latter from reading about the approaches used to build CMU's NELL project and other really cool work.