An abstractive summarization model based on DBpedia


The task of creating a textual summary from a given text is one that large language models can perform. However, they suffer from so-called “hallucinations”, in which the summary contains named entities that do not appear in the source text.
In general, we can use the structured knowledge of DBpedia to enhance language models. For now, we will focus on text summarization.


This project aims at using DBpedia, in particular DBpedia Spotlight, to provide:

  1. A new metric, the “hallucination index”, that measures the quality of a language model in terms of the number of hallucinations per document (or per 1,000 documents).
  2. An “equivalence list” of terms. For instance, “the president of the US” is equivalent to “Biden” if the text falls in the time frame 2021/01/20–today.
  3. A summarization model for the different languages in DBpedia.
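To make item 1 concrete, here is a minimal sketch of how an entity-level hallucination index could be computed: entities that appear in the summary but not in the source count as hallucinations. The capitalized-token heuristic for entity extraction is a placeholder assumption; the project would use DBpedia Spotlight annotations instead.

```python
import re

def extract_entities(text):
    # Naive stand-in for a real entity linker such as DBpedia Spotlight:
    # treat runs of capitalized words as named entities.
    return set(re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", text))

def hallucination_index(source, summary):
    # Fraction of summary entities that never appear in the source.
    source_entities = extract_entities(source)
    summary_entities = extract_entities(summary)
    if not summary_entities:
        return 0.0
    hallucinated = summary_entities - source_entities
    return len(hallucinated) / len(summary_entities)

source = "Joe Biden met Emmanuel Macron in Paris to discuss trade."
faithful_summary = "Joe Biden met Emmanuel Macron in Paris."
bad_summary = "Barack Obama met Emmanuel Macron in Berlin."

print(hallucination_index(source, faithful_summary))  # 0.0 (no hallucinated entities)
print(hallucination_index(source, bad_summary))       # 2 of 3 entities hallucinated
```

In practice, matching would go through DBpedia URIs rather than surface strings, so that “Biden” and “Joe Biden” resolve to the same entity.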


If we obtain better models, we will demonstrate the potential of using structured information (knowledge graphs such as DBpedia) to enhance language models.
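Item 2 above (the “equivalence list”) could be sketched as a temporal lookup table, where a surface form maps to an entity only within a validity interval. All entries and dates below are illustrative, not project data.

```python
from datetime import date

# Each entry: (surface form, canonical entity, valid_from, valid_to).
# Entries and dates are illustrative only.
EQUIVALENCES = [
    ("the president of the US", "Joe Biden", date(2021, 1, 20), date.max),
    ("the president of the US", "Donald Trump", date(2017, 1, 20), date(2021, 1, 19)),
]

def resolve(phrase, on):
    # Return the entity a phrase refers to on a given date, if known.
    for surface, entity, start, end in EQUIVALENCES:
        if surface.lower() == phrase.lower() and start <= on <= end:
            return entity
    return None

print(resolve("The president of the US", date(2022, 6, 1)))  # Joe Biden
print(resolve("The president of the US", date(2018, 6, 1)))  # Donald Trump
```

A real implementation would populate such a table from DBpedia facts with temporal qualifiers rather than hand-written entries.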

Warm up tasks

Read these papers and get the key points:

  1. Entity-level Factual Consistency of Abstractive Text Summarization (by researchers at Amazon)
  2. SKGSUM: Abstractive Document Summarization with Semantic Knowledge Graphs (Peking University)


Mariano Rico

Project size



NLP, NMT (neural machine translation), abstractive summarization, knowledge extraction.

Hi, I am interested in working on this project for GSoC 2023.

I have read the two papers linked. I see that the first: Entity-level Factual Consistency of Abstractive Text Summarization introduces two new metrics for measuring factual consistency. I see that the second: SKGSUM: Abstractive Document Summarization with Semantic Knowledge Graphs presents an architecture for summarization which models text using graphs.

In terms of how these fit in with the proposal mentioned above:

  • Do the two metrics serve as a basis/inspire a hallucination index?
  • Is the architecture a model for how information that is in graph form (i.e. knowledge graphs) can be leveraged for summarization?

Please let me know what the next steps would be. Should I write a pre-proposal? How sketched out would you like this to be before I send it to you?


Thanks, Nadia, for your interest in the proposal. Yes, the idea is to implement these metrics. Concerning the KGs, the question is how to transfer that information into the summarization model.



I am interested in this proposal since it follows my line of work: I have published on Spanish abstractive summarization and am currently working on methods to reduce entity hallucination.

Greetings, respected sir.

I am Arinjay Pathak, a final-year Bachelor of Engineering (B.E.) student. I am interested in machine learning, deep learning, and natural language processing, and have worked on projects in these domains. I am a beginner in open source and want to work on summarization. I won the Smart India Hackathon, organized by the Government of India, where I built a semantic search engine using Sentence Transformers. My other projects include speech emotion recognition on the RAVDESS dataset using model ensembling techniques.

I have also worked on text classification problems, where I achieved 95%+ accuracy on the UCI spam classification dataset.

I am familiar with the theoretical and practical aspects of traditional NLP, large language models, and statistics, which will help me develop the new “hallucination index” metric, as well as the “equivalence list” and the multilingual summarization model.

I am attaching my resume for your reference.

Greetings @mariano_rico, I have completed the warm-up tasks. What should I be doing next?

Hey @mariano_rico! I am Aditya Hari, a graduate student at IIIT-H. My research focuses on NLP, currently on data-to-text generation. I have plenty of experience using LLMs, KGs, etc. for different NLP applications. I have gone through the papers in the warm-up tasks and feel that I have a good grasp of the ideas and methods discussed. My current research thread also places special emphasis on addressing hallucinations, which is very relevant to this project.

Would appreciate your input on what to proceed with after the warm-up tasks.

Good Morning
I am Anuja, an NLP Ph.D. student at UIC interested in working on the GSoC idea of an abstractive summarization model based on DBpedia.
I went through the two warm-up papers. In SKGSUM, the authors create an entity graph, a sentence graph, and a discourse graph to abstractively summarise a single document. In the other paper, the authors quantify the entity-level factual consistency of generated summaries by proposing three metrics (precision-source, precision-target, and target recall) and three ways to improve them.
I have a few questions:

  1. For the GSoC task, do we have to use this or a similar model to build a summarisation model for different languages? And which language should we focus on first?
  2. Is there any code available from previous work, or do we have to build upon the warm-up task models?
  3. For building the summarisation model, will we use the public CNN/DM or XSum datasets, or is there another dataset? If we are using CNN/DM or XSum, are we using DBpedia to enhance the summarisation?
  4. In the paper Entity-level Factual Consistency of Abstractive Text Summarization, the authors only propose entity-level metrics, whereas relation-level hallucinations can also be present. Will the project also include a relation-level hallucination metric?
  5. Any guidance on writing the proposal? Or should I draft one and you could mentor me?

Thanks for taking the time.

Hi, @mariano_rico. I am Amogh Joshi, and I am currently pursuing my B.Tech. in Data Science and Artificial Intelligence at the Indian Institute of Technology Bhilai. I am interested in doing abstractive summarization as a GSoC project.
I have taken courses such as Machine Learning and Natural Language Processing, and I am interested in natural language processing using deep learning and machine translation.
I have worked on the following projects:

  • Research paper title generator → In this project, I used the arXiv dataset from Kaggle and generated titles for research papers from their abstracts using various Transformer models such as BERT, T5, etc. This was a text summarization task using Transformer models.
  • Wikipedia graph analysis → In this project, I extracted the text of Wikipedia pages, applied different text cleaning techniques, and applied various NLP models to obtain embeddings of the pages.
  • Various deep learning projects, such as emotion recognition using CNNs.

Through these projects and the research papers I have read, I have gained extensive interest and knowledge in NLP and Transformer models. I am also familiar with the statistical and mathematical aspects of NLP, which may help me contribute to the new “hallucination index” metric. I have completed the warm-up tasks you provided, and I would certainly like to know more about the project.

My GitHub: github

Hello @mariano_rico ,

This is Keerthana Sathish. I would like to contribute to this project. I’m pursuing my B.Tech. in Artificial Intelligence and Data Science.
I have submitted:

  1. Research paper: “Synthesizing Data Analytics Towards Intelligent Enterprise”, to IEEE.
  2. Review paper: “A Survey on IoT Enabled Smart Vision using PIR Sensor”.

I also have good theoretical knowledge of Natural Language Processing (NLP) and Deep Learning (DL). Please let me know whether I should submit the proposal to you before submitting it via the GSoC portal.

My LinkedIn:

Greetings to all!

My name is Joel Pardo, and I am a versatile professional with expertise in data science, artificial intelligence research, and entrepreneurship. I earned my data science degree from the University of Valencia, Spain, and am now obtaining a master’s degree in artificial intelligence from the Polytechnical University of Madrid. I currently hold a scholarship with the Ontology Engineering Group, where I continue to refine my expertise and actively participate in innovative projects.

My diverse background has allowed me to develop a unique skill set that I apply to a variety of innovative projects. I am proficient mainly in Python, R, and SQL, and I have developed my skills in various projects, leveraging tools and frameworks such as TensorFlow and PyTorch, among others.

I have contributed to the creation of a contract summarization tool. This software enables users to quickly review and understand the key points of complex corporate contracts, saving them time and reducing potential misunderstandings. It is built using Python and PyTorch.

I am really interested in this project. I believe my experience and passion for AI-driven solutions make me an ideal candidate to contribute to the abstractive summarization model based on DBpedia.


Joel Pardo.