Question Answering with Hugging Face Transformers
Author: Venkata Sudhakar
Question answering (QA) models extract answers to questions directly from a given text passage. This is called extractive QA: the model identifies the span of text in the context that best answers the question. Hugging Face provides a question-answering pipeline backed by models such as deepset/roberta-base-squad2, which is fine-tuned on the SQuAD 2.0 dataset. At ShopMax India, QA models help customers get instant answers from product manuals and FAQ documents without reading the entire document.

The question-answering pipeline takes two inputs: the question and the context passage. It returns the answer text along with a confidence score and the character positions of the answer within the context, which makes it straightforward to highlight the answer in the original text in a UI. The example below shows how to extract answers from a product FAQ document using the Hugging Face question-answering pipeline.
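The following sketch runs the question-answering pipeline over a sample FAQ. The FAQ text and warranty details here are illustrative, reconstructed to match the answers shown in the output below; exact confidence scores will vary with the model version.

```python
# Extractive QA over a product FAQ with the Hugging Face
# question-answering pipeline. The FAQ context below is illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "ShopMax India laptop warranty FAQ. Every laptop ships with a "
    "1-year manufacturer warranty covering hardware defects. Accidental "
    "damage such as drops or liquid spills is not covered under the "
    "standard warranty. An extended warranty for a second year can be "
    "purchased for Rs 3,500. To file a warranty claim, customers must "
    "provide the original purchase invoice and the product serial number."
)

questions = [
    "How long is the warranty for the laptop?",
    "Is accidental damage covered?",
    "What is the cost of the extended warranty?",
    "What documents are needed for a warranty claim?",
]

for question in questions:
    # Each result contains 'answer', 'score', and the character span
    # of the answer in the context ('start' and 'end').
    result = qa(question=question, context=context)
    print(f"Q: {question}")
    print(f"A: {result['answer']} (confidence: {result['score']:.2f})")
```

Because the pipeline returns `start` and `end` character offsets, `context[result['start']:result['end']]` recovers the exact answer span for highlighting.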
It gives the following output:
Q: How long is the warranty for the laptop?
A: 1-year (confidence: 0.89)
Q: Is accidental damage covered?
A: not covered (confidence: 0.76)
Q: What is the cost of the extended warranty?
A: Rs 3,500 (confidence: 0.94)
Q: What documents are needed for a warranty claim?
A: original purchase invoice and the product serial number (confidence: 0.91)
The confidence score reflects how certain the model is that the extracted span answers the question. Scores above 0.7 are generally reliable for production use. When the score is low, ShopMax India can fall back to showing customers a link to the full FAQ page rather than displaying a potentially incorrect answer. For multi-document QA, each document can be queried separately and the result with the highest confidence score can be returned as the final answer.
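The fallback and multi-document strategies above can be sketched as a small helper. The 0.7 threshold and the fallback URL are illustrative choices, not fixed values:

```python
# Multi-document QA with a confidence fallback: query each document
# separately, keep the highest-scoring answer, and fall back to the
# full FAQ page when no answer clears the threshold.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def answer_from_documents(question, documents, threshold=0.7,
                          fallback_url="https://example.com/faq"):
    # Run extractive QA against every document.
    results = [qa(question=question, context=doc) for doc in documents]
    # Keep the single highest-confidence answer across all documents.
    best = max(results, key=lambda r: r["score"])
    if best["score"] < threshold:
        # Low confidence: return a link rather than a possibly wrong answer.
        return {"answer": None, "fallback": fallback_url,
                "score": best["score"]}
    return best
```

A call such as `answer_from_documents("Is accidental damage covered?", [manual_text, faq_text])` would then return either the best extracted span or the fallback link, depending on the model's confidence.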