Skepticism toward Wikipedia articles is warranted for several reasons, notably the potential for outdated information. While human editors strive to maintain accuracy, they have limitations. Bots can assist with basic edits or address vandalism, but a more sophisticated solution may be on the horizon from MIT.
Researchers at MIT have developed an AI system that automatically updates outdated sentences in Wikipedia articles while preserving a human-like tone. This advanced machine learning system can seamlessly integrate rewritten content into existing paragraphs.
The AI is trained to identify discrepancies between the original Wikipedia sentence and a sentence containing the updated facts. When it detects a contradiction, it applies a "neutrality masker" to pinpoint which words to delete and which to keep. An encoder-decoder framework then rewrites the original sentence using simplified representations of both versions.
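To make the two-stage idea concrete, here is a minimal toy sketch in Python. The function names, the digit-based masking heuristic, and the mask-filling rule are all illustrative assumptions; the actual MIT system uses learned neural models for both the masker and the encoder-decoder, not hand-written rules.

```python
# Toy illustration of the two-stage pipeline described above.
# Stage 1 masks contradicted (fact-bearing) words in the original
# sentence; stage 2 stands in for the encoder-decoder that fills
# the masks from the updated sentence. Purely a sketch.

MASK = "<MASK>"

def neutrality_mask(original: str, updated_fact: str) -> list:
    """Replace likely-contradicted words with a mask token.

    Heuristic assumption: tokens containing digits are fact-bearing,
    and are masked if they do not appear in the updated sentence.
    """
    fact_words = set(updated_fact.lower().split())
    masked = []
    for tok in original.split():
        if any(ch.isdigit() for ch in tok) and tok.lower() not in fact_words:
            masked.append(MASK)
        else:
            masked.append(tok)  # "neutral" word: keep as-is
    return masked

def fill_masks(masked: list, updated_fact: str) -> str:
    """Stand-in for the encoder-decoder: fill each mask, in order,
    with a fact-bearing (digit-containing) token from the update."""
    new_facts = [w for w in updated_fact.split()
                 if any(ch.isdigit() for ch in w)]
    out, i = [], 0
    for tok in masked:
        if tok == MASK and i < len(new_facts):
            out.append(new_facts[i])
            i += 1
        else:
            out.append(tok)
    return " ".join(out)

original = "The company employs 1,500 people in 12 offices."
update = "It now employs 2,300 people in 18 offices."
print(fill_masks(neutrality_mask(original, update), update))
# → The company employs 2,300 people in 18 offices.
```

Note how the surrounding phrasing of the original sentence survives untouched, which is the point of masking only the contradicted words rather than regenerating the whole sentence.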
This technology also has the potential to enhance datasets for training fake news detectors, thereby reducing bias and increasing accuracy. However, the system is not yet fully refined. Human evaluations have given it average scores of 4 out of 5 for factual updates and 3.85 out of 5 for grammar—superior to many existing text-generation systems, though its output remains distinguishable from human writing.
With further improvements, this AI could become a valuable tool for making minor edits to Wikipedia, news articles, or other documents when human editors are unavailable.