Kingsley, Paul, List.
As I said, I've been a mathematician from way back, and I consider designs that can be
translated to and from other formats without loss of info to be functionally identical.
But I also worked at IBM for 30 years, and I learned about all the pain and suffering
caused by revisions and upgrades and supposed equivalences that aren't.
As an extreme example, consider the IBM SAGE computer for the US Strategic Air Command in
the 1950s, which was intended to detect and monitor all aircraft flying over North
America. It weighed 250 tons and occupied one acre of floor space. See
https://www.ibm.com/history/sage
When I joined IBM in the 1960s, I saw the actual engineering model in operation in
Kingston, NY. It was used to test all software upgrades and fixes for the machines that
were deployed at various installations.
The software designs for SAGE in the 1950s were adapted to the airline reservation system
for American Airlines in the 1960s, which ran on the IBM 7094. Later it was upgraded to
a System/360 version that included hotel and car reservations. That system was so
successful that competing systems worldwide adopted the same software conventions.
Today, any reservation that anybody makes worldwide is based on extensions of the
design decisions that were made for that 250-ton monster of the 1950s. Yet the chip
in your cell phone today has vastly more processing speed and storage capacity.
However, the system design and data structures at each stage were upward compatible with
the previous versions.
Summary: The SAGE data structure designs are still buried deep inside the latest and
greatest reservation systems today. The high-level design decisions last forever, but
the details at the bottom change with every major upgrade.
General principle: Implementation details are temporary. Logic is forever.
John
----------------------------------------
From: "'Kingsley Idehen' via ontolog-forum"
<ontolog-forum(a)googlegroups.com>
Hi Paul,
On 5/30/24 6:09 PM, Paul Tyson wrote:
John: thanks for the good explanation and your reasoned assessment of the W3C semantic
stack.
Kingsley: I agree with your points except regarding RDF/XML. Anyone who is still stuck on
it needs to read the RDF/XML spec, understand its design principles and limitations,
and then move on to other serializations that suit their needs and toolchains.
Yes, but my point is that W3C spec-publication issues have left confusing items in
public view that continue to confuse people or reinforce long-discarded misconceptions.
I would advise against adopting any RDF toolchain that does not at least read RDF/XML, and
preferably also write it, mainly to support integration with the XML toolchain, including
XSLT, XQuery, XProc, etc., enabling full integration of XML document corpora with RDF
datasets.
Sorta.
Our Virtuoso platform still makes extensive use of RDF/XML, but entirely outside the
user's view.
For a baseline example, a few dozen lines of XSLT will transform any XML document into its
infoset RDF representation (conveniently in RDF/XML, slightly less so in other
notations). Marry that up to other domain semantic data sources, and with very little
effort you have created a linked-data pool far more valuable than either the XML or the
RDF by itself. From there, the sky's the limit for expanding and exploiting your linked
data. (Not to say there aren't other ways to do this, but none so easy using
ready-to-hand tools and existing data sources.)
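[Editor's note: the infoset-to-RDF mapping described above can be sketched as follows.
Python is used here rather than XSLT purely for brevity; the ex: property names and the
URI scheme are invented for illustration and are not part of any standard mapping.]

```python
import xml.etree.ElementTree as ET

def infoset_triples(xml_text, base="http://example.org/doc"):
    """Walk an XML tree and emit (subject, predicate, object) triples
    describing its infoset: element names, text content, attributes,
    and parent/child structure. The ex: vocabulary is hypothetical."""
    root = ET.fromstring(xml_text)
    triples = []

    def walk(elem, subject):
        triples.append((subject, "ex:localName", elem.tag))
        if elem.text and elem.text.strip():
            triples.append((subject, "ex:text", elem.text.strip()))
        for name, value in elem.attrib.items():
            triples.append((subject, f"ex:attr-{name}", value))
        for i, child in enumerate(elem):
            child_id = f"{subject}/{i}"   # position-based child URIs
            triples.append((subject, "ex:child", child_id))
            walk(child, child_id)

    walk(root, base)
    return triples

doc = '<order id="42"><item sku="A1">widget</item></order>'
for t in infoset_triples(doc):
    print(t)
```

Once the document structure is expressed as triples like these, it can be loaded into
the same store as other RDF datasets and linked to them, which is the point being made
above.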
Yes, but that isn't the front door of RDF; i.e., it's an implementation detail.
I generally discourage narratives that lead to the misconception that XML is a mandatory
requirement for RDF :)
Kingsley
Regards,
--Paul