Stroll through time with the National Institute of Standards and Technology
The Evolution of Time Measurement through the Ages - A NIST Physical Measurement Laboratory Presentation:
Keep an eye out for special guest appearances by the Physical Measurement Laboratory (PML), Time and Frequency Division ;o) All content is available for reproduction, without any copyright restrictions whatsoever.
K. Higgins, D. Miner, C.N. Smith, D.B. Sullivan (2004), A Walk Through Time version 1.2.1. Permalink: physics.nist.gov/time (2010). National Institute of Standards and Technology, Gaithersburg, MD.
NIST’s version history is a temporal wonder in its own right. This chronicle of document control begins with the paper brochure drafted in 1974, continues with the transition to an online format in May 1995, and runs through successive updates to November 2010.
* I tried to think of a clever phrase or pun for my sub-heading, using “meta”. It was quickly apparent that:
meta + fantastic = meta-tastic ~ > metastasize
I realized that any turn of phrase that, even vaguely, evoked an aggressive carcinoma was not the right choice.
New York City Transit Authority Graphics Standards Manual
I might as well go whole-hog with the typeface and graphics theme today. I recently read the background story about this. The New York City Transit Authority’s Graphics Standards Manual had the lofty status of urban legend, except that it was real. It just wasn’t circulated, or sold, as the intent was for it to be used by those who needed it in the course of their work.
Anyway, someone finally got a copy of it and uploaded it, page by page, online. That’s when the rant began. Supposedly (I haven’t checked for myself), the scans were too small to actually read any of the content. The goal of whoever uploaded it was to make nice web page layouts, or perhaps to optimize page load time and resource usage. However, because the scans lacked sufficient detail, the content of each page remained just as inaccessible as it was before being made available online. Of course, that might not be correct. If it were, it would be a particularly scathing indictment of style over substance.
I use DOIs all the time for citations. They are so useful. I’m glad that ISO finally acknowledged the DOI as a standard.
DOI prevents link rot!
Applications of the DOI system include (but are not limited to) managing information and documentation location and access; managing metadata; persistent unique identification of any form of any data…
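The reason DOIs prevent link rot is the indirection: a DOI names the object, not its location, and the doi.org proxy redirects to wherever the object currently lives. A minimal sketch of that first step, building the resolver URL from a DOI string (using the DOI Handbook’s own DOI, 10.1000/182, as the example):

```python
# A DOI is resolved by prefixing it with the doi.org proxy; the
# resolver then redirects to the object's current location, which is
# how a DOI survives when the underlying URL changes.

def doi_to_url(doi: str) -> str:
    """Build the resolver URL for a DOI such as '10.1000/182'."""
    doi = doi.strip()
    if not doi.startswith("10."):
        raise ValueError("DOIs begin with the '10.' directory indicator")
    return "https://doi.org/" + doi

print(doi_to_url("10.1000/182"))  # → https://doi.org/10.1000/182
```

Issuing an HTTP request to that URL and following the redirect yields the current landing page; the identifier in the citation never has to change.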
We really, REALLY need better standards!
I’m not being critical of this chart by the former Aptimize (which was acquired by someone or other, Riverbed? Charles River? No, they make household linens… I think).
It is truly good that companies such as Aptimize exist, and kindly publish their work. It provides a frame of reference for internet performance benchmarking (and context over time). Even with apples-to-oranges comparisons as a hazard, well, I won’t belabor the point.
We now have Graphite and StatsD, thanks to Etsy, which is good for gauging the web, and websites. Yet even venerable Hurricane Electric, otherwise known by its rather patriarchal-sounding name
(although they are mild, friendly, non-aggressive, not testosterone-fueled people, which is not to say they are effeminate, which is not to say there is anything wrong per se with being effeminate… well, I think it has derogatory connotations… moving on now)
published some charts a few days ago. It was general status stuff, what with IPv6 FOREVER right around the corner in June.
The charts seemed to indicate that
10% of ALL dot-com websites are missing A records
Is that important?
I’m not sure, but I think it is. Of course, HE.net might have made an arithmetic or spreadsheet error…
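An A record is just the DNS entry mapping a hostname to an IPv4 address, so a domain "missing" one simply won't resolve for IPv4 clients. A hedged sketch of how one might spot-check a single hostname with Python's standard library (a real survey like HE.net's would query the DNS zone directly rather than go through the OS resolver):

```python
import socket

def has_a_record(hostname: str) -> bool:
    """Return True if the system resolver finds an IPv4 address for
    hostname. Caveat: this uses the OS resolver, so /etc/hosts entries
    count too; it approximates, but is not identical to, an A-record
    lookup against the zone's authoritative servers."""
    try:
        socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        return True
    except socket.gaierror:
        return False

# localhost resolves via the hosts file, so no network is needed:
print(has_a_record("localhost"))  # → True
```

Run over a sample of registered dot-com domains, the fraction returning False would be a rough cross-check on the 10% figure.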
I thought this was beautiful. I can so identify with the sense of accomplishment and satisfaction achieved through doing things this way.
via Infovore » Compliant
22 August 2004
Every now and then, as part of my job, I get to work on building websites… I’ve begun work on a new site… we’ve started from the ground-up. In this case, we’ve also gone with a web-standards based design.
And suddenly, I’m into the learning game, starting from scratch again, checking out resources and remembering what tag goes where. This is the kind of thing I love; knowing what result I want but having to remember how to do it, and thus having to research again, and hopefully drill the code deeper into my memory. The site is nearly done; basic layout, design choices, usability – is all there.
It’s been really satisfying to work on – seeing something come to fruition – and even though I’ve worked with standards-based design before, it was remarkable just how fast it all developed….
I recommend reading the entire post. It isn’t too long, nor uber-techy. And it IS a good website, from 2004 right on through to today!
Somewhat long-winded article about a very important topic: Consistent standards and definitions.
The next post in the series seems promising. The author says she will be describing an example of the challenges faced by a company with dozens of “Employee ID” definitions. I will try to remember to post the link!
I love Google products, I am a happy user of Google+, but I don’t like this decision.
Corporate governance and capital structure are based on standards for a reason: To enable comparisons between similar things. Owning common stock has a well understood definition, in terms of rights, claim on assets etc. Basically, there are voting and dividend rights, but no seniority on assets in event of liquidation. Fine.
If Page, Brin, and a few other Google executives want more control, to be less accountable to shareholders, they can do so, using methods that are well understood by accountants, and by investors. Why not buy back a big chunk of the outstanding shares, the ones that have 1:1 (share:vote) rights? That would be more honest.
There are many ways of accomplishing the same thing, without the confusion of these “non-voting” shares. What’s the voting status after stock splits (now and in the future)? Same for employee stock incentive programs. Sounds like a mess.
Why own a non-voting class of stock anyway? What other caveats are there? This isn’t a good precedent. It decreases transparency of capital markets. The reason that the NYSE, NASDAQ etc have the preeminence they do is largely due to their transparency and liquidity. I don’t want us to head in the opposite direction, away from that.
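The mechanics of how a multi-class structure splits economic ownership from control are easy to see with arithmetic. The numbers below are hypothetical, chosen only to illustrate the pattern (they are NOT Google's actual share counts): Class A carries 1 vote per share, Class B carries 10, and the new Class C carries 0.

```python
# Hypothetical cap table: (shares outstanding, votes per share).
# These figures are illustrative, not Google's actual numbers.
classes = {
    "Class A": (260_000_000, 1),
    "Class B": (60_000_000, 10),
    "Class C": (320_000_000, 0),
}

total_shares = sum(s for s, _ in classes.values())
total_votes = sum(s * v for s, v in classes.values())

for name, (shares, votes_per) in classes.items():
    econ = shares / total_shares
    control = shares * votes_per / total_votes
    print(f"{name}: {econ:.1%} of equity, {control:.1%} of votes")
```

With these made-up figures, the Class B holders own under 10% of the equity yet command roughly 70% of the votes, and every Class C share issued (for acquisitions or employee compensation) dilutes everyone's economics while diluting no one's control except the other non-voting holders. That asymmetry is exactly the complaint.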
Even as the founders continue a plan to sell some of their shares over the next three years through a program they enacted in 2009, they have developed a plan to retain an iron grip over Google.
This is a follow-up, written by me (full disclosure) to the recent New York Times Borderlines Opinionator post, The First Google Maps War.
On the hazards of crowd-sourced reference tools
This is problematic whether the data are open (e.g., OpenStreetMap) or not (e.g., Google Maps, which now charges commercial rates for high API usage).
Either OpenStreetMap or Google Maps is probably just fine for FourSquare users. But the same isn’t true for setting territorial boundaries in Asia or South America, nor for disputed street names in a small Vermont town.
Financial regulation assortment
Risk mitigation for European sovereign debt
Alea has embedded Twitter content very nicely on his Tumblr. I haven’t set it up properly here. Go have a look at Alea’s tumblr if you’re curious.
Risk mitigation for OTC Derivatives
Here’s something from the BIS (Bank for International Settlements), released on 24 January 2012:
Report on OTC derivatives data reporting and aggregation requirements, CPSS Publications No. 100.
CPSS = Committee on Payment and Settlement Systems.
I like settlements and clearing, as it is something I know about, more so than economic theory (that is a story for another day…).
Anyway, this is the final report on the OTC derivatives data that should be collected, stored and made available to all by trade repositories. It is part of the overall program by regulatory agencies, including the Financial Stability Board (FSB), to bring about OTC derivatives market reform.
Here’s a summary from the announcement (the entire 70+ page document is available for download, see link above):
By collecting such data centrally, [trade repositories] would provide authorities and the public with better and more timely information on OTC derivatives. This would make markets more transparent, help to prevent market abuse, and promote financial stability.
The report was expanded to elaborate on the description of possible options to address data gaps…and updated to reflect recent international developments in data reporting and aggregation requirements stemming from the Legal Entity Identifier (LEI)… in support of a request by the G20 at the Cannes Summit, to advance the development of a global LEI.
* Emphasis is mine.
The matter of LEIs is interesting. At the moment, we have
- Bloomberg Open Symbology
- all the established global financial security identifiers already in use, e.g. ISIN, CUSIP, SEDOL
Don’t forget XBRL, the eXtensible Business Reporting Language, which is more of a framework than merely an identifier.
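One concrete way the LEI differs from a bare proprietary code: ISO 17442 specifies a 20-character identifier whose last two characters are check digits, computed with the same ISO 7064 MOD 97-10 scheme used for IBANs, so any consumer can validate an LEI offline. A sketch of that scheme (the 18-character base below is made up purely for illustration):

```python
def _to_digits(s: str) -> int:
    # ISO 7064 MOD 97-10: letters map to 10..35, digits stay as-is.
    return int("".join(str(int(c, 36)) for c in s))

def lei_check_digits(base18: str) -> str:
    """Compute the two check digits for an 18-character LEI prefix."""
    return f"{98 - _to_digits(base18 + '00') % 97:02d}"

def lei_is_valid(lei: str) -> bool:
    """A valid LEI is 20 alphanumeric characters and its MOD 97-10
    remainder is exactly 1."""
    return len(lei) == 20 and lei.isalnum() and _to_digits(lei) % 97 == 1

# Self-consistent demo on a hypothetical 18-character base:
base = "5493001KJTIIGC8Y1R"  # made-up prefix, for illustration only
lei = base + lei_check_digits(base)
print(lei, lei_is_valid(lei))
```

Because the check digits are derived from the base, the constructed code always validates, while corrupting any single character breaks it; that built-in verifiability is part of what makes a single global identifier attractive next to the patchwork of existing symbologies.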
There is certainly a lot of activity promoting standardization. I would expect some sort of standardization of standards on the horizon, possibly after the Euro crisis is less of a crisis than it is now. I hope that will happen soon.