GTFS-Realtime, maybe a little easier

GTFS-Realtime is an efficient means of interchanging large amounts of data on the state of a public transport network. The combination of its efficiency and its implementation results in a format that is difficult for newcomers to understand. This is made even worse by the fact that the documentation is intermixed with code.

This repository presents the same information (along with several extensions) as hypertext, which may be easier to understand. See that documentation here.

What you measure and what you seek to control

An excellent parable on the goals to set when looking toward the future. In the coming years, this story should be on the minds of anyone working in the autonomous vehicle space.

Britain was set to dominate the jet age. In 1952, the de Havilland Comet began commercial service, triumphantly connecting London with the farthest reaches of the Empire. The jet plane was years ahead of any competitor, gorgeous to look at, and set new standards for comfort and quiet in the air. Then things went horribly wrong.

In 1953 a Comet fell out of the sky, and the crash was attributed to bad weather and pilot error. …In 1954, a second Comet fell out of clear skies near Rome. The fleet was grounded for two months while repairs were made. Flights then resumed with the declaration, ‘Although no definite reason for the accident has been established, modifications are being embodied to cover every possibility that imagination has suggested as a likely cause of the disaster. When these modifications are completed and have been satisfactorily flight tested, the Board sees no reason why passenger services should not be resumed.’ Four days after these words were written, a third Comet fell into the sea out of clear skies near Naples, and the fleet was grounded again indefinitely.
Continue reading “What you measure and what you seek to control”

Rant: Why the Age of a Technology Doesn’t Matter.

There have been a number of mentions on the Internets recently that the history of a technology has some impact on how that technology serves a need. But does it? I argue that linking the age or history of a technology with its usefulness is a fallacy. I’ll reference the awesome Your Logical Fallacy Is when calling out the problems in this argument.

I was first rankled by an article in Dissent on Bus Rapid Transit, which uses the Genetic logical fallacy to suggest that BRT benefits the petroleum/asphalt industries, has been promoted by them, and is therefore bad. Jarrett Walker has a good response piece, which concludes:

The mixed motives that underlie BRT advocacy don’t tell us anything about where BRT makes sense, any more than the mixed motives behind rail advocacy do.

Soon after, TransitCenter made a blog post that mentioned the following:

New York’s MTA, the largest transit agency in the country, relies on fare payment technology invented in the 1960s.

There are a fair number of problems with the Metrocard system, and I’ll reference those at the tail of this post. But the fare payment technology invented in the 1960s is not the whole technology. The referenced technology (the magnetic stripe) is only a part of the system. This is a logical fallacy known as Composition/Division (https://yourlogicalfallacyis.com/composition-division). Truth be told, there are other technologies used in the Metrocard system that have been developed in every decade since– even the current one!

My greatest frustration with this particular claim is that the “new” technology that would replace the magnetic stripe, “Ticketless,” is itself more than a decade old. I first saw it in Italy in 2005.

http://web.archive.org/web/20051206033411/http://www.trenitalia.com/it/orari_biglietti/ticketless/index.html

“… You can receive a free SMS on your phone with the receipt of the transaction. Once aboard the train, it is enough to give the received code to the crew…”

You might say, “SMS? But now we have apps!” Well, in 2008, Trenitalia released an app for mobile tickets.

So, the next generation of technology to be deployed is a decade old… oh noes!

Not to be outdone, this quote from New York’s new Streetcar Czar raised the same ire.

“The subway was a 20th-century technology. Streetcars are a 21st-century technology, which is why all the fastest-growing cities in Asia and the Middle East are all looking at them.”

The glaring problem with that is that electrically powered streetcars running on the street actually predate the Subway by a decade. Oh, and Streetcars and Subways both use steel wheels over steel rails, a combination developed in the 1860s.

The real questions to ask regarding technology’s continued usefulness:
1. Is it easily maintained?
2. Does it actually serve the needs placed upon it?

Or, as Jeff Atwood (https://blog.codinghorror.com/the-magpie-developer/) put it:

Don’t feel inadequate if you aren’t lining your nest with the shiniest, newest things possible. Who cares what technology you use, as long as it works, and both you and your users are happy with it?

For the Metrocard, the answer to the former is definitely no: crucial components of the system are tied to equipment that is not easily replaced. A strong case can be made that it falls short of the latter as well, since it places a limit on capacity on buses and at crowded subway stations.

Digital Audio on Analog Visual Information

London Reconnections, a London-based magazine and site, has a Podcast, the second episode of which is on Transit Maps. I know that audio on the impact of a strictly visual medium sounds odd, but it’s worth a listen. To hear it, check this link or go below the fold.

There is no ISO standard for transit maps.

Continue reading “Digital Audio on Analog Visual Information”

Why TCIP sucks (and what can be done to fix it)

In the past few weeks, I’ve gotten a number of questions from a variety of sources about TCIP. Transit Communication Interface Profiles (TCIP) is ‘the FTA and APTA’s decades-old project that includes specifications for all manner of technology systems in the transit industry.’

If you want to stop reading now, just take my word that it’s not that great and rarely used. If you’re interested in learning why, and what I think should be done about it, read on.

Continue reading “Why TCIP sucks (and what can be done to fix it)”

Stats and Common Sense

Via O’Reilly, a good short introduction to statistical thinking for real-world situations, and how to describe results without being misleading (hello, Swyft!).

Common sense tells us that 2 + 2 will always be 4. We can compile that code and run 2 + 2 over and over and the answer will always be 4. But when we try to measure some phenomenon in the real world it’s often impossible to measure everything, so we end up with some slice or sample and we have to accept an approximation. In other words, when observing 2 + 2 in a huge and complex system, it rarely adds up to precisely 4.
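The excerpt’s point can be illustrated with a quick simulation (my own toy sketch, not from the original article): draw repeated samples of a noisy measurement whose true value is 4, and watch each sample mean come out close to, but rarely exactly, 4.

```python
import random
import statistics

random.seed(42)

# Toy "real world": the true quantity is 4, but every
# observation carries some measurement noise.
def observe():
    return 4 + random.gauss(0, 0.5)

# Each sample of 100 observations yields an approximation of 4,
# and the approximations differ from sample to sample.
sample_means = [
    statistics.mean(observe() for _ in range(100))
    for _ in range(5)
]

for m in sample_means:
    print(round(m, 3))
```

Each printed mean hovers near 4 without landing on it exactly, which is all the quote is claiming: samples give approximations, not certainties.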


Consumers and producers of popular research each have a role to play. Consumers should expect a level of transparency, and researchers should work hard to earn both the trust and attention of the reader. A good popular report should:

State how much data is being analyzed: Using statements like “hundreds of data compromises” isn’t that helpful since the difference in strength between 200 and 900 samples is quite stark. Though it’s much worse when the sample size isn’t even mentioned, and this is a deal breaker. Popular research should discuss how much data was collected.

Describe the data collection effort: Researchers should not hide where their data comes from nor should they be afraid of discussing all the possible bias in their data. This is like a swimmer not wanting to discuss getting wet. Every data sample will have some bias, but it’s exactly because of this that we should welcome every sample we can get and have a dialogue about the perspective being represented.

Define terms and categorization methods: Even common terms like event, incident and breach may have drastically different meanings for different readers. Researchers need to be sure they aren’t creating confusion by assuming the reader already understands what they mean.

Be honest and helpful: Researchers should remember that many readers will take the results they publish to heart: decisions will be made, driving time and money spent. Treat that power with the responsibility it deserves. Consumers would do well to engage the researchers and reach out with questions. One of the best experiences for a researcher is engaging with a reader who is both excited and willing to talk about the work and hopefully even make it better.
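The first point, about sample size, can be made concrete with a back-of-the-envelope calculation (my sketch, not from the article): the 95% margin of error of an estimated proportion shrinks with the square root of the sample size, so 900 samples give a markedly tighter estimate than 200.

```python
import math

# Rough 95% margin of error for a proportion p estimated from
# n samples: 1.96 * sqrt(p * (1 - p) / n).
# We assume the worst case, p = 0.5.
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (200, 900):
    print(n, round(margin_of_error(n), 3))
# 200 samples -> roughly +/- 6.9 points
# 900 samples -> roughly +/- 3.3 points
```

That is the “quite stark” difference in strength the quote refers to, and it is exactly why a report that never states its sample size leaves the reader unable to judge the result at all.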

Finally, even though we are really good at public shaming and it’s so much easier to tear down than it is to build up, we need to encourage popular research because even though a research paper has bias from convenience sampling or doesn’t match up with the perspective you’ve been working with, it’s okay. Our ability to learn and improve is not going to come from any one research effort. Instead the strength in research comes from all of the samples taken together. So get out there, publish your data, share your research, and celebrate the complexity.