Digital Audio on Analog Visual Information

London Reconnections, a London-based magazine and site, has a podcast, the second episode of which is on transit maps. I know that audio on the impact of a strictly visual medium sounds odd, but it’s worth a listen. To hear it, check this link or go below the fold.

There is no ISO standard for transit maps.

Continue reading “Digital Audio on Analog Visual Information”

Why TCIP sucks (and what can be done to fix it)

In the past few weeks, I’ve gotten a number of questions from a variety of sources about TCIP. Transit Communications Interface Profiles (TCIP) is ‘the FTA and APTA’s decades-old project that includes specifications for all manner of technology systems in the transit industry.’

If you want to stop reading now, just take my word that it’s not that great and rarely used. If you’re interested in learning why, and what I think should be done about it, read on.

Continue reading “Why TCIP sucks (and what can be done to fix it)”

Stats and Common Sense

Via O’Reilly, a good short introduction to statistical thinking for real-world situations, and how to describe results without being misleading (hello, Swyft!).

Common sense tells us that 2 + 2 will always be 4. We can compile that code and run 2 + 2 over and over and the answer will always be 4. But when we try to measure some phenomenon in the real world it’s often impossible to measure everything, so we end up with some slice or sample and we have to accept an approximation. In other words, when observing 2 + 2 in a huge and complex system, it rarely adds up to precisely 4.


Consumers and producers of popular research each have a role to play. Consumers should expect a level of transparency, and researchers should work hard to earn both the trust and attention of the reader. A good popular report should:

State how much data is being analyzed: Using statements like “hundreds of data compromises” isn’t that helpful since the difference in strength between 200 and 900 samples is quite stark. Though it’s much worse when the sample size isn’t even mentioned, and this is a deal breaker. Popular research should discuss how much data was collected.

Describe the data collection effort: Researchers should not hide where their data comes from nor should they be afraid of discussing all the possible bias in their data. This is like a swimmer not wanting to discuss getting wet. Every data sample will have some bias, but it’s exactly because of this that we should welcome every sample we can get and have a dialogue about the perspective being represented.

Define terms and categorization methods: Even common terms like event, incident and breach may have drastically different meanings for different readers. Researchers need to be sure they aren’t creating confusion by assuming the reader understands what they’re thinking.

Be honest and helpful: Researchers should remember that many readers will take the results they publish to heart: decisions will be made, driving time and money spent. Treat that power with the responsibility it deserves. Consumers would do well to engage the researchers and reach out with questions. One of the best experiences for a researcher is engaging with a reader who is both excited and willing to talk about the work and hopefully even make it better.

Finally, even though we are really good at public shaming and it’s so much easier to tear down than it is to build up, we need to encourage popular research because even though a research paper has bias from convenience sampling or doesn’t match up with the perspective you’ve been working with, it’s okay. Our ability to learn and improve is not going to come from any one research effort. Instead the strength in research comes from all of the samples taken together. So get out there, publish your data, share your research, and celebrate the complexity.
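To put the sample-size point above in concrete terms: the uncertainty of an estimated proportion shrinks only with the square root of the sample size. Here is a quick back-of-the-envelope sketch in Python (my illustration, not from the article; it assumes a simple random sample and a worst-case proportion of 0.5):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% normal-approximation confidence
    interval for a proportion p estimated from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (200, 900):
    print(f"n={n}: ±{margin_of_error(0.5, n):.3f}")
# n=200: ±0.069  (about 7 points either way)
# n=900: ±0.033  (about 3 points)
```

Going from 200 to 900 samples cuts the uncertainty roughly in half, which is exactly why “hundreds of data compromises” on its own tells the reader so little.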


Worth a Read: The Unreliable Bus

Four posts explaining, in near-layman’s terms, the factors that affect service reliability. The author goes into more detail than I did on the subject.

In this first post, I am going to try and convince you that while on the surface this complaint seems simple enough, addressing it is a devilishly tricky problem. After we understand what the problem actually is, we can start looking at ways to fix it.
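For a taste of why it’s tricky, one standard result from the transit literature (my illustration, not necessarily the author’s framing): for passengers who show up at random, the expected wait depends on how variable the headways are, not just their average. A minimal sketch in Python:

```python
def expected_wait(mean_headway: float, headway_std: float) -> float:
    """Expected wait for passengers arriving at random:
    E[W] = (H / 2) * (1 + CV**2), where CV is the coefficient
    of variation (std / mean) of the headways."""
    cv = headway_std / mean_headway
    return (mean_headway / 2.0) * (1.0 + cv ** 2)

# Even service: 10-minute headways, no variation -> 5.0 minute average wait
print(expected_wait(10.0, 0.0))
# Bunched service: same average headway, std of 5 minutes -> 6.25 minutes
print(expected_wait(10.0, 5.0))
```

The punchline: bunching makes riders wait longer even when the average headway is untouched, which is why “just run more buses” doesn’t fix reliability complaints.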

Background Nerd Reading: On Predictions

This paper goes into detail about the most common arrival prediction algorithm: one built on the published schedule. The algorithm is not perfect, but it is a substantial improvement on the schedule alone:

This scheme was found to systematically underestimate the remaining waiting time by 6.2% on average. The provision of real-time information yields a waiting time estimate that is more than twice closer to the actual waiting times than the timetable is. This difference in waiting time expectations is equivalent to 30% of the average waiting time.

Oded Cats & Gerasimos Loutos, “Real-Time Bus Arrival Information System: An Empirical Evaluation,” Journal of Intelligent Transportation Systems, 2015.
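For the nerds who want to see it: a minimal sketch of the schedule-deviation idea the paper evaluates, i.e. propagate the vehicle’s current lateness to the downstream stop. This is my simplification, not the paper’s code, and the names are illustrative:

```python
from datetime import datetime

def predict_arrival(scheduled_arrival: datetime,
                    scheduled_at_last_stop: datetime,
                    observed_at_last_stop: datetime) -> datetime:
    """Propagate the vehicle's current schedule deviation forward:
    if it passed the last timepoint 3 minutes late, predict it will
    arrive 3 minutes late at the downstream stop too."""
    deviation = observed_at_last_stop - scheduled_at_last_stop
    return scheduled_arrival + deviation

# Example: due at 12:10, but passed the 12:00 timepoint at 12:03.
eta = predict_arrival(datetime(2015, 6, 1, 12, 10),
                      datetime(2015, 6, 1, 12, 0),
                      datetime(2015, 6, 1, 12, 3))
print(eta)  # 2015-06-01 12:13:00
```

The weakness the paper quantifies follows directly from the design: the deviation keeps changing after the last observation, so the prediction is systematically off, just far less off than the bare timetable.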

On the emergence of a Global ITS Architecture

This is the second in a series of posts on what I’m calling the Global ITS Architecture. The first gave background on the National ITS Architecture.

Below the fold, edited notes from a talk I gave last year on “Have Open Standards Delivered ITS Architectures that Matter?” Hint: I argue Yes.

Note that I engage in some of the lies of storytelling; they are flagged where they occur.


I’m going to take a strong stand on this question and make three arguments on my way to an answer:

  1. The evolution of ITS has seen the scale of information distribution grow over time.
  2. The private sector has really driven the state of the art for passenger information.
  3. In this space, the work put into National ITS standards has not had a good ROI.

First, a bit of history. Electronic passenger information is not a 21st-century invention. The first subway in America, in Boston, had electronic passenger information signs in the 1890s.

But both the provision and reach of information were severely limited. Driving this was someone behind a switchboard.

Fast forward to the end of the 20th century. We have systems that make passenger information available at a city-wide scale. Here is one such sign.

[Photo: a real-time passenger information sign in Helsinki]

While the data were available systemwide, in the beginning they were only available to the agency and through this manufacturer’s product ((Lie of omission: HSL in Helsinki is an awesome data provider)). The only reason we know what the interface for these signs looks like is a Finnish hacker: she was playing around with a spectrum analyzer, found something interesting, and decided to take a look.

Growing Pains

At the dawn of the 21st century, after deploying a host of single-vendor solutions, the industry began to understand the downsides of end-to-end functionality from one vendor. A panel of experts on passenger information convened and, after a standard standards-development process, TCIP was born: the good, the bad, and the ugly.

  • Good: Standards Process
  • Bad:
    • No major implementations
    • Backward looking: Year 2000 technology (Not thought through for web / mobile clients)
  • Ugly: Documentation “cumbersome at best, impossible at worst”

Because of these issues, no one has used the Passenger Information messages at large scale. No one took this standard as an example and said, “hey, I want to build a system with that.”

Contrast this with the NTCIP standards, where the documentation is succinct and accessible.

Meanwhile…

The tech sector has taken a separate track.

In 2006 Google finalized the first version of GTFS. This was part of its move from a search company to a data company.
Rather than trying to understand the national ITS diagram, they addressed this as a tech company would: in an agile fashion, with a minimum viable product first. They worked with one agency, TriMet in Portland, to iterate on a working product before putting it out.
Put simply, GTFS proved that without a flagship project, a standard is worthless.
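Part of why it worked is how simple the format is: a GTFS feed is just a zip of plain CSV text files. The file and field names below (stops.txt, stop_name) come from the published spec; the loader itself is a minimal sketch of mine:

```python
import csv
import io
import zipfile

def load_stops(gtfs_zip_path: str) -> list[dict]:
    """Read stops.txt out of a GTFS feed: a zip of plain CSV files."""
    with zipfile.ZipFile(gtfs_zip_path) as z:
        # utf-8-sig tolerates the byte-order mark many feeds include
        with io.TextIOWrapper(z.open("stops.txt"),
                              encoding="utf-8-sig") as f:
            return list(csv.DictReader(f))

# stops = load_stops("trimet_gtfs.zip")  # hypothetical file name
# print(stops[0]["stop_name"])
```

Any developer with a standard library can consume a feed in a dozen lines; contrast that with reading hundreds of pages of TCIP documentation before writing a line of code.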

Google saw a world of opportunities for data, and they have driven the market ever since. But will they continue to do so?

I’ll address that in part 2.

Primer: The National ITS Architecture

This is background material for a subsequent post.

The [US] National ITS Architecture provides a common framework for planning, defining, and integrating intelligent transportation systems. It is a mature product that reflects the contributions of a broad cross-section of the ITS community (transportation practitioners, systems engineers, system developers, technology specialists, consultants, etc.).

The architecture defines:

  • The functions (e.g., gather traffic information or request a route) that are required for ITS
  • The physical entities or subsystems where these functions reside (e.g., the field or the vehicle).
  • The information flows and data flows that connect these functions and physical subsystems together into an integrated system.

(Iteris)
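To make that taxonomy concrete, here is a toy model of the three building blocks just quoted. The class and instance names are my own simplification, not the official architecture:

```python
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    """A physical entity where functions reside,
    e.g. a center, the field, or the vehicle."""
    name: str
    functions: list[str] = field(default_factory=list)

@dataclass
class InformationFlow:
    """A connection carrying data between two subsystems."""
    source: Subsystem
    destination: Subsystem
    payload: str

# Hypothetical instances; names are illustrative, not official.
vehicle = Subsystem("Transit Vehicle", ["report location"])
center = Subsystem("Transit Management Center",
                   ["gather vehicle data", "publish arrivals"])
flow = InformationFlow(vehicle, center, "vehicle location data")
print(f"{flow.source.name} -> {flow.destination.name}: {flow.payload}")
```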

Basically, the National ITS Architecture brings taxonomy to Intelligent Transportation Systems, all through the confusing lens of the typical American engineer (cf. the usage of the acronym ISP). It looks like this:

[Diagram: The National ITS Architecture]

The ne plus ultra of the National ITS Architecture is the connected vehicle. Below the fold, a shiny video.


Personally, I remain cynical about the next big thing.

For more on the National ITS Architecture (and a reminder that slides make bad documents), see here. If you need bedtime material, there is a ~300-page theory of operations from USDOT.