
A Thought Experiment on Bike Commuting Safety

by Ted Johnson

Josh King’s guest post, “10 Rules for Urban Commuting” sparked a lively debate, not just in our comments, but across the blogosphere. Well, the bike-commuter corner of the blogosphere anyway.

If you missed it, the essence of the article (if I should hazard a summary) is this: If safety and survival are paramount to an urban bike commuter, then laws and civility are secondary niceties. King was praised as well as criticized for his style and substance, and people reacted strongly to both.

Let’s Go Ride a Bike posted a thoughtful and passionate critique of King’s article, and its “macho tone.” This article drew more comments than King’s original article. (And yes, we’re a little jealous about it.)

Here at our world headquarters, we’ve had multiple discussions about King’s 10 rules (plus one), as well as the reaction to it. One of those discussions developed into this thought experiment.

Thought Experiment: The Bike Commuting Robot

Think of the most fragile and precious thing in your life that could be carried on a bike. It could be your child, your cat, or your Tom Selleck collector plate. This precious thing is to be carried along your regular bike commute, during the time of day with the busiest automobile traffic, on a bike operated by a robot.

Trust me.

For example, you can tell the robot to:

  • Obey all traffic laws, with no exceptions.
  • Obey all traffic laws, except for certain nonsensical ones.
  • Obey all traffic laws unless certain situations arise.
  • Be indifferent to traffic laws, but stay upright and avoid collisions.

And in case you are looking for loopholes: The robot looks like a human (so it won’t draw the attention of anti-robot militias). It is also no stronger or faster than you, and has the same reaction time as you. Like you, it can usually tell the difference between a person, a dog, a car, a mailbox, etc. No, it can’t fly. It’s unarmed. It can’t turn invisible, or transform itself into a tank, a sofa, or anything else. Enough looking for loopholes! Suffice it to say that the robot has the physical abilities of a human, but it will behave exactly as you tell it to behave, with no judgment.

Oh, and you will be liable for any damage or injury caused by the robot.

Your precious thing is loaded, and the robot awaits your instructions.
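If it helps to make the instructions concrete, the four options above could be sketched as selectable policies for a hypothetical robot controller. Everything here (the `Policy` names, the `nonsensical` and `dangerous` flags) is invented for illustration, not part of any real system:

```python
from collections import namedtuple
from enum import Enum, auto

# Hypothetical stand-ins for a traffic law and a traffic situation.
Law = namedtuple("Law", "nonsensical")
Situation = namedtuple("Situation", "dangerous")

class Policy(Enum):
    """The four instruction sets described above."""
    STRICT = auto()            # obey all traffic laws, no exceptions
    SKIP_NONSENSICAL = auto()  # obey all laws except certain nonsensical ones
    SAFETY_OVERRIDE = auto()   # obey all laws unless certain situations arise
    SURVIVAL_ONLY = auto()     # ignore laws; stay upright, avoid collisions

def should_obey(policy, law, situation):
    """Return True if the robot follows `law` in `situation`."""
    if policy is Policy.STRICT:
        return True
    if policy is Policy.SKIP_NONSENSICAL:
        return not law.nonsensical
    if policy is Policy.SAFETY_OVERRIDE:
        return not situation.dangerous
    return False  # SURVIVAL_ONLY: the law carries no weight at all
```

The point of the experiment is choosing which policy you would hand the robot; the hard part, as the comments below show, is everything hidden inside flags like `dangerous`.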


20 Responses to “A Thought Experiment on Bike Commuting Safety”

  1. Josh King says:

    Easy and obvious answer – thank you for proving my point.

    BTW, my “10 Rules” were inspired by a couple of riders I regularly see on my commute who are zealots about traffic rules compliance but regularly do unsafe things like hug the door line.

  2. rich says:

    Kind of a stupid experiment. Imagine that all the cars are also driven by robots. How would you program them? When you come right down to it, rules give us guidance for general behavior. Judgment is what really matters when situations arise.

    • Ted Johnson says:

      @rich Cars driven by robots are not part of this thought experiment. [Sigh]

      Would you say then that in most situations, obeying the law is a substitute for judgment? Would you say that in some situations obeying the law would be a demonstration of poor judgment?

      On my daily commute, I ride on a sidewalk for about 50 yards, against the flow of traffic on a one-way street. If my robot weren’t allowed to do this, it would have to negotiate three extra traffic lights, two extra left turns, and spend an extra half mile mingling with cars. In which scenario is a situation more likely to arise where critical human judgment would make the difference for a safe arrival?

  3. It rather depends on the programming of the robot drivers.

  4. Hippiebrian says:

    If it were programmed to obey all traffic laws unless certain situations arise, we could ignore being indifferent, still arrive safe, and not tick off any cage drivers. Easy.

  5. harl says:

    Traffic rules provide some degree of predictability, which allows us to avoid total traffic chaos. Generally, they should be followed if the goal is safety.

    However, not every driver follows them and we have to watch for them, which can lead to “innovative” solutions for traffic rules (I have one very poorly designed freeway off-ramp to negotiate that puts cars at near freeway speeds directly into the right lane of an arterial with only a merging sign for protection; this requires lots of caution and some creativity). At other times, blindly following rules can lead to dangerous situations (waiting in the middle of a lane close to a truck can put us in the proverbial blind spot, making it more reasonable to edge close to an adjacent lane). So, special situations require us to adapt to stay safe and violate some traffic rules.

    As I watch my fellow bicyclists, I would say most of the “traffic violations” have nothing to do with safety, but rather they are for expediency–the classic being running red lights. Most of the situations I see are done within margins of safety (not for the purposes of being safe) and cause little direct harm. They do, however, lead to less patience for and acceptance of cyclists on the part of some drivers who see these cases as excuses for some pretty bone-headed attempts to teach me a lesson–I have had my share of honks, near misses, fingers, and other assorted rudeness after 40 years of commuting.

At times, it seems like a losing battle, but I ride within the rules both in an attempt to be safe and to help improve non-cyclists’ view of cycling.

  6. Josh Lipton says:

    Most reasonable, law-abiding citizens would program the computer based on their perception of the preciousness and fragility of the bike’s passenger or cargo and the inherent level of risk of the bike commuting trip they were being sent on.

    The perception of fragility and risk are difficult to quantify in real world situations. If our robot bike commuter somehow was able to accurately calculate risk levels, quantified risk choices could be made.

The debate over how to balance safety with strictly following traffic laws seems to get mired in the difficulty of quantifying risky behavior. Most cyclists perceive that they’ve figured out the safest way to handle the risks of the road. Naturally, cyclists who take a different approach to safety seem to be making a riskier choice.

If data could be collected that really revealed the full picture of cyclists’ behavior and the associated risk factors in a broad variety of situations, the uproar on this side of the debate would be reduced to a discussion of how accurate the data really was, along with a continued but more boiled-down discussion of the ethics behind bike commuters’ choices to balance personal risk against strict adherence to traffic laws.

  7. Seville says:

    I would have the robot abide by traffic laws except in certain circumstances. The circumstances would of course primarily have to do with reducing exposure to dangers and minimizing risks, and therefore improving prospects for the precious thing arriving at the destination in one piece.

The robot would be programmed to constantly assess the situation around it and take measures as necessary to protect the precious thing, even at the expense of traffic law if necessary. No law is a good law if it means the precious thing may soon end up under a bus.

  8. Jonathan says:

    Ted, I think you’re missing the point with the robot.

    First, because you’re reducing the activity of bicycling to an overly simplified “moving stuff from point A to point B” concept.

    Second, the robot, unlike myself, has tireless patience for accelerating away from a light, then stopping for the next one.

    Third, the robot has no inherent sense of pleasure in being a robot or doing robot things, like cycling.

    Fourth, the robot is oblivious to automobile exhaust or any of the other side effects of proximity to motor vehicles, like loud music, dogs with heads out windows, limbs out of windows, litterbugs, etc.

    The best way for me to make sure that I arrive somewhere in one piece is to take the bus (or just make a phone call). The fact that I choose instead to travel by bicycle, I think, indicates that I have more on my mind than abject self-preservation.

    By the way, nice post Josh (8:03 am)!

This is a great thought experiment. I would choose to program the robot to follow the traffic laws except in the case of imminent safety threats, in which a reasonable person would break the law to protect the child on board.

    If that programming accomplished its goal on the streets, then I would offer the robot’s services, along with an excellent reference, as a pedi-cab chauffeur for free to all bicycle traffic anarchists, in an effort to reduce the cyclist-induced mayhem on the streets.

  10. Ted Johnson says:

    I find it interesting that, among those who are participating in the thought experiment, many are saying that the robot should obey traffic laws except or unless certain dangerous situations arise. I’d like to follow up on @Josh Lipton’s comment, “If our robot bike commuter somehow was able to accurately calculate risk levels, quantified risk choices could be made.”

Okay. Suppose that the robot were completely indifferent to traffic laws, but could be programmed with data on how motorists and pedestrians are likely to behave in various situations, including how they are likely to respond to a cyclist who uses hand signals, claims the lane, runs a light, etc. All this data could be included in the robot’s real-time risk assessment.
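A data-driven version of that robot might, loosely speaking, score each available maneuver by the expected risk of the situations it creates and pick the lowest. This is only a sketch; the function, the maneuver names, and the risk numbers are all invented for illustration:

```python
def assess(maneuvers, risk_model):
    """Pick the maneuver with the lowest total expected risk.

    `maneuvers` maps a maneuver name (e.g. 'wait', 'proceed') to the list
    of situations it would put the bike in; `risk_model` maps each
    situation to a probability-weighted cost, presumably learned from
    data on how motorists and pedestrians actually behave.
    """
    def expected_risk(name):
        return sum(risk_model.get(s, 0.0) for s in maneuvers[name])
    return min(maneuvers, key=expected_risk)
```

For example, if the model scored idling in a truck’s blind spot as riskier than crossing a clear intersection, the robot would proceed; that is exactly the de facto lawbreaking the comment asks about.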

Would this robot obey traffic laws, de facto, most of the time?

    Are there situations where obeying the law increases exposure to risks?

    As I said earlier (in a response to @rich), my commute contains a 50-yard lawbreaking run, which saves me time, decreases my overall exposure to traffic, and eliminates two left turns through intersections. I wonder what a risk-assessing robot would do?

    My commute could be made 100% legal if I were willing to dismount and push the bike through this short stretch, but that would erase the time savings. Since this is my thought experiment, I’m going to say that the robot can’t dismount and push the bike.

  11. peteathome says:

It’s a shame that “10 Rules for Urban Commuting” was allowed to be posted in this wonderful blog.

    His first rule of never slowing down or stopping for lights shows an incredible ignorance of what actually gets bicyclists injured or killed.

I hope no one took his advice and got hit by a car.

    • Ted Johnson says:

      What King wrote was, “slow down, look carefully and keep moving if the way is clear.” Whether or not you think that is good advice, it’s quite different–the near opposite in fact–from saying “never slow down or stop for lights.”

  12. peteathome says:

The robot experiment concept is, in my opinion, an invalid way of looking at things. A robot can be always hyperalert, constantly evaluating all inputs without fatigue, and constantly making nanosecond decisions to avoid collisions. So how I would program a robot is entirely different from how I would bike as a human. That’s one of the reasons I object to the tone (and the rules) of “10 Rules”: it’s turning bicycling into an adrenaline sport. While I occasionally enjoy such things, I don’t want my daily commute to be a thrill sport.

    The ideal way to bicycle is to learn an approach that minimizes conflicts with other vehicles. That way, if a bicyclist or motorist makes an error, as we humans do from time to time, the probability of a collision is low.

    By minimizing conflicts, it also makes bicycling a more relaxed, enjoyable experience. That’s what vehicular bicycling is all about. It’s not about obeying some arbitrary set of rules. It’s about internalizing an approach to traffic to the point you don’t have to think about interactions most of the time.

It’s about making bicycling a safe, efficient, and pleasurable activity, where you can spend your time enjoying the beautiful day from your bike, rather than spending every second dodging traffic.

  13. Ted Johnson says:

    So, how would you program the robot?

  14. peteathome says:

    Sorry for the misquote – the page will no longer load, for me at least. I found the posting elsewhere and here is “the rule” I object to that I think will get people killed:
“A moving bike is a safer bike, as momentum allows you to skirt obstacles and avoid danger from any direction. Sitting motionless in the road at a stop sign or light, a cyclist is at his or her most vulnerable.” That is followed by the above advice to “carefully” run stop signs and lights.

I’m sorry, but this is the most utter nonsense I have ever seen. It is not based on any science or study that I know of, but is apparently a random thought out of the blogger’s head. Please don’t utter such things on a blog read by people trying to learn how to safely ride a bike.

  15. BluesCat says:

    My programming for the robot would be REALLY simple: maintain a safe bike speed for the road conditions and the weather, altering speed and/or direction ONLY when the proximity of other road occupants dictates that maintaining the present speed and/or course will lead to a collision.

    Notice: I didn’t say anything about traffic laws. Man-made traffic laws can’t hold a candle to innate common sense.

  16. BluesCat says:

    Oh, BTW, great experiment, Ted!

  17. peteathome says:

Are you asking me how a person would safely ride, or how a robot would do it? As I said above, I find the thought experiment useless for understanding how a person would do it, because we are not robots and our processing is totally different.

If you really want to know how I’d do it for a robot, I would use much of the same software we used for DARPA’s “Urban Challenge,” the autonomous vehicle playoff. Those vehicles actually DO follow all the rules of the road, since you would cause massive mayhem if you didn’t. But assuming I don’t give a fig about what happens to other road users, I might modify the software a bit: go ahead and cause other collisions if it would avoid me getting into one. But I have to admit I don’t see how I would at this point. The cars have collision detection systems that they already use to maneuver out of potential collisions. I suppose if I were programming for a skinny bicycle that also had really high robotic acceleration, I might try to maneuver between two approaching cars, etc., as a last-ditch avoidance method.

As for “programming” flesh-and-blood bicycle-riding humans, I would recommend any of the well-written articles and books on vehicular cycling.

  18. Seville says:

Let’s say the robot is programmed with the ultimate in situational awareness and the ability to differentiate threats from non-threats, to quantify the probability of accidents, and to accurately determine consequences. The robot would be able to assess all risks presented along the route and manage them. All you need to do is dial in your risk tolerance threshold. Because you are transporting the precious thing, you may select a somewhat low tolerance for risk. But not too low, because you want the robot to arrive at the destination within an hour, not in a week. My guess is the robot would depart from traffic laws/rules fairly often along its route, because these generic “one size fits all” requirements would often be interpreted as arbitrary and inconsistent with respect to the management of actual risks encountered. Similarly, the robot may disappoint a few motorists, as its risk management actions may cause some temporary inconvenience to others.

It’s possible that you could dial in some form of “cultural bias” with a knob. Let’s put it on the robot’s forehead just for fun. Depending on how far you turn the knob, this would introduce a degree of conformance to laws, rules, social norms, the wishes of others, etc. Turn up the knob just a little and the robot will trade off low-level risks in favor of conformance. Turn it up too far and the robot begins to ignore threats and active management of risks, and instead conforms to a generic scheme of orderliness that may or may not address the actual risks of specific circumstances.
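The two dials in that comment, a risk-tolerance threshold and a conformance knob, could be sketched as a simple scoring rule. All of the names and numbers here are invented for illustration:

```python
def choose_action(actions, risk_tolerance, conformance):
    """Return the best candidate action given the two dials.

    `actions` maps a name to (estimated_risk, legality), where legality
    is 1.0 for fully legal and 0.0 for clearly illegal.  `risk_tolerance`
    is the threshold below which risk is ignored; `conformance` is the
    forehead knob in [0, 1].  Lower score wins.
    """
    def score(name):
        risk, legality = actions[name]
        # Penalize only risk above the rider's tolerance; reward legal
        # behavior in proportion to the knob setting.
        return max(0.0, risk - risk_tolerance) - conformance * legality
    return min(actions, key=score)
```

With the knob at zero the robot takes the lowest-risk maneuver regardless of legality; turned up, it increasingly prefers the legal option even at some cost in risk, which is the trade-off the comment describes.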

Leave a Reply