A place to cache linked articles (think of it as a custom, personal Wayback Machine)

title: Uber Goes Unconventional: Using Driver Phones as a Backup Datacenter
url: http://highscalability.com/blog/2015/9/21/uber-goes-unconventional-using-driver-phones-as-a-backup-dat.html
hash_url: 27d1e42423963d4650201455ba67a068
<p><img src="https://c2.staticflickr.com/6/5766/20975061403_7fa4e71579_m.jpg" alt="" align="RIGHT"/></p>
<p dir="ltr">In <a href="http://highscalability.com/blog/2015/9/14/how-uber-scales-their-real-time-market-platform.html">How Uber Scales Their Real-Time Market Platform</a> one of the most intriguing hints was how Uber handles datacenter failovers using driver phones as an external distributed storage system for recovery.</p>
<p dir="ltr">Now we know a lot more about how that system works from Uber's <a href="https://www.linkedin.com/pub/nikunj-aggarwal/20/878/3b4">Nikunj Aggarwal</a> and <a href="https://www.linkedin.com/in/joshuatcorbin">Joshua Corbin</a>, who gave a very interesting talk at the <a href="http://www.atscaleconference.com/">@Scale</a> conference: <a href="https://www.youtube.com/watch?v=0EhTOKcwRok">How Uber Uses your Phone as a Backup Datacenter</a>.</p>
  7. <p dir="ltr">Rather than use a traditional backend replication scheme where databases sync state between datacenters to achieve a measure of <a href="https://my.vertica.com/docs/5.0/HTML/Master/10730.htm">k-safety</a>, Uber did something different, what they do is store enough state on driver phones so that if a datacenter failover occurs trip information can not be lost on the failover.</p>
  8. <p dir="ltr">Why choose this approach? The traditional approach would be much simpler. I think it is to make sure the customer always has a <strong>good customer experience</strong> and losing trip information for an active trip would make for a horrible customer experience. </p>
  9. <p dir="ltr">By building their syncing strategy around the phone, even thought it's complicated and takes a lot work, Uber is able to preserve trip data and make for a seamless customer experience even on datacenter failures. And making the customer happy is what counts, especially in a market with <strong>near zero switching costs</strong>.</p>
  10. <p dir="ltr">So the goal is not to lose trip information, even on a datacenter failover. Using a traditional database replication strategy it would not be possible to make this guarantee for reasons that have parallels to how <a href="http://whatis.techtarget.com/definition/network-management-system">network management systems</a> have always had to work. Let me explain.</p>
  11. <p dir="ltr">In a network devices are the <strong>authoritative source for state information</strong> like packet errors, alarms, packets sent and received, and so on. The network management system is authoritative for configuration data like alarm thresholds and customer information. The complication is devices and the network management system are not always in contact, so they get out of sync because they work independently of each other. Which means on bootup, failover, and communication reconnection all this information has to be merged in both directions using a complicated dance that ensures correctness and consistency. </p>
  12. <p dir="ltr">Uber has the same problem, only the devices are smart phones and the authoritative state the phone contains is trip information. So on bootup, failover, and communication reconnection the trip information must be preserved because <strong>the phone is the authoritative source for trip information</strong>.</p>
  13. <p dir="ltr">Even when connectivity is lost the phone has an accurate record all trip data. So you wouldn't want to sync trip data from the datacenter down to the phone because that would wipe out the correct data on the phone. The correct information must come from the phone.</p>
  14. <p dir="ltr">Uber also takes another trick from network management systems. They periodically query phones to test the integrity of information in the datacenter. </p>
  15. <p dir="ltr">Let's see how they do it...</p>
  16. <h2 dir="ltr"><span>Motivation for Using Phones as Storage for Datacenter Failure</span></h2>
  17. <ul>
  18. <li dir="ltr">
  19. <p dir="ltr"><span>Not long ago a failed datacenter would cause customer trips to be lost. That’s now fixed. On a datacenter failure the customer is right back on their trip with almost no noticeable downtime.</span></p>
  20. </li>
  21. <li dir="ltr">
  22. <p dir="ltr">The request of a trip, offering it to driver, the acceptance of a trip, picking up the rider, and ending the trip are called <strong>state change transitions</strong>. A trip transaction lasts as long as the trip lasts.</p>
</li>
<li dir="ltr">
<p dir="ltr"><span>From the moment a trip is started trip data is created in a back-end datacenter. It appears there’s a designated datacenter per city.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>The typical solution for datacenter failure</strong>: replicate data from the active datacenter to the backup datacenter. Well understood, and it works pretty well depending on your database. Drawbacks:</p>
<ul>
<li dir="ltr">
<p dir="ltr"><span>Gets complicated beyond two backup datacenters.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><span>Replication lag between datacenters.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><span>It requires constant high bandwidth between datacenters, especially if you have a database that doesn’t have good support for datacenter replication or if you haven’t tuned your business model to optimize deltas.</span></p>
</li>
<li dir="ltr">
  40. <p dir="ltr"><span>(A benefit that's not talked about, that probably doesn't matter for Uber, but may matter for smaller players, is that the driver phone plan is subsidizing bandwidth costs by not having to pay as much for inter-datacenter bandwidth.)</span></p>
</li>
</ul>
</li>
<li dir="ltr">
<p dir="ltr"><strong>The creative application-aware solution</strong>: since there’s constant communication with driver phones, just save the data to driver phones. Advantages:</p>
<ul>
<li dir="ltr">
<p dir="ltr"><span>Can failover to any datacenter.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><span>Sidesteps the problem of a phone failing over to the wrong datacenter, which would cause all the trips to be lost.</span></p>
</li>
</ul>
</li>
<li dir="ltr">
<p dir="ltr"><span>Using driver phones to hold datacenter backups requires a replication protocol.</span></p>
<ul>
<li dir="ltr">
<p dir="ltr"><span>All the state transitions occur when communicating with the datacenter. There’s a Begin Trip or Begin Drive request, for example, which is the perfect opportunity to exchange state data with the phone and have the phone store data.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><span>On a datacenter failover, when the phone pings the new datacenter the trip data is requested off of the phone. Downtime is minimal. (No information was given on how datacenter maps are handled.)</span></p>
</li>
</ul>
</li>
  66. <li dir="ltr">
  67. <p dir="ltr"><span>Challenges:</span></p>
  68. <ul>
  69. <li dir="ltr">
  70. <p dir="ltr"><span>Not all the saved trip information should be accessible to the driver. A trip has a lot of information on all the riders, for example, which should not be exposed. </span></p>
  71. </li>
  72. <li dir="ltr">
  73. <p dir="ltr"><span>Have to assume driver phones can be compromised which means the data must be made tamper proof. So all the data is encrypted on the phone. </span></p>
  74. </li>
  75. <li dir="ltr">
  76. <p dir="ltr"><span>Want to keep the replication protocol as simple as possible to make it easy to reason about and easy to debug.</span></p>
  77. </li>
  78. <li dir="ltr">
  79. <p dir="ltr"><span>Minimize extra bandwidth. With a phone based approach it’s possible to tune what data is serialized and what deltas are kept in order to minimize traffic over the mobile network.</span></p>
  80. </li>
  81. </ul>
</li>
<li dir="ltr">
<p dir="ltr"><span>The Replication Protocol</span></p>
<ul>
<li dir="ltr">
<p dir="ltr"><span>A simple key-value store model is used, with get, set, delete, and list-keys operations.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><span>A key can only be set once, to prevent accidental overwrites and out-of-order message problems.</span></p>
</li>
<li dir="ltr">
<p dir="ltr"><span>With the set-once rule, versioning had to move into the key space. Updating a stored trip happens like: set(“trip1, version2”, “yyu”); delete(“trip1, version1”). The advantage is that if there’s a failure between the set and the delete, two values will be stored instead of none.</span></p>
</li>
<li dir="ltr">
<p dir="ltr">Failover resolution is just a matter of merging keys between the phone and the new datacenter: comparing stored keys to any known ongoing trips for the driver, then maybe sending one or more <em>get</em> operations to the phone for any missing data. (A sketch of this key-value model follows this list.)</p>
</li>
</ul>
</li>
</ul>
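<p dir="ltr">The talk didn't show any code, so here's a minimal sketch of the key-value model just described: a set-once store with the version encoded in the key, plus the failover merge. The names (PhoneStore, update_trip, merge_on_failover) are illustrative, not Uber's.</p>
<pre><code>
class PhoneStore:
    """Set-once key-value store living on the driver's phone."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # A key can only be set once, guarding against accidental
        # overwrites and out-of-order messages.
        if key in self._data:
            raise KeyError("key already set: %s" % key)
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)

    def list_keys(self):
        return list(self._data)


def update_trip(store, trip_id, old_version, new_version, blob):
    # Versioning lives in the key space: set the new version first,
    # then delete the old one. A failure in between leaves two values
    # stored rather than none.
    store.set("%s, version%d" % (trip_id, new_version), blob)
    store.delete("%s, version%d" % (trip_id, old_version))


def merge_on_failover(store, known_trip_keys):
    # Failover resolution: compare the phone's keys against the trips
    # the new datacenter already knows about, then fetch the rest.
    missing = [k for k in store.list_keys() if k not in known_trip_keys]
    return dict((k, store.get(k)) for k in missing)
</code></pre>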
  101. <h2 dir="ltr"><span>How They Got the Reliability of the System to Work at Scale</span></h2>
  102. <h3 dir="ltr"><span>Goals</span></h3>
  103. <ul>
  104. <li dir="ltr">
  105. <p dir="ltr"><strong>Ensure the system is non-blocking while still providing eventual consistency</strong>. Any back-end application in the system should be able to make progress, even when the system is down. The only tradeoff the application should be making is that it may take time for the data to be stored on the phone.</p>
  106. </li>
  107. <li dir="ltr">
  108. <p dir="ltr"><strong>Be able to move between datacenters without having to worry about the data already there</strong>. There needs to be a way to reconcile the data between the driver and the servers.</p>
  109. <ul>
  110. <li dir="ltr">
  111. <p dir="ltr"><span>When failing over to a datacenter that datacenter has a view of active drivers and trips, no service in the datacenter is aware a failure occurred.</span></p>
  112. </li>
  113. <li dir="ltr">
  114. <p dir="ltr"><span>On failing back to the original datacenter the driver and trip data is stale, which makes for a bad customer experience. </span></p>
  115. </li>
  116. </ul>
  117. </li>
  118. <li dir="ltr">
  119. <p dir="ltr"><strong>Make it testable</strong>. Datacenter failures are rare, so it’s typically a hard feature to test. They want to be able to constantly measure the success of the system so they can be confident a failover will succeed when it happens.</p>
  120. </li>
  121. </ul>
  122. <h3 dir="ltr"><span>The Flow</span></h3>
  123. <ul>
  124. <li dir="ltr">
  125. <p dir="ltr"><span>A driver makes an update/state change, for example, picking up a passenger. That update comes as a request to the Dispatch Service. </span></p>
  126. </li>
  127. <li dir="ltr">
  128. <p dir="ltr"><span>The Dispatch Service updates the trip model for the trip. The update is sent to the Replication Service. </span></p>
  129. </li>
  130. <li dir="ltr">
  131. <p dir="ltr"><span>The Replication Service queues the request and returns success. </span></p>
  132. </li>
  133. <li dir="ltr">
  134. <p dir="ltr"><span>The Dispatch Service updates its own datastore and returns success to the mobile client. Other data might be returned as well, for example, if it’s an Uber Pool trip another passenger might need to be picked up. </span></p>
  135. </li>
  136. <li dir="ltr">
  137. <p dir="ltr"><span>In the background the Replication Service encrypts the data and sends it to the Messaging Service. </span></p>
  138. </li>
  139. <li dir="ltr">
  140. <p dir="ltr"><span>The Messaging Service maintains a bidirectional channel with all drivers. This channel is separate from the original request channel that drivers use to communicate with services. This ensures normal business operations are not impacted by the backup process. </span></p>
  141. </li>
  142. <li dir="ltr">
  143. <p dir="ltr"><span>The Messenger Service sends the backup to the phone.</span></p>
  144. </li>
  145. <li dir="ltr">
  146. <p dir="ltr"><span>The benefits of this design:</span></p>
  147. <ul>
  148. <li dir="ltr">
  149. <p dir="ltr"><strong>Applications have been isolated from replication latencies and failures</strong>. The Replication Service returns immediately. And the application only has to make an a cheap call (within the same datacenter) to have the data replicated.</p>
  150. </li>
  151. <li dir="ltr">
  152. <p dir="ltr"><strong>The Messaging Service supports arbitrary querying of the phone without impacting normal business operations</strong>. The phone can be treated as a basic key-value store.</p>
  153. </li>
  154. </ul>
  155. </li>
  156. </ul>
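<p dir="ltr">Here's a minimal sketch of that flow, using hypothetical service stubs; the queue is what keeps replication non-blocking for the Dispatch Service, and encrypt() is a stand-in for whatever Uber actually uses.</p>
<pre><code>
import queue
import threading


def encrypt(update):
    # Stand-in only: the real system encrypts data so a compromised
    # phone cannot read or tamper with it.
    return repr(update).encode("utf-8")


class MessagingService:
    """Bidirectional channel to drivers, separate from the request path."""

    def send_backup(self, driver_id, payload):
        print("backup to driver %s (%d bytes)" % (driver_id, len(payload)))


class ReplicationService:
    def __init__(self, messaging):
        self._queue = queue.Queue()
        self._messaging = messaging
        threading.Thread(target=self._drain, daemon=True).start()

    def replicate(self, driver_id, trip_update):
        # Queue and return success immediately: applications are
        # isolated from replication latency and failures.
        self._queue.put((driver_id, trip_update))

    def _drain(self):
        while True:
            driver_id, update = self._queue.get()
            self._messaging.send_backup(driver_id, encrypt(update))


class DispatchService:
    def __init__(self, replication):
        self._replication = replication
        self._datastore = {}

    def handle_update(self, driver_id, trip_id, state):
        update = {"trip": trip_id, "state": state}
        self._replication.replicate(driver_id, update)  # cheap, in-datacenter
        self._datastore[trip_id] = update               # then own datastore
        return "ok"                                     # success to the client
</code></pre>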
  157. <h3 dir="ltr"><span>Moving Between Datacenters</span></h3>
  158. <ul>
  159. <li dir="ltr">
  160. <p dir="ltr">First approach was to<strong> manually run scripts on failover</strong> to clean up old states from the database. This approach had <strong>operational pain</strong> as someone had to do it. And since it’s possible to failover by city or multiple cities at a time, the scripts became way too complicated.</p>
  161. </li>
  162. <li dir="ltr">
  163. <p dir="ltr">Recall that keys in they key-value database contain a trip ID and a version number. The version number used to be an incrementing number. That was changed to a <strong>modified vector clock</strong>. Using the vector clock data on the <strong>phone can be compared to data on the server</strong>. Any causality violations can be detected and resolved. This solves the problem reconciling of in-progress trips.</p>
  164. </li>
  165. <li dir="ltr">
  166. <p dir="ltr">Traditionally completed trips were deleted from the phone so replication data would not grow without bounds. The problem is that when failing back to the original datacenter that datacenter will have stale data, which can cause scheduling anomalies. The fix is on trip completion a special <a href="https://en.wikipedia.org/wiki/Tombstone_(data_store)">tombstone</a> key is used. The version has a flag that says the trip has been completed. When the Replication Service sees the flag it can tell the Dispatch Service that the trip has completed.</p>
  167. </li>
  168. <li dir="ltr">
  169. <p dir="ltr"><span>Storing trip data is expensive because it’s a huge encrypted blob of JSON data. Completed trips require much less storage. A weeks worth of completed trips can be stored in the same space as one active trip.</span></p>
  170. </li>
  171. </ul>
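<p dir="ltr">The talk didn't spell out what makes their vector clock "modified," so this sketch uses a plain vector clock plus a tombstone flag to show the comparison it enables; the names and entry structure are illustrative.</p>
<pre><code>
def dominates(a, b):
    # True if clock a has seen everything clock b has.
    keys = set(a) | set(b)
    return all(a.get(k, 0) >= b.get(k, 0) for k in keys)


def reconcile(phone_entry, server_entry):
    # Each entry: {"clock": {node: counter}, "data": ..., "tombstone": bool}
    if phone_entry.get("tombstone"):
        # Completed trip: tell the Dispatch Service instead of
        # resurrecting a stale in-progress trip.
        return "completed"
    pc = phone_entry["clock"]
    sc = server_entry["clock"]
    if pc == sc:
        return "in_sync"
    if dominates(pc, sc):
        return "phone_wins"   # phone is strictly ahead
    if dominates(sc, pc):
        return "server_wins"  # server is strictly ahead
    return "causality_violation"  # concurrent updates; must be resolved
</code></pre>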
  172. <h3 dir="ltr"><span>Ensuring 99.99% Reliability</span></h3>
  173. <ul>
  174. <li dir="ltr">
  175. <p dir="ltr"><span>The failover system constantly tested to establish the confidence that it works and that a failover will be successful.</span></p>
  176. </li>
  177. <li dir="ltr">
  178. <p dir="ltr">First approach was <strong>manual failovers of individual cities</strong>. Then look at the success rate of the restoration and debug problems by looking at the logs.</p>
  179. <ul>
  180. <li dir="ltr">
  181. <p dir="ltr"><strong>High operational pain</strong>. Performing this process manually every week didn’t work.</p>
  182. </li>
  183. <li dir="ltr">
  184. <p dir="ltr"><strong>Poor customer experience</strong>. Fares had to be adjusted for the few trips that did not restore correctly.</p>
  185. </li>
  186. <li dir="ltr">
  187. <p dir="ltr"><strong>Low coverage</strong>. Only a few cities at a time could be tested and since some problems hit only specific cities, perhaps because of a new city specific feature,  those bugs would be missed.</p>
  188. </li>
  189. <li dir="ltr">
  190. <p dir="ltr"><strong>No idea if a backup datacenter could handle the load</strong>. There’s a primary and backup datacenter. Even if they are configured the same how do you know the backup datacenter can handle the thundering herd problem, that is the flood of requests that occur on a failover.</p>
  191. </li>
  192. </ul>
  193. </li>
  194. <li dir="ltr">
  195. <p dir="ltr">To fix these problems they <strong>looked at the key concepts</strong> in the system they wanted to test.</p>
  196. <ul>
  197. <li dir="ltr">
  198. <p dir="ltr"><span><strong>Ensure all mutations in the Dispatching Service are actually stored on the phone</strong>. For example, a driver right after picking up a passenger may lose connectivity so replication data may not be sent to the phone immediately. Need to ensure the data eventually makes it to the phone.</span></p>
  199. </li>
  200. <li dir="ltr">
  201. <p dir="ltr"><strong>Ensure stored data can be used for replication</strong>. Are there any encryption/decryption issues, for example. Are their any problems merging in the backup data?</p>
  202. </li>
  203. <li dir="ltr">
  204. <p dir="ltr"><strong>Ensure the backup datacenter can handle the load</strong>.</p>
  205. </li>
  206. </ul>
  207. </li>
  208. <li dir="ltr">
  209. <p dir="ltr"><span>To monitor the health of the system a Monitoring Service was born.</span></p>
  210. <ul>
  211. <li dir="ltr">
  212. <p dir="ltr"><span>Every hour the service gets a list of all active drivers and trips from the dispatch service. For all drivers the Messaging Service is used to get the replication data.  </span></p>
  213. </li>
  214. <li dir="ltr">
  215. <p dir="ltr"><span>The data is then compared to see if the data is as expected. This yields a lot of good health metrics, like what percentage of failed.</span></p>
  216. </li>
  217. <li dir="ltr">
  218. <p dir="ltr"><span>Breaking down the metrics by region and app version was a big help in pinpointing problems.</span></p>
  219. </li>
  220. </ul>
  221. </li>
  222. <li dir="ltr">
  223. <p dir="ltr">A <strong>shadow restoration</strong> is used to test the backup datacenter.</p>
  224. <ul>
  225. <li dir="ltr">
  226. <p dir="ltr"><span>Data collected by the Monitoring Service is sent to the backup datacenter for a shadow restoration.</span></p>
  227. </li>
  228. <li dir="ltr">
  229. <p dir="ltr">The <strong>success rate</strong> is calculated by using the Dispatch Service to query and compare a snapshot from the primary datacenter to the number of active drivers and trips from the backup datacenter.  </p>
  230. </li>
  231. <li dir="ltr">
  232. <p dir="ltr"><span>Metrics around how well the backup datacenter handled the load are also calculated. </span></p>
  233. </li>
  234. <li dir="ltr">
  235. <p dir="ltr"><span>Any configuration issues in the backup datacenter can be caught by this approach.</span></p>
  236. </li>
  237. </ul>
  238. </li>
  239. </ul>
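<p dir="ltr">A rough sketch of the hourly monitoring check, assuming both sides can be reduced to comparable snapshots; the service stubs (dispatch, messaging) and the per-region/per-app-version breakdown mirror the description above, but everything here is illustrative.</p>
<pre><code>
from collections import defaultdict


def run_health_check(dispatch, messaging):
    # Every hour: compare what Dispatch thinks is active against what
    # is actually stored on the phones. dispatch and messaging are
    # hypothetical client stubs.
    expected = dispatch.active_trips()  # {driver_id: snapshot}
    totals = defaultdict(int)
    failures = defaultdict(int)
    for driver_id, snapshot in expected.items():
        stored = messaging.fetch_replication_data(driver_id)
        # Break metrics down by region and app version, which the talk
        # said was key to pinpointing problems.
        bucket = (snapshot["region"], snapshot["app_version"])
        totals[bucket] += 1
        if stored != snapshot["trip_data"]:
            failures[bucket] += 1
    # Failure percentage per (region, app_version) bucket.
    return dict(
        (bucket, 100.0 * failures[bucket] / totals[bucket])
        for bucket in totals
    )
</code></pre>
<p dir="ltr">Per the flow above, the same data collected by this check is what gets sent to the backup datacenter for the shadow restoration.</p>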