Wednesday, July 17, 2019

Delving into Renault's new API

(Geek level: High - you have been warned!)

I've been driving a Renault Zoe for over a year now. It's a great car, but the companion mobile application - allowing you to turn on the heater or air conditioning remotely, or to set a charge schedule to make the most of cheap overnight electricity - has been lacklustre at best.

At the tail end of last year, Renault decided to push an update to the ZE Services app that effectively removed the "app" part, instead redirecting users to their website (which works even more poorly on mobile devices). Renault promised a new "MY Renault application [with] an improved interface with new and easy-to-use services".

Nearly eight months later, still no sign of the new "MY Renault" app in the UK, but some countries on the continent have it in their hands. I decided to take a look and see how different the new API was to the previous one, and how much work I'd have to do to update my charge scheduling script (it takes half-hourly price data from Octopus, works out the cheapest window in which to charge overnight, and schedules the car to charge in that window).

I'm not going to spend any time looking at registering for MY Renault; it's boring, and I went through the process in French, so all the following assumes you have a MY Renault account. I'm going to focus on the area of most interest to me: functions to interact with the electric-vehicle-specific parts of the API.

The first time I introduce a named piece of data, I'll make it bold so it's easier to skim back to. Where a parameter needs substituting in, it'll be in {italics, and probably monospace}. There are lots of scattered bits of information you'll need to pull together!

Update: This post has been updated with corrections from kind folk in the comments.

Logging in

Authentication has now been parcelled out to Israeli SAP subsidiary Gigya, who have extensive API documentation available online. The first thing to note is that you'll need the correct Gigya API key - this is embedded in the MY Renault app's configuration. Once you have that, you can log in by POSTing your MY Renault credentials to the appropriate endpoint. This will yield your Gigya session key (returned as sessionInfo.cookieValue). It's not clear when, or even if, this session key expires, so keep hold of it - you're going to need it a lot.
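
To make that concrete, here's a minimal sketch in Python using requests, assuming your account lives on Gigya's EU data centre (accounts.eu1.gigya.com) - the endpoint and parameter names come from Gigya's public REST documentation rather than from anything Renault-specific:

import requests

GIGYA_ROOT = 'https://accounts.eu1.gigya.com'  # assumes the EU data centre
GIGYA_API_KEY = '...'                          # the key embedded in the MY Renault app

def gigya_login(username, password):
    # accounts.login returns sessionInfo.cookieValue - the Gigya session key
    response = requests.post(
        GIGYA_ROOT + '/accounts.login',
        data={
            'apiKey': GIGYA_API_KEY,
            'loginID': username,
            'password': password
        }
    )
    response.raise_for_status()
    return response.json()['sessionInfo']['cookieValue']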

Once you've logged in, you'll need to extract a couple more pieces of information from the Gigya API before you can start to talk to Renault's servers. The first is your person ID, which ties your Gigya account to your MY Renault one (or specifically, ties you to a set of MY Renault accounts). You'll need to use your Gigya session key as the oauth_token field value to pull the person ID from the Gigya accounts.getAccountInfo endpoint, but it's a fair bet the value won't change for a particular user.

You'll then need to request your Gigya JWT token from accounts.getJWT, again using your Gigya session key as the oauth_token, and you need to include data.personId,data.gigyaDataCenter as the value of fields - Renault's servers need that data to be in the token. It looks like you can pick the expiry of your choice here - Renault's app uses 900 seconds. When this token expires, you'll need to hit this endpoint again to get a new one.
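
Continuing the sketch above - the parameter names again come from Gigya's documentation, and I'm assuming the JWT comes back in an id_token field:

def gigya_person_id(session_key):
    # Ties the Gigya account to a (set of) MY Renault account(s)
    response = requests.post(
        GIGYA_ROOT + '/accounts.getAccountInfo',
        data={
            'apiKey': GIGYA_API_KEY,
            'oauth_token': session_key
        }
    )
    response.raise_for_status()
    return response.json()['data']['personId']

def gigya_jwt(session_key):
    # Renault's servers need personId and gigyaDataCenter baked into the token
    response = requests.post(
        GIGYA_ROOT + '/accounts.getJWT',
        data={
            'apiKey': GIGYA_API_KEY,
            'oauth_token': session_key,
            'fields': 'data.personId,data.gigyaDataCenter',
            'expiration': 900  # seconds - the value the app uses
        }
    )
    response.raise_for_status()
    return response.json()['id_token']  # field name assumed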

OK, we're done talking to Gigya, now we can start the second part of the authentication process - this time with Renault's servers. Or, more precisely, the Nissan-Renault Alliance's shared platform Kamereon. Here, you'll need a Kamereon API key - again, this is embedded in the MY Renault app. The root URL for this API is https://api-wired-prod-1-euw1.wrd-aws.com/commerce/v1.

You don't yet have your Kamereon account ID, so you'll need to get it using your person ID from earlier. You'll need to pass your Gigya JWT token in the x-gigya-id_token header (note the funky mix of hyphens and underscore), and the Kamereon API key in the apikey header, in a GET request to the endpoint at persons/{person ID}?country=XX, inserting your person ID and two-letter country code. I'm not sure that the latter makes the slightest bit of difference, as I've tried exchanging FR for GB and not seen any effects, but the whole thing blows up if it's not there.

Looking at the data returned from that endpoint, you'll notice that it contains an array of accounts, not just a single account. I'm not sure in which scenarios one might have multiple accounts (multiple cars can be added to a single account), but it looks like it's possible. Not for me, though: there was only a single account here, and its accountId value is what we'll need going forward.
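
In Python, fetching the account ID might look like this (a sketch; KAMEREON_API_KEY is the key extracted from the app, and I'm simply taking the first account in the array):

KAMEREON_ROOT = 'https://api-wired-prod-1-euw1.wrd-aws.com/commerce/v1'
KAMEREON_API_KEY = '...'  # the key embedded in the MY Renault app

def kamereon_account_id(person_id, gigya_jwt_token, country='FR'):
    response = requests.get(
        '{}/persons/{}'.format(KAMEREON_ROOT, person_id),
        params={'country': country},
        headers={
            'apikey': KAMEREON_API_KEY,
            'x-gigya-id_token': gigya_jwt_token
        }
    )
    response.raise_for_status()
    # Assumes the first (and, for me, only) account is the one we want
    return response.json()['accounts'][0]['accountId']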

We're not done yet! The last thing you'll need to start pulling data is a Kamereon token. These are short-lived and are obtained from the endpoint at /accounts/{Kamereon account ID}/kamereon/token?country=XX, again with the apikey and x-gigya-id_token headers. The one you want is the accessToken. I've not looked into using the refreshToken - you might as well just repeat this request when the token expires - and the idToken returned is a copy of your Gigya JWT token (I think).
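
The token request follows the same pattern - something like:

def kamereon_token(account_id, gigya_jwt_token, country='FR'):
    response = requests.get(
        '{}/accounts/{}/kamereon/token'.format(KAMEREON_ROOT, account_id),
        params={'country': country},
        headers={
            'apikey': KAMEREON_API_KEY,
            'x-gigya-id_token': gigya_jwt_token
        }
    )
    response.raise_for_status()
    return response.json()['accessToken']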

Phew! At the end of that process we should have:

  • A Gigya JWT token
  • A Kamereon account ID
  • A Kamereon token
and the means to regenerate those tokens when they expire.

Listing vehicles

I mentioned you can add more than one vehicle to an account. Before you can do much you'll need the VIN of the vehicle you're interested in. You might well have that to hand if it's your vehicle, but in the general case you'll need to get the list from /accounts/{Kamereon account ID}/vehicles?country=XX. You'll need three auth headers:
  • apikey: the Kamereon API key
  • x-gigya-id_token: your Gigya JWT token
  • x-kamereon-authorization: Bearer {kamereon token}
You'll note that the vehicle's registration plate is also included in the data now. That's a nice feature that ought to make it easier for end-users to identify vehicles in multi-vehicle accounts. As with the previous API, though, everything is keyed off the VIN (which makes sense, since the registration plate could change - though I've no idea if Renault's systems would reflect that sort of change).
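
A sketch of that request - note that vehicleLinks is my guess at the name of the array in the response, so check the raw JSON before relying on it:

def list_vehicles(account_id, gigya_jwt_token, kamereon_access_token, country='FR'):
    response = requests.get(
        '{}/accounts/{}/vehicles'.format(KAMEREON_ROOT, account_id),
        params={'country': country},
        headers={
            'apikey': KAMEREON_API_KEY,
            'x-gigya-id_token': gigya_jwt_token,
            'x-kamereon-authorization': 'Bearer {}'.format(kamereon_access_token)
        }
    )
    response.raise_for_status()
    # Each entry includes (at least) the VIN and the registration plate
    return response.json().get('vehicleLinks', [])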

Interacting with a vehicle

So, what can we do?

Each of these endpoints is under /accounts/kmr/remote-services/car-adapter/v1/cars/{vin}, and each requires the same three auth headers described above. The server expects a Content-Type of application/vnd.api+json. For the most part, the returned data is self-explanatory.
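
A small helper makes the rest of this section easier to demonstrate - a sketch built on the path and headers just described:

CAR_ADAPTER_ROOT = KAMEREON_ROOT + '/accounts/kmr/remote-services/car-adapter/v1/cars'

def car_get(vin, endpoint, gigya_jwt_token, kamereon_access_token, params=None):
    # GET one of the read-only endpoints listed below
    response = requests.get(
        '{}/{}/{}'.format(CAR_ADAPTER_ROOT, vin, endpoint),
        params=params,
        headers={
            'apikey': KAMEREON_API_KEY,
            'x-gigya-id_token': gigya_jwt_token,
            'x-kamereon-authorization': 'Bearer {}'.format(kamereon_access_token),
            'Content-Type': 'application/vnd.api+json'
        }
    )
    response.raise_for_status()
    return response.json()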

Reading data

  • /battery-status (GET): plugged in or not, charging or not, percentage charge remaining, estimated range (given in km). Presumably, as with the previous API, it also reports charge-rate information and the like while charging - I've not tried this yet.
  • /hvac-status (GET): Air conditioning on or off, external temperature, and scheduled preconditioning start time. I've not seen it report anything other than AC off, even when preconditioning was running - possibly as a result of caching somewhere? External temperature seems accurate.
  • /charge-mode (GET): Always charging or on the scheduler? (I wondered whether there was a third state for the in-car "charge after X" mode, but that setting turns out to be represented here as always_charging.)
  • /charges?start=YYYYMMDD&end=YYYYMMDD (GET): Detail for past charges. This isn't currently returning any useful data for me. end cannot be in the future.
  • /charge-history?type={type}&start={start}&end={end} (GET): aggregated charge statistics. type is the aggregation period, either month or day; for 'month', start and end should be of the form YYYYMM; for 'day', they should be YYYYMMDD. Again, this is not supplying any useful data for me right now.
  • /hvac-sessions?start=&end= (GET): Preconditioning history
  • /hvac-history?type=&start=&end= (GET): Same as charge history but for preconditioning stats.
  • /cockpit (GET): odometer reading (total mileage, though it's given in kilometres).
  • /charge-schedule (GET): the current charge schedule - see later.
  • /lock-status (GET): The server returns 501 Not Implemented.
  • /location (GET): The server returns 501 Not Implemented.
  • /notification-settings (GET): Settings for notifications (d'uh) - as well as email and SMS, there's also now an option for push notifications via the app, for each of "charge complete", "charge error", "charge on", "charge status" and "low battery alert" / "low battery reminder".
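
For example, using the helper above (jwt and access_token being the tokens from the login dance, and vin coming from the vehicles list):

battery = car_get(vin, 'battery-status', jwt, access_token)
odometer = car_get(vin, 'cockpit', jwt, access_token)
history = car_get(vin, 'charge-history', jwt, access_token,
                  params={'type': 'day', 'start': '20190701', 'end': '20190714'})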

Writing data

Each of these expects a JSON object body of the form:

{
  "data": {
    "type": "SomeTypeName",
    "attributes": {
      (... the actual data ...)
    }
  }
}
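
A sketch of building and POSTing that wrapper, as a companion to the GET helper above:

import json

def car_post(vin, endpoint, data_type, attributes, gigya_jwt_token, kamereon_access_token):
    # POST a wrapped "data" object to one of the action endpoints
    body = {'data': {'type': data_type, 'attributes': attributes}}
    response = requests.post(
        '{}/{}/{}'.format(CAR_ADAPTER_ROOT, vin, endpoint),
        data=json.dumps(body),
        headers={
            'apikey': KAMEREON_API_KEY,
            'x-gigya-id_token': gigya_jwt_token,
            'x-kamereon-authorization': 'Bearer {}'.format(kamereon_access_token),
            'Content-Type': 'application/vnd.api+json'
        }
    )
    response.raise_for_status()
    return response.json()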

In many cases, you'll be re-POSTing a similar object to that which you received for the corresponding GET method. A success response from the server tends to be the same object you just POSTed, with an ID added. I've listed the required attributes where I know them.

  • /actions/charge-mode (POST): sets the charge mode. (Type ChargeMode)
    • action: either schedule_mode or always_charging (see the note on /charge-mode above)
  • /actions/hvac-start (POST): Sets the preconditioning timer, or turns on preconditioning immediately. (Type HvacStart)
    • action: start
    • targetTemperature: in degrees Celsius. The app is hard-coded to use 21°C.
    • startDateTime: if supplied, should be of the form YYYY-MM-DDTHH:MM:SSZ - I'm not sure what would happen if you tried to use a different timezone offset. If not supplied, preconditioning will begin immediately.
  • /actions/charge-schedule (POST): Set a charge schedule. See later. (Type ChargeSchedule)
  • /actions/notification-settings (POST): Sets notification settings. (Type ZeNotifSettings)
  • /actions/send-navigation (POST): The much-vaunted "send a destination to your car and start navigating". I've not explored this one much but parameters include:
    • name
    • latitude
    • longitude
    • address
    • latitudeMode
    • longitudeMode
    • locationType
    • calculationCondition
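
So, for example, scheduling preconditioning for 07:00 UTC with the helper above would look something like this:

car_post(vin, 'actions/hvac-start', 'HvacStart',
         {
             'action': 'start',
             'targetTemperature': 21,                 # degrees C - the app's hard-coded value
             'startDateTime': '2019-07-18T07:00:00Z'  # omit to start preconditioning immediately
         },
         jwt, access_token)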

The charge scheduler

Perhaps unsurprisingly, given the need to interoperate with the existing fleet of current Zoes, the charge scheduler functions in exactly the same way, and with the same limitations:
  • You must specify exactly one charging period per day, for every day of the week
  • Charge start and duration must be in intervals of 15 minutes
  • Charge duration must be at least 15 minutes
  • Charging periods must not overlap
  • Charge start time is specified as a four-character digit string e.g. "0115" (because that's how everyone represents time, right?)
  • Charge duration is specified as an integer, rather than a digit string as it was in the previous API
Interestingly, looking at the data structure, there's scope here to support multiple charging periods per day: each day has an array of periods against it. I wonder if the Zoe ZE50 might have different onboard software that isn't quite so arse-backwards as the Atos/Worldline system?
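
To illustrate those rules - and only to illustrate them, since I've not confirmed the attribute names, which are pure guesses here - a schedule with one overnight period per day could be built like this:

def build_schedule(start='0030', duration_minutes=180):
    # One charging period per day, for every day of the week.
    # Start is a four-digit HHMM string; duration is an integer number of
    # minutes, at least 15 and a multiple of 15.
    assert duration_minutes >= 15 and duration_minutes % 15 == 0
    days = ['monday', 'tuesday', 'wednesday', 'thursday',
            'friday', 'saturday', 'sunday']
    # Field and day names below are illustrative - inspect a GET /charge-schedule
    # response for the real ones before POSTing anything.
    return {day: [{'startTime': start, 'duration': duration_minutes}]
            for day in days}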

Conclusions

The new API is definitely different, and it's probably better than the old one. It certainly seems a lot faster to respond (it no longer takes several seconds to log in, for one). What it can't do is change the capabilities of the software deployed on vehicles already in the wild - hence the crappy charge scheduler remains, and no doubt people will still be annoyed at the infrequency of the battery state updates, especially on a rapid charger. (Aside: it's possible to tweak the parameters of your car's TCU to increase the frequency at which it reports its state to the server.)

I'm not sure why the login process is quite so convoluted, except that perhaps it needed to be given the constraints of interacting with a third-party authentication gateway (Gigya) and the Alliance Kamereon API which has its own set of requirements. I do feel that requiring three different tokens on each request is a little excessive!

We've lost a few odds and ends, none of which seemed particularly important:
  • It doesn't appear possible to cancel a preconditioning request
  • It doesn't appear possible to request a state update from the car (though it's possible this is now handled behind the scenes, as it was a bit cludgy anyway)
In each case, maybe it is possible, and I've simply not discovered that function yet.

What is clear, though, is that neither of these APIs was designed to be public - the requirement to have an API key for both Gigya and Kamereon makes that apparent, and that's the reason I've not included these keys in this post. If you've read this far, chances are you'll know exactly how and where in the MY Renault APK to find the configuration resource that lists them - you don't even need any specialist software to do so.

Update: Or you could grab the keys from Renault's own website where they've been uploaded in plain text for all to stumble upon.

Also clear is that the scope of these APIs extends far beyond the ZE-specific capabilities. I've not detailed them, as they're very much of secondary interest, but there's information available via the new app on all sorts of things, from the type of fuel your vehicle uses to its service schedule and any optional extras you added when you bought it. I guess that's a natural part of moving to the "MY Renault" platform.

An implementation

The first cut of my CLI tool / thin Python API wrapper is now on GitHub.

Thursday, June 29, 2017

The long weekend: a retrospective

The Le Mans 24 Hours is the world's greatest motor race.

One of the toughest tests in motorsport, the race pits 180 drivers in 60 cars against the 8.5-mile Circuit de la Sarthe, and against the clock, with 24 hours straight of racing through the French countryside.

It's not just the drivers and cars that are put to the test; teams, too, face a battle to stay alert and react to the changing phases of the race. It's an endurance challenge for fans as well - at the track or around the world, staying awake for as long as possible to keep following a race which rarely disappoints in terms of action and (mis)adventure. The 2017 running of the 24 Hours was also something of a technical endurance challenge for this fan in particular...

Several years ago, unhappy with the live timing web pages made available by race organisers the WEC and ACO, I decided to start playing around with the data feed and see what I could come up with. Over the course of the 2015 race, I developed a prototype written in JavaScript that would later start to evolve and grow into something much bigger...

Fast-forward to 2017, and the Live Timing Aggregator was soft-launched before the 6 Hours of Silverstone via /r/wec and the Midweek Motorsport Listeners' Collective on Facebook. Despite having to debug the integration with a new format of WEC data from the grandstands of the International Pit Straight, and the system conking out a few hours into a race held, inconveniently, on my wedding anniversary, feedback was overwhelmingly positive, and a few generous individuals even asked if they could donate money as a thank-you. The money let me move the system away from my existing server (which was becoming increasingly busy with other projects!) and onto a VPS of its own.

Sadly, though, the performance of the new VPS left a lot to be desired. On regular occasions even loopback network connections were dropped, and simply issuing ls could take more than ten seconds to execute, so I decided that for the Big Race an alternative solution would be the safe bet; I took advantage of the AWS free tier to try and minimise my expenditure, and since the system isn't particularly CPU-intensive I didn't feel the restrictions on nano instances would be too arduous.

The AWS setup was ready in time for the Le Mans test day - the first opportunity the racing teams have to run on the full 24 Hours circuit, and the first opportunity for me to test the new setup. In all, over 1,500 unique users visited my timing pages that day, almost three times the previous high-water mark - helped by the inclusion of the per-sector timings that, while included in the official site's data feed, are inexplicably not displayed on the WEC timing pages.


In the following weeks, visitors enjoyed the "replay" functionality, giving them "as-live" timing replays of both test day sessions and the entire 2016 race, plus extensive live timing of a single-seater championship considered a "feeder" series into the WEC. Then into Le Mans week itself - and things started to get a bit nuts.

More and more people had caught word of the timing pages, and I was seeing steady traffic - as well as frequent messages via Twitter and email, some carrying thanks, some feature requests. One of the commentators at a well-known broadcaster even got in touch to say that they had no official timing from the circuit and that my site had made their TV production a whole lot easier! Many of the feature requests were already on my backlog, and there were a few I could sneak in as low-effort (although deployments in race week seem a pretty bad idea in general).

Signs that all was not well were starting to appear - though not at my end. Rather, the WEC timing server seemed to be creaking under the load: instead of updating every ten seconds (itself an age in motorsport terms!) there were five, ten, sometimes fifteen minutes between update cycles. I started researching an alternative data source which, at that stage at least, seemed to be more reliable. The modular nature of the timing system meant that it only took about an hour to get this alternative source (which I badged "WEC beta") up and running. (I ended up running both systems in parallel during the race, and once the WEC systems had calmed down there wasn't much difference between them.)

Peak visitors over practice and qualifying was about the same as for the test day. At this point, I had no idea of what was going to happen on Saturday afternoon...


Then real life interfered. For reasons I couldn't avoid, I had to be out of my house for the start of the race. Not only that, but the place I had to be had no mobile signal, and I ended up missing the first three hours of the race entirely.

I got home to find the world was on fire.

The WEC website had buckled under its own load (later claimed to be a DoS attack), which drove more and more visitors to my timing site. At some point, various processes reached their limits for open file handles. CPU usage had hit 100%, and stayed there. To make it a proper comedy of errors, I'd managed to leave my glasses, and my work laptop, at the friend's house where I'd been at the start of the race - so I could only work by squinting an inch from the screen, or with sunglasses that rendered the monitors very dim. Nevertheless, I persevered...

First task was to get the load on the node under control. I took nginx offline briefly, and upped its allowed file handles. I also restarted the timing data collection process (which can be done while preserving its state). This helped very briefly - but after a few minutes, the number of connections was such that the data collection process itself was losing connection to the routing process, so no timing data could get in or out.

It was then that I had a brainwave - I could shunt the routing process (a crossbar.io instance) onto its own node, reducing the network and CPU load on the webserver and data collection node. I still had the code on the slow VPS, so I just needed to reactivate it, and patch the JavaScript client to connect to the timing network via the VPS rather than the AWS node. I also removed nginx as a proxy to the crossbar process, reducing the overhead - crossbar is capable of managing a large number of connections itself.

It turns out network IO on the VPS is adequate for the task, and over the next hour or so, things started to stabilise. I'd also decided to reduce network load by disabling the analysis portion of the site - which is a shame, as the stint-length and drive-time analyses were written with Le Mans in mind. I'll need to re-architect that portion somewhat, as the pub/sub model has proved to be an expensive one compared to request-response, especially with a large number of cars.

I'm grateful to those on Twitter and Reddit who, at this point, started to encourage me not to forget actually watching and enjoying the race! Thankfully, after another round of file handle limit increases (it turns out that services started by systemd ignore /etc/security/limits.conf and friends, so the limits have to be raised in the unit configuration instead) - and my loving and patient spouse having retrieved my spectacles - I could do just that, only occasionally needing to hit it with a hammer to get it running again.

I also have some ideas for improving how the system behaves under load in future. Separating the data-collection process from the WAMP router was a good idea, but the former can still be squeezed out of connectivity with the latter. Some method of ensuring that "internal" connections are given priority would keep the service performing for those users still connected. Upping the file handle limit and opening Crossbar directly helped increase the concurrent user count - around 10,000 over the course of the race - but a way of spreading that load over multiple nodes is going to be needed to go much beyond that.

The official WEC timekeepers, Al Kamel Systems, publish on their website a "chronological analysis" - a 3MB CSV file containing every lap and sector time set during the race. I wonder what effort will be involved to reconstruct the missing timing data from the first part of the race, into a replay-able format for my site...

Friday, January 30, 2015

I'm racing at Silverstone!

Silverstone Wing and International Pits Straight. © James Muscat


It started, as these things occasionally do, with a dream.

The home of the British Grand Prix. The 3.66-mile ribbon of tarmac graced by the likes of Schumacher, Alonso, Button, and Hamilton, not to mention their many peers and forerunners. The magnificent, high-speed Maggotts/Becketts/Chapel sequence of corners. I'm a huge motorsport fan (some would say 'obsessive'); how could I pass up the opportunity to enter a race at Silverstone?

The only trouble is: I'm not going to be in a car. I'm not even going to be on a bike. I'm running the Silverstone Half Marathon on March 15th, and I'm doing so to raise money for Cancer Research UK.

Back in October, I had a dream in which I was running the Silverstone Half Marathon, and no, I don't know why. The most puzzling part of the dream was that I was running the wrong way around the track when I got to the finish line. I happened to mention this to a serial marathon-running colleague, who looked up the route map to discover that, in fact, the last lap of the race is run against the usual flow of the circuit!

Curious, but not enough to make me sign up for the darn thing. No, that would be the fault of another serial marathon-running colleague, Cristin. You can read her side of the story on her blog.

Both of those colleagues are also running the race with me, and Cristin and I are fundraising together. Please would you take a moment to visit our fundraising page, and sponsor whatever you feel you can?

Another curious part of that dream came after I'd crossed the finish line. I got a celebratory kiss from the girl who, in real life a few days later, would become my girlfriend... but that's another story! Her mum has, after a year-long fight, beaten cancer; that's one of the reasons I chose to raise money for Cancer Research UK. A colleague of ours is also fighting her own fight right now.

If you can, please sponsor us to the chequered flag (there had better be a chequered flag!), and help CRUK continue their work against cancer.

Our fundraising page is at Virgin Money Giving.
Wet GP3 qualifying, Silverstone 2014. © James Muscat