
fleet-telemetry's Introduction


Tesla Fleet Telemetry


Fleet Telemetry is a server reference implementation for Tesla's telemetry protocol. Owners can allow registered applications to receive telemetry securely and directly from their vehicles. This reference implementation can be used by individual owners as is, or by fleet operators who can extend it to aggregate data across their fleet.

The service handles device connectivity as well as receiving and storing transmitted data. Once configured, devices establish a WebSocket connection to push configurable telemetry records. Fleet Telemetry provides clients with ack, error, or rate limit responses.

Configuring and running the service

Setup steps

  1. Create a third-party application on developer.tesla.com.
    • In the "Client Details" step, it is recommended to select "Authorization Code and Machine-to-Machine" for most use cases. "Machine-to-Machine" (M2M) should only be selected for business accounts that own vehicles.
  2. Generate an EC private key using the secp256r1 curve (prime256v1).
    • openssl ecparam -name prime256v1 -genkey -noout -out private-key.pem
  3. Derive its public key.
    • openssl ec -in private-key.pem -pubout -out public-key.pem
  4. Host this public key at: https://your-domain.com/.well-known/appspecific/com.tesla.3p.public-key.pem.
  5. Generate a Certificate Signing Request (CSR).
    • openssl req -out your-domain.com.csr -key private-key.pem -subj /CN=your-domain.com/ -new
  6. Ensure the generated CSR passes check_csr.sh.
    • ./check_csr.sh your-domain.com.csr
  7. Generate a Partner Authentication Token. (docs)
  8. Register your application with Fleet API by sending your domain and CSR to the register endpoint, using the partner authentication token generated in step 7 as a Bearer token. (Hedged sketches of this and related calls appear after this list.)
  9. Wait for Tesla to process your CSR. This may take up to two weeks. Once complete, you will receive an email from Tesla. The generated certificate will not be sent back to you; it is attached to your account on the backend and is used internally when configuring a vehicle to stream data.
  10. Configure your fleet-telemetry server. Full details are described in install steps.
  11. Validate server configuration using check_server_cert.sh.
    • From your local computer, create validate_server.json with the following fields:
      • hostname: the hostname your fleet-telemetry server is available on.
      • port: the port your fleet-telemetry server is available on. Defaults to 443.
      • ca: the full certificate chain used to generate the server's TLS cert/key.
    • ./check_server_cert.sh validate_server.json
  12. Ensure your virtual key has been added to the vehicle you intend to configure. To add your virtual key to the vehicle, redirect the owner to https://tesla.com/_ak/your-domain.com. If using authorization code flow, the owner of the vehicle must have authorized your application with vehicle_device_data scope before they are able to add your key.
  13. Send your configuration to a vehicle. Using a third-party token, send a fleet_telemetry_config request.
  14. Wait for synced to be true when getting fleet_telemetry_config.
  15. At this point, the vehicle should be streaming data to your fleet-telemetry server. If you are not seeing messages come through, call fleet_telemetry_errors.
    • If fleet_telemetry_errors is not yielding any results, please reach out to [email protected]. Include your client ID and the VIN you are trying to set up.
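
For concreteness, the following are hedged sketches of the API calls from steps 7, 8, and 13, and of the validate_server.json file from step 11. The endpoint host and paths mirror the Fleet API examples elsewhere on this page, but the exact scopes, audience values, and request-body shapes are assumptions to verify against the current Fleet API documentation.

# Step 7 (sketch): generate a partner authentication token via the
# client-credentials grant. The scope and audience values are assumptions.
curl --request POST 'https://auth.tesla.com/oauth2/v3/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=client_credentials' \
  --data-urlencode "client_id=$CLIENT_ID" \
  --data-urlencode "client_secret=$CLIENT_SECRET" \
  --data-urlencode 'scope=openid vehicle_device_data' \
  --data-urlencode 'audience=https://fleet-api.prd.na.vn.cloud.tesla.com'

# Step 8 (sketch): register your domain and CSR. jq is used here only to
# JSON-encode the CSR file into the payload; the payload shape is an assumption.
curl --request POST 'https://fleet-api.prd.na.vn.cloud.tesla.com/api/1/partner_accounts' \
  --header "Authorization: Bearer $PARTNER_TOKEN" \
  --header 'Content-Type: application/json' \
  --data "$(jq -n --arg domain 'your-domain.com' --rawfile csr your-domain.com.csr '{domain: $domain, csr: $csr}')"

# Step 11 (sketch): create validate_server.json for check_server_cert.sh.
cat > validate_server.json <<'EOF'
{
  "hostname": "your-domain.com",
  "port": 443,
  "ca": "-----BEGIN CERTIFICATE-----\n...full chain...\n-----END CERTIFICATE-----\n"
}
EOF

# Step 13 (sketch): send a telemetry configuration to a vehicle using a
# third-party token. The body shape and the example field name are
# assumptions; see the client_config.json example referenced later in this page.
curl --request POST 'https://fleet-api.prd.na.vn.cloud.tesla.com/api/1/vehicles/fleet_telemetry_config' \
  --header "Authorization: Bearer $THIRD_PARTY_TOKEN" \
  --header 'Content-Type: application/json' \
  --data '{
    "vins": ["<VIN>"],
    "config": {
      "hostname": "your-domain.com",
      "port": 443,
      "ca": "<pem certificate chain>",
      "fields": { "VehicleSpeed": { "interval_seconds": 10 } }
    }
  }'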

Install on Kubernetes with Helm Chart (recommended)

For ease of installation and operation, run Fleet Telemetry on Kubernetes or a similar environment. Helm Charts help define, install, and upgrade applications on Kubernetes. A reference helm chart is available here.

Install steps

  1. Allocate and assign an FQDN. This will be used in the server and client (vehicle) configuration.

  2. Design a simple hosting architecture. We recommend: Firewall/Loadbalancer -> Fleet Telemetry -> Kafka.

  3. Ensure mTLS connections are terminated on the Fleet Telemetry service.

  4. Configure the server (Helm charts cover some of this configuration)

{
  "host": string - hostname,
  "port": int - port,
  "log_level": string - trace, debug, info, warn, error,
  "json_log_enable": bool,
  "namespace": string - kafka topic prefix,
  "reliable_ack": bool - for use with reliable datastores, recommend setting to true with kafka,
  "monitoring": {
    "prometheus_metrics_port": int,
    "profiler_port": int,
    "profiling_path": string - out path,
    "statsd": { if you are not using prometheus
      "host": string - host:port of the statsd server,
      "prefix": string - prefix for statsd metrics,
      "sample_rate": int - 0 to 100 percentage to sample stats,
      "flush_period": int - ms flush period
    }
  },
  "kafka": { // librdkafka kafka config, seen here: https://raw.githubusercontent.com/confluentinc/librdkafka/master/CONFIGURATION.md
    "bootstrap.servers": "kafka:9092",
    "queue.buffering.max.messages": 1000000
  },
  "kinesis": {
    "max_retries": 3,
    "streams": {
      "V": "custom_stream_name"
    }
  },
  "rate_limit": {
    "enabled": bool,
    "message_limit": int - ex.: 1000
  },
  "records": { // list of records and their dispatchers, currently: alerts, errors, and V(vehicle data)
    "alerts": [
        "logger"
    ],
    "errors": [
        "logger"
    ],
    "V": [
        "kinesis",
        "kafka"
    ]
  },
  "tls": {
    "server_cert": string - server cert location,
    "server_key": string - server key location
  }
}

Example: server_config.json
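
As a concrete illustration of the schema above, a minimal logger-only configuration might look like the following (a sketch; the namespace, ports, and file paths are placeholders):

{
  "host": "0.0.0.0",
  "port": 443,
  "log_level": "info",
  "json_log_enable": true,
  "namespace": "tesla_telemetry",
  "monitoring": {
    "prometheus_metrics_port": 9090,
    "profiler_port": 4269,
    "profiling_path": "/tmp/trace.out"
  },
  "rate_limit": {
    "enabled": true,
    "message_limit": 1000
  },
  "records": {
    "alerts": ["logger"],
    "errors": ["logger"],
    "V": ["logger"]
  },
  "tls": {
    "server_cert": "/etc/certs/server/tls.crt",
    "server_key": "/etc/certs/server/tls.key"
  }
}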

  5. (Manual install only) Deploy and run the server. Get the latest docker image information from our Docker Hub. This can be run as a binary via ./fleet-telemetry -config=/etc/fleet-telemetry/config.json directly on a server, or as a Kubernetes deployment. Example snippet (deployment commands are sketched after the manifests):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fleet-telemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fleet-telemetry
  template:
    metadata:
      labels:
        app: fleet-telemetry
    spec:
      containers:
      - name: fleet-telemetry
        image: tesla/fleet-telemetry:<tag>
        command: ["/fleet-telemetry", "-config=/etc/fleet-telemetry/config.json"]
        ports:
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: fleet-telemetry
spec:
  selector:
    app: fleet-telemetry
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
  type: LoadBalancer
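
Assuming the manifests above are saved as fleet-telemetry.yaml, and that the config file and TLS material are mounted into the pod (for example from a ConfigMap and a Secret, which are not shown above), deployment and a quick sanity check are the usual kubectl invocations:

kubectl apply -f fleet-telemetry.yaml
kubectl get pods -l app=fleet-telemetry
kubectl logs deployment/fleet-telemetry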

Vehicle Compatibility

Vehicles must be running firmware version 2023.20.6 or later. Some older Model S/X vehicles are not supported.

Backends/dispatchers

The following dispatchers are supported:

  • Kafka (preferred): Configure with the config.json file. See implementation here: config/config.go
  • Kinesis: Configure with standard AWS env variables and config files. The default AWS credentials and config files are: ~/.aws/credentials and ~/.aws/config.
    • By default, stream names will be *configured namespace*_*topic_name* ex.: tesla_V, tesla_errors, tesla_alerts, etc
    • Configure stream names directly by setting the streams config "kinesis": { "streams": { *topic_name*: stream_name } }
    • Override stream names with env variables: KINESIS_STREAM_*uppercase topic* ex.: KINESIS_STREAM_V (see the sketch after this list)
  • Google pubsub: Along with the required pubsub config (See ./test/integration/config.json for example), be sure to set the environment variable GOOGLE_APPLICATION_CREDENTIALS
  • ZMQ: Configure with the config.json file. See implementation here: config/config.go
  • Logger: This is a simple STDOUT logger that serializes the protos to json.
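
For example, under the naming rules above, overriding the Kinesis stream for the V topic and pointing the pubsub dispatcher at its credentials could look like this (a sketch; the stream name, region, and path are placeholders):

# Override the Kinesis stream used for the V topic.
export KINESIS_STREAM_V=my_vehicle_data_stream
# Standard AWS SDK configuration is read from the environment or ~/.aws.
export AWS_REGION=us-west-2
# Required for the Google pubsub dispatcher.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json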

NOTE: To add a new dispatcher, please provide integration tests and updated documentation. To serialize dispatcher data as JSON instead of protobufs, set the transmit_decoded_records config option to true, as shown here and sketched below.
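
Based on the note above, the option would sit at the top level of the server config; a minimal sketch:

{
  "transmit_decoded_records": true
}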

Reliable Acks

Fleet telemetry allows you to send ack messages back to the vehicle. This is useful for applications that need to ensure the data was received and processed. To enable this feature, set reliable_ack_sources to one of the configured dispatchers (kafka, kinesis, pubsub, zmq) in the config file. You can only set reliable acks to one dispatcher per recordType. See here for a sample config.
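
A minimal sketch of that setting, mapping the V record type to the kafka dispatcher (verify the exact shape against the linked sample config):

{
  "reliable_ack_sources": {
    "V": "kafka"
  }
}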

Metrics

Configure and use Prometheus or a StatsD-compatible data store for metrics. The integration test runs fleet telemetry with Grafana, which is compatible with Prometheus. It also has an example dashboard which tracks important metrics for the hosted server. Sample screenshot of the example dashboard:

Basic Dashboard
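
If you expose prometheus_metrics_port as in the configuration schema above, a minimal Prometheus scrape job might look like this (a sketch; the target host and port are placeholders):

scrape_configs:
  - job_name: fleet-telemetry
    static_configs:
      - targets: ['fleet-telemetry:9090']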

Logging

To suppress TLS handshake error logging, set the environment variable SUPPRESS_TLS_HANDSHAKE_ERROR_LOGGING to true. See docker compose for an example (sketched below).
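
In a docker compose file this is a plain environment entry; a minimal sketch:

services:
  fleet-telemetry:
    image: tesla/fleet-telemetry:<tag>
    environment:
      - SUPPRESS_TLS_HANDSHAKE_ERROR_LOGGING=true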

Protos

Data is encapsulated into protobuf messages of different types. Protos can be recompiled via:

  1. Install protoc, currently on version 4.25.1: https://grpc.io/docs/protoc-installation/
  2. Install protoc-gen-go: go install google.golang.org/protobuf/cmd/[email protected]
  3. Run make command
make generate-protos

Airbrake

Fleet telemetry allows you to monitor errors using Airbrake. The integration test runs fleet telemetry with Errbit, which is an Airbrake-compliant self-hosted error catcher. You can set a project key for Airbrake using either the config file or the environment variable AIRBRAKE_PROJECT_KEY (example below).
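
For the environment-variable route, a one-line sketch (the key value is a placeholder):

export AIRBRAKE_PROJECT_KEY=your-airbrake-project-key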

Testing

Unit Tests

To run the unit tests: make test

Common Errors:

~/fleet-telemetry➜ git:(main) ✗  make test
go build github.com/confluentinc/confluent-kafka-go/v2/kafka:
# pkg-config --cflags  -- rdkafka
Package rdkafka was not found in the pkg-config search path.
Perhaps you should add the directory containing `rdkafka.pc'
to the PKG_CONFIG_PATH environment variable
No package 'rdkafka' found
pkg-config: exit status 1
make: *** [install] Error 1

librdkafka is missing. On macOS you can install it via brew install librdkafka pkg-config, or follow the instructions at https://github.com/confluentinc/confluent-kafka-go#getting-started

~/fleet-telemetry➜ git:(main) ✗  make test
go build github.com/confluentinc/confluent-kafka-go/v2/kafka:
# pkg-config --cflags  -- rdkafka
Package libcrypto was not found in the pkg-config search path.
Perhaps you should add the directory containing `libcrypto.pc'
to the PKG_CONFIG_PATH environment variable
Package 'libcrypto', required by 'rdkafka', not found
pkg-config: exit status 1
make: *** [install] Error 1

~/fleet-telemetry➜ git:(main) ✗  locate libcrypto.pc
/opt/homebrew/Cellar/openssl@3/3.0.8/lib/pkgconfig/libcrypto.pc

~/fleet-telemetry➜ git:(main) ✗  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/homebrew/Cellar/openssl@3/3.0.8/lib/pkgconfig/

A reference to libcrypto is not set properly. To resolve, find the libcrypto.pc known to pkg-config and set PKG_CONFIG_PATH accordingly.

libzmq is missing. Install with:

sudo apt install -y libsodium-dev libzmq3-dev

Or for macOS:

brew install libsodium zmq

Integration Tests

To run the integration tests: make integration

To log into errbit instances, the default username is [email protected] and the default password is test123.

Building the binary for Linux from Mac ARM64

DOCKER_BUILDKIT=1 DOCKER_CLI_EXPERIMENTAL=enabled docker buildx version
docker buildx create --name go-builder --driver docker-container --driver-opt network=host --buildkitd-flags '--allow-insecure-entitlement network.host' --use
docker buildx inspect --bootstrap
docker buildx build --no-cache --progress=plain --platform linux/amd64 -t fleet-telemetry:local.1.1 -f Dockerfile . --load
container_id=$(docker create fleet-telemetry:local.1.1)
docker cp "$container_id":/fleet-telemetry /tmp/fleet-telemetry

Security and Privacy considerations

System administrators should apply standard best practices, which are beyond the scope of this README.

Moreover, the following application-specific considerations apply:

  • Vehicles authenticate to the telemetry server with TLS client certificates and use a variety of security measures designed to prevent unauthorized access to the corresponding private key. However, as a defense-in-depth precaution, backend services should anticipate the possibility that a vehicle's TLS private key may be compromised. Therefore:
    • Backend systems should sanitize data before using it.
    • Users should consider threats from actors that may be incentivized to submit falsified data.
    • Users should filter by vehicle identification number (VIN) using an allowlist if possible.
  • Configuration-signing private keys should be kept offline.
  • Configuration-signing private keys should be kept in an HSM.
  • If telemetry data is compromised, threat actors may be able to make inferences about driver behavior even if explicit location data is not collected. Security policies should be set accordingly.
  • Tesla strongly encourages providers to only collect data they need, limited to the frequency they need.
  • Providers agree to take full responsibility for privacy risks as soon as data leaves the devices (for more info, read our privacy policies).

fleet-telemetry's Issues

best practice (+ maybe a feature request)

Hi, what is the best way to implement a logger?

For example while driving these fields are important:

  • Location
  • Heading
  • Battery level (SoC)
  • Rated range
  • Autopilot state
  • Gear
  • Speed
  • maybe also TPMS
  • Temperature inside/outside

But experience with the Fleet API shows that even 4 fields at a 10-second interval already exceed the API rate limits.
So, what is the best practice here?

Feature Request:
The client (logger) asks the server for some fields (for example, see above).
The server responds with only the fields that changed since the client's last call.
Gear, Autopilot state, speed, temperatures, battery level, and some other fields don't change most of the time, so usually only location and heading would be returned. That reduces traffic while the data quality stays the same.

How to confirm vehicles are streaming telemetry

We've set up our fleet server, supplied the CSR to the fleet support email and received confirmation, and can now register vehicles (and confirmed the tokens include the fleet URLs in the aud array). But no data or requests ever come to our server.

Wondering if there is anything else we need to do, or any way we can check whether data is even being sent.

Thanks!

Which port is used for telemetry?

Which network port should the telemetry service (load balancer) be listening on?

The setup instructions do not specify a port, and the config endpoint does not mention supplying a port.

I realise that the server allows specifying the listening port in server_config.json, but what port do the vehicles connect to? Is it 443?

Can you include the port in config.hostname in the telemetry config, e.g. "test-telemetry.com:8443"?

Config Sync - Questions/Observations

Thank you for all the hard work making Fleet Telemetry available to the developer community. We have successfully switched our app from polling to fleet telemetry and it is working well.

One thing we have noticed is that it takes a while for a vehicle to start sending data after it has been enrolled via 'fleet_telemetry_config create'. For our test fleet it has taken anywhere from a few hours to almost a full day for the sync to happen.

  • What determines the timing of the vehicle connecting to the backend to receive the target config?
  • Is there any way to force the sync or make the timing more predictable? If not today, are there any plans to improve this in the future?
  • Should telemetry data start as soon as the sync parameter returns true via 'fleet_telemetry_config get'?

One of our test vehicles is a 2018 Model S (Gen 2). We know this vehicle is not (yet) supported by Fleet Telemetry, but we are able to push a config to it (sync flips to true), yet we do not receive telemetry (as expected). Should an error be returned when targeting an unsupported model?

Any insight you can offer is much appreciated. Keep up the great work!

Deployment Fails

Error: buildx failed with: ERROR: failed to solve: fleet-telemetry-integration-tests: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

Likely caused by #38

Kafka partition support for scaling

We have too many vehicles to make 1 partition <> 1 consumer work.

I've implemented a manual solution for multi-partition support, but it'd be really good for others if the partition count was configurable. I looked for a way to do this but didn't find any obvious way.

Total Transfer Rate Too High Question

I keep hitting this error. It's fixable by increasing the interval of some fields, of course, but it would be helpful to know what the total allowed transfer rate is. Has anybody been able to figure out what the limit is or how to determine it? Any answers or resources will go a long way. Thanks

improve POST fleet_telemetry_config error 404 vin not_found for driver

Fleet telemetry was working for me, but now I'm receiving a 404 vin not found when trying to (re)configure it on a vehicle which was working before.

endpoint: POST /api/1/vehicles/fleet_telemetry_config
reply: http 404

{
  response: null,
  error: '<REDACTED VIN> not_found',
  error_description: ''
}

UPDATE: this was caused by attempting to use a Tesla account with DRIVER (not OWNER) access to the vehicle. See comments below for more. Tesla have indicated that this should start working in future.

How to generate and use the mTLS keys and cert?

Can anyone clarify how to generate the keys and certificate required to use fleet telemetry mTLS?

I can't find instructions on generating the keys in this repo.

The vehicle-command repo has instructions for generating a key for TLS for the http proxy, which are:

openssl req -x509 -nodes -newkey ec \
    -pkeyopt ec_paramgen_curve:secp521r1 \
    -pkeyopt ec_param_enc:named_curve  \
    -subj '/CN=localhost' \
    -keyout key.pem -out cert.pem -sha256 -days 3650 \
    -addext "extendedKeyUsage = serverAuth" \
    -addext "keyUsage = digitalSignature, keyCertSign, keyAgreement"

Will that work? Just need to set an appropriate value for subj?

What is the relationship between these fleet-telemetry mTLS keys and the public key registered against our domain for API access? Does the CN value in these keys need to exactly match the domain that was registered? Is the API private/public key pair somehow used in the creation of the mTLS keys?

Fleet Telemetry vs. Fleet API

I'm building a third-party service that needs to pull the location and heading of a user's vehicle once or twice daily. I'm currently calling wake_up and then waiting until the vehicle_data API is available. I've found that waking up is not always reliable and I'm worried that it could be contributing to battery drain. Would the fleet telemetry API be better for my use case?

Also, I notice that TeslaPy has a stream() function, has anyone tried using that?

Drivers can remove fleet-telemetry config

Drivers can send a DELETE request to fleet_telemetry_config and delete the config.

This is strange because only Owners can POST requests to fleet_telemetry_config.

If only owners can create, you would expect only owners to be able to delete.

Installing fleet telemetry on kubernetes

So, I set up a Kubernetes cluster on DigitalOcean, connected to it, and installed helm on my Mac.
So far so good...

Now I went on to continue with the helm documentation:
https://github.com/teslamotors/helm-charts/blob/main/charts/fleet-telemetry/README.md

But on helm install fleet-telemetry teslamotors/fleet-telemetry -n fleet-telemetry --create-namespace I get this error:
Error: INSTALLATION FAILED: execution error at (fleet-telemetry/templates/1-secret.yaml:9:14): tlsSecret.tlsCrt is required

What additional steps am I missing here? Can someone point me in the right direction?

Vehicles use port 443 even when configured otherwise

This morning I received the email saying I was Fleet Telemetry ready.

I created a brand new CA cert using my private key, and issued a certificate to my fleet telemetry server. I then used the fleet_telemetry_config endpoint to install the config on my Model 3. However, the fleet_telemetry_errors endpoint reports: certificate signed by unknown authority

I ran ./check_server_cert.sh conf.json and it returned:

/tmp/tmp.xmvFF6Cnk1: OK
The server certificate is valid.

So it would seem my configuration and Fleet Telemetry server are configured correctly, yet my vehicle does not like my certificate authority.

Question regarding CSR publishing

I submitted a CSR on Feb 7th and I'm still waiting for approval. I attached the application client_id and a short description of the fields I need and why. The CSR has also passed the tools/check_csr.sh script.

  • How long does confirmation usually take?

Integration Tests - Go Packages Reinstalled Every Time

When running integration tests twice in a row, Docker caching should minimize the work performed on the second run.

Current Behavior

Go packages are installed on both runs of the integration test suite. This takes upwards of one minute, wasting valuable developer time.

Expected Behavior

The Dockerfile and integration test script are configured in such a way that dependencies are cached between runs. The second run of the tests is faster since Go packages are not re-downloaded.

How to configure the server keys?

I have a problem receiving vehicle data on my server. I'm not sure if it's a server key config issue or something else.

Anyhow, I'd like to go through the steps I took and the result I'm experiencing, and hopefully you can point me in the right direction:

Steps:

I have followed the guides based on the fleet docs

1. Register domain

I created the keys:

openssl ecparam -name prime256v1 -genkey -noout -out private-key.pem
openssl ec -in private-key.pem -pubout -out com.tesla.3p.public-key.pem

openssl req -new -x509 -key private-key.pem -out client-certificate.pem

I have registered a partner account domain:

I submitted client-certificate.pem as ca and hosted com.tesla.3p.public-key.pem at https://mysubdomain.high-mobility.com/.well-known/appspecific/com.tesla.3p.public-key.pem

curl https://fleet-api.prd.na.vn.cloud.tesla.com/api/1/partner_accounts --data '{...}'

{
    "response": {
        "client_id": '....',
        "domain": "mysubdomain.high-mobility.com",
        "ca": "-----BEGIN CERTIFICATE-----\n....",
        "public_key": "04418....b3c9",
   }
}

2. Distribute key in the car.

I also followed the steps in "Distributing your public key" in the Vehicle Command SDK repo, and the vehicle lists my key as my "Fleet Key".

Question

So the question I have is: how do I configure server_config.json?

This is my current config:

{
  "host": "0.0.0.0",
  "port": 443,
....
  "records": {
   ...
   "V": ["logger", "kafka"]
  },
  "tls": {
      "server_cert": "/etc/certs/server/tls.crt",
      "server_key": "/etc/certs/server/tls.key"
  }
}

the tls.crt and tls.key are valid certificates I obtained for this domain from GlobalSign.

However, it doesn't work. There is no data in the log nor in Kafka; the LB is receiving many requests while I'm driving the car, but all I see in the fleet-telemetry log is TLS errors.

Did I configure the server with the wrong keys?

Notes

  1. I can confirm the TLS terminates on my fleet-telemetry instance
  2. I can verify the certificates are working because running
openssl s_client -connect mysubdomain.high-mobility.com:443 -servername mysubdomain.high-mobility.com -showcerts 

CONNECTED(00000006)
depth=2 OU = GlobalSign Root CA - R3, O = GlobalSign, CN = GlobalSign
verify return:1
depth=1 C = BE, O = GlobalSign nv-sa, CN = GlobalSign RSA OV SSL CA 2018
verify return:1
....
....
issuer=C = BE, O = GlobalSign nv-sa, CN = GlobalSign RSA OV SSL CA 2018
---
Acceptable client certificate CA names
CN = Tesla Issuing CA, O = Tesla Motors, L = Palo Alto, ST = California, C = US
CN = Tesla Motors GF Austin Product Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors GF Berlin Product Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors GF0 Product Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors GF3 Product Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors GF3 Product RSA Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors Product Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors Product RSA Issuing CA, OU = Motors, OU = PKI, O = Tesla Inc., C = US
CN = Tesla Motors Products CA
CN = Tesla Motors Root CA
CN = Tesla Policy CA, O = Tesla Motors, L = Palo Alto, ST = California, C = US
CN = Tesla Product RSA Root CA, OU = PKI, O = Tesla, C = US
CN = Tesla Product Root CA, OU = PKI, O = Tesla, C = US
CN = Tesla Root CA, O = Tesla Motors, L = Palo Alto, ST = California, C = US
Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 4798 bytes and written 426 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
Server public key is 2048 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
402B4345F87F0000:error:0A000412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:ssl/record/rec_layer_s3.c:1586:SSL alert number 42

Enable GitHub Discussions for fleet-telemetry?

Would it make sense to enable Discussions for the repo so we can use that to open conversations, rather than attaching them to emails or an open issue? It's under Repo -> Settings -> Features [ Discussions ]. It would provide a nice discussion area. Thanks for considering.

Which field provides the ChargingState?

Hi there,

I'm trying to retrieve the ChargingState from the telemetry data. I assumed this was returned in the ChargeState.

I assumed that ChargeState had type charging_value, so I could check e.g. for ChargeStateDisconnected (enum value 1).

However, ChargeState only returns a string_value, which is "Idle" whenever the vehicle is not charging.

Help is appreciated, thanks in advance!

Support older Model S/X

Is it on the roadmap to support older Model S/X?

If not, is it because of the missing virtual key (bluetooth) on older Model S/X?

Telemetry FQDN the same as fleet API domain?

Should the FQDN for fleet telemetry (as supplied in client_config.json) be the same as the domain supplied when registering a public key for vehicle pairing (virtual keys for signed commands) in the fleet API?

For example, our fleet API domain (for virtual keys) is tesla.chargehq.net.

Should (or MUST?) our FQDN for fleet telemetry be tesla.chargehq.net? Or can we use a different subdomain, e.g. telemetry.chargehq.net?

domain, region & server question

Hi,

I've only just sent my csr for approval so I haven't setup my server yet but have a couple of questions please.

  1. If I want to separate my prod vehicles from test vehicles (where test vehicle data is streamed to server A and prod vehicle data is streamed to server B), can I use the same 'Application' to do this?

  2. Can the same certificate for ourdomain.com be registered at both regional endpoints (I have test vehicles in both North US and European regions) and can I stream data from both regions to the same server?

  3. I am likely to need to introduce at least one more domain, but possibly more. Do I need a separate 'Application' for each domain, and would I be able to stream the data for multiple domains to the same server (e.g. ourdomain.com & theirdomain.com both streaming to server A)?

Thank you

Validation failed: Csr is not a valid CSR

After following the installation instructions, including the check_csr, I get:
{"response":null,"error":"Validation failed: Csr is not a valid CSR","error_description":"","txid":"8ebdc3bed961facfc62dba400a988185"}

tools/check_csr returns:

CSR: elmert.com.csr
Host: www.elmert.com
Public keys are matching

I've regenerated the csr, but no luck.

Fix client_info_parse_error

We're seeing client_info_parse_error reported on new connections. This looks like a bug related to how the certificate is stored in the context.

TLS handshake error from 100.94.115.106:21975: EOF

Experiencing TLS Handshake Error

I've successfully deployed the Fleet Telemetry Service on EKS using the Helm Chart, and I've generated self-signed certificates for both the server and client using the same Certificate Authority (CA).
Despite a seemingly correct setup, I'm encountering a TLS handshake error when checking the POD logs.

2023/11/15 22:48:22 http: TLS handshake error from 100.94.115.106:38807: EOF
2023/11/15 22:48:23 http: TLS handshake error from 100.94.115.106:14949: EOF
2023/11/15 22:48:24 http: TLS handshake error from 100.94.115.106:27763: EOF
2023/11/15 22:48:26 http: TLS handshake error from 100.94.115.106:53347: EOF
2023/11/15 22:48:27 http: TLS handshake error from 100.94.115.106:29178: EOF
2023/11/15 22:48:28 http: TLS handshake error from 100.94.115.106:23737: EOF
2023/11/15 22:48:28 http: TLS handshake error from 100.94.115.106:20794: EOF
2023/11/15 22:48:29 http: TLS handshake error from 100.94.115.106:64515: EOF
2023/11/15 22:48:29 http: TLS handshake error from 100.94.115.106:64614: EOF
2023/11/15 22:48:30 http: TLS handshake error from 100.94.115.106:7163: EOF
2023/11/15 22:48:31 http: TLS handshake error from 100.94.115.106:2937: EOF
2023/11/15 22:48:32 http: TLS handshake error from 100.94.115.106:64564: EOF
2023/11/15 22:48:32 http: TLS handshake error from 100.94.115.106:21975: EOF

Could you provide guidance on troubleshooting this step?

Confusion about Server Setup

Hi there,

We have successfully started using the 3rd party API, and it's working well. Now we need to start streaming data, so we need to set up fleet-telemetry as well. We have completed the CSR steps, as required for the 3rd party API as well. The key is hosted at tesla.ourdomain.com/.well-known/appspecific/com.tesla.3p.public-key.pem

I have some prior experience with K8s, but I'm finding the steps to set up the server a little confusing. If I want to try setting this whole service up using a minikube cluster to make sure everything is sorted, how do I go about doing that? Is that possible at all? I have installed the helm charts, but I'm getting a http://10.244.0.3:8080/status 10.244.0.3:8080: connect: connection refused error from the dashboard. What service is supposed to be hosted on port 8080 anyway?

I see that the README was updated to include more details on the previous steps, but I'd appreciate it if you could clarify/add more details for the actual server part.

For example, what is the docker-compose.yml file for? Is it possible to test this locally at all, or would we have to set up all the cloud infra first using K8s? Would appreciate some pointers!

letsencrypt tls server cert error: unable to verify the first certificate

Hi @patrickdemers6. I've followed your guide and generated a cert for mTLS using certbot / Let's Encrypt. The certificate has been delivered and loaded by the fleet-telemetry server; however, it fails the first step of the check_server_cert.sh tool.

When running this step manually I get the following output:

jay@Jays-MacBook-Pro fleet-telemetry % openssl s_client -connect "tesla.chqtest.net:443" -servername "tesla.chqtest.net" -showcerts 2>/dev/null            
CONNECTED(00000005)
---
Certificate chain
 0 s:CN=tesla.chqtest.net
   i:C=US, O=Let's Encrypt, CN=R3
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: RSA-SHA256
   v:NotBefore: Mar  2 00:41:56 2024 GMT; NotAfter: May 31 00:41:55 2024 GMT
-----BEGIN CERTIFICATE-----
MIIEJDCCAwygAwIBAgISAzTarMvBl9d/8+FWEIMiGfb8MA0GCSqGSIb3DQEBCwUA
MDIxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQD
EwJSMzAeFw0yNDAzMDIwMDQxNTZaFw0yNDA1MzEwMDQxNTVaMBwxGjAYBgNVBAMT
EXRlc2xhLmNocXRlc3QubmV0MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEeFFA
gnqtpE3oXAS0pZsJdw3xIpneD0ISiwOT1KnpMxeha2nQTlmuUrS2YAqmxJ4H+qfm
ExUlTyKjyuq1NZIIEKOCAhMwggIPMA4GA1UdDwEB/wQEAwIHgDAdBgNVHSUEFjAU
BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU/0kK
h0lZN1Q3PI6zMwgD97zw2NAwHwYDVR0jBBgwFoAUFC6zF7dYVsuuUAlA5h+vnYsU
wsYwVQYIKwYBBQUHAQEESTBHMCEGCCsGAQUFBzABhhVodHRwOi8vcjMuby5sZW5j
ci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6Ly9yMy5pLmxlbmNyLm9yZy8wHAYDVR0R
BBUwE4IRdGVzbGEuY2hxdGVzdC5uZXQwEwYDVR0gBAwwCjAIBgZngQwBAgEwggEE
BgorBgEEAdZ5AgQCBIH1BIHyAPAAdQA7U3d1Pi25gE6LMFsG/kA7Z9hPw/THvQAN
LXJv4frUFwAAAY380zQEAAAEAwBGMEQCIHiQSBd4Uh3om8ODlB8EVD2L378nihTI
ZDdpNtVPhSwWAiBAiiRcU5m7aMHygQ6HIYJXGw/0MrEGGu76OF3u7ba0WQB3AHb/
iD8KtvuVUcJhzPWHujS0pM27KdxoQgqf5mdMWjp0AAABjfzTNAoAAAQDAEgwRgIh
APD34ni4V8lkkUlD7COnM/tR6GIIb7M+1/74EylzChMoAiEArda8ZwkKtavOOfWf
3oCy8Iam3TzbWTjix8u6tbFRFxQwDQYJKoZIhvcNAQELBQADggEBAEJywcL+IQES
geQLSNK2MV7Ib+HVgv13Yoarg+JckpPJ7wgn+JDmZGgeJnwBg5BJkb8XOKPJF9q0
ryyGTOemXceeskiKiweV8QNYOBnJiwU2kNH6D+O8ZBSG+0f+XmvZTJ0z/NcwzOTN
WvyJjO+I6xw9II0750HcoUQVJWvSHQVotZUlOCumr2+eqLsXk1Ki4XsVX5AcG1IN
fK4MuZGk6uJijGK881Dt22jCVP7N0ZUuUDFFXJ5zzB54+Dzt2wj/AQS1sd9ZBXz8
eKvCYKJIW8KF7HE2E7C9rlvxtYpqh2z6DcZaumOMMckktyvJIbWkTgAYqk7RAuJT
hwsonBfIV1A=
-----END CERTIFICATE-----
---
Server certificate
subject=CN=tesla.chqtest.net
issuer=C=US, O=Let's Encrypt, CN=R3
---
Acceptable client certificate CA names
CN=Tesla Issuing CA, O=Tesla Motors, L=Palo Alto, ST=California, C=US
CN=Tesla Motors GF Austin Product Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors GF Berlin Product Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors GF0 Product Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors GF3 Product Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors GF3 Product RSA Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors Product Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors Product RSA Issuing CA, OU=Motors, OU=PKI, O=Tesla Inc., C=US
CN=Tesla Motors Products CA
CN=Tesla Motors Root CA
CN=Tesla Policy CA, O=Tesla Motors, L=Palo Alto, ST=California, C=US
CN=Tesla Product RSA Root CA, OU=PKI, O=Tesla, C=US
CN=Tesla Product Root CA, OU=PKI, O=Tesla, C=US
CN=Tesla Root CA, O=Tesla Motors, L=Palo Alto, ST=California, C=US
Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2865 bytes and written 419 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
Server public key is 256 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 21 (unable to verify the first certificate)
---

The exact same output is produced on a Linux server.

The fleet telemetry server is running at tesla.chqtest.net:443 so you can confirm for yourself. The server is Docker Hub image tesla/fleet-telemetry:v0.1.11

Any help would be appreciated! Cheers.

HTTP server panics

I'm running docker image tesla/fleet-telemetry:v0.1.13

After adding the vehicle config via the api/1/vehicles/:id/fleet_telemetry_config endpoint, I see this error in the fleet-telemetry logs:

{"activity":true,"context":"fleet-telemetry","level":"info","method":"GET","msg":"request_start","remote_ip":"192.168.130.103:12842","time":"2024-04-04T16:36:38Z","urlPath":"/","uuid":"fdf28b05-4b08-4a77-86b4-989dd0b8fcdb"}
{"X-Forwarded-For":"","activity":true,"context":"fleet-telemetry","level":"info","method":"GET","msg":"socket_connected","network_interface":"","path":"/","time":"2024-04-04T16:36:38Z","txid":"ba87938c-e739-427e-8908-f3fa99c8d26c","user_agent":"Hermes/1.16.2 (vehicle_device)"}
{"activity":true,"context":"fleet-telemetry","duration_sec":0,"level":"info","msg":"socket_disconnected","time":"2024-04-04T16:36:39Z","total":"0"}
2024/04/04 16:36:39 http: panic serving 192.168.130.103:12842: runtime error: invalid memory address or nil pointer dereference
goroutine 92 [running]:
net/http.(*conn).serve.func1()
	/usr/local/go/src/net/http/server.go:1854 +0xbf
panic({0x1304fe0, 0x1f3bed0})
	/usr/local/go/src/runtime/panic.go:890 +0x263
github.com/beefsack/go-rate.(*RateLimiter).Try(0xc000431ae0)
	/go/pkg/mod/github.com/beefsack/[email protected]/rate.go:55 +0xe8
github.com/teslamotors/fleet-telemetry/server/streaming.(*SocketManager).ProcessTelemetry(0xc00028f200, 0xc00028f200?)
	/go/src/fleet-telemetry/server/streaming/socket.go:198 +0x23a
github.com/teslamotors/fleet-telemetry/server/streaming.(*Server).ServeBinaryWs.func1({0x1738090?, 0xc0000c2460?}, 0xc0000be800)
	/go/src/fleet-telemetry/server/streaming/server.go:102 +0x29a
net/http.HandlerFunc.ServeHTTP(0x0?, {0x1738090?, 0xc0000c2460?}, 0x0?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
net/http.(*ServeMux).ServeHTTP(0x1314880?, {0x1738090, 0xc0000c2460}, 0xc0000be800)
	/usr/local/go/src/net/http/server.go:2500 +0x149
github.com/teslamotors/fleet-telemetry/server/streaming.serveHTTPWithLogs.func1({0x1738090, 0xc0000c2460}, 0xc0000be800)
	/go/src/fleet-telemetry/server/streaming/server.go:73 +0x391
net/http.HandlerFunc.ServeHTTP(0x0?, {0x1738090?, 0xc0000c2460?}, 0x50794e?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
net/http.serverHandler.ServeHTTP({0xc0002fe630?}, {0x1738090, 0xc0000c2460}, 0xc0000be800)
	/usr/local/go/src/net/http/server.go:2936 +0x316
net/http.(*conn).serve(0xc0003ae870, {0x1738bd8, 0xc00055e780})
	/usr/local/go/src/net/http/server.go:1995 +0x612
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:3089 +0x5ed
{"context":"fleet-telemetry","level":"debug","msg":"return_stop_chan","time":"2024-04-04T16:36:39Z"}
{"context":"fleet-telemetry","level":"debug","msg":"writer_done","time":"2024-04-04T16:36:39Z"}

If I check the /api/1/partner_accounts/fleet_telemetry_errors endpoint, I see this error:

            {
                "created_at": "2024-04-04T16:47:45.308654633Z",
                "error": "\"webconnection error: read tcp 192.168.0.252:57868->99.80.125.251:443: i/o timeout\" cm_type=stream",
                "error_name": "cloud_manager_error",
                "hostname": "tesla-telemetry.mydomain.com",
                "name": "7dc3946c846c-42bc-a2fc-ec2c1117a1f7",
                "port": "443",
                "txID": "add6841e-f9c2-4ed1-8f89-eb90c8a0b325",
                "vin": "....."
}

Has anyone else faced this issue? Could it be a misconfiguration?


For reference, the server's config is:

    {
      "host": "0.0.0.0",
      "port": 443,
      "log_level": "trace",
      "json_log_enable": true,
      "namespace": "tesla.original-vehicle-data.v1",
      "reliable_ack": true,
      "monitoring": {
        "prometheus_metrics_port": 9090,
        "profiler_port": 4269,
        "profiling_path": "/tmp/trace.out"
      },
      "rate_limit": {
        "enabled": false
      },
      "records": {
        "alerts": [
            "logger"
        ],
        "errors": [
            "logger"
        ],
        "V": [
            "logger"
        ]
      },
      "tls": {
        "server_key": "/etc/ssl-tesla/key.key",
        "server_cert": "/etc/ssl-tesla/key.crt"
      },
      "ca": "-----BEGIN CERTIFICATE-----\n...<content of full certificate chain file> ==\n-----END CERTIFICATE-----\n"
    }

How to?

I'm not familiar with installing and managing a Kubernetes server, and I would like to use DigitalOcean for this.

Where can I find some good and easy to understand information on the topic?

Vehicle data enum definitions

The Field enums for the proto vehicle data lack definitions. This is hard for those of us who use this data; we need to understand it to work with it correctly.

Please add descriptions to these enums. It will help everyone who wants to use this data but isn't as close to it as the maintainers.

I tried to fix it, with guesses and GPT4's help. I'll send a pull request. Maybe it's a start.

P.S. Thanks for making great products; my Model S is the greatest thing I have ever owned 😊

Server should be stable without metrics configured

Running the server without a monitoring block in the configuration causes a panic:
ex.:

    "monitoring": {
        "prometheus_metrics_port": 9273,
        "profiler_port": 4269,
        "profiling_path": "/tmp/trace.out"
    },

Fields not matching the fields in vehicle_data endpoint

Hi, I noticed that the fields that are accessible in the Payloads in this project do not cover all fields that are documented to be available for example in the vehicle_data endpoint. Especially for our use case, having access to the steering wheel heater is essential. Will fields like these be added, or is there some fundamental difference in the type of data that will be available via fleet-telemetry and the vehicle_data endpoints?

When creating a third-party token on behalf of a client, a server_error response was received.

Greetings.
We recently set up a 3rd party account in the Fleet API and tried to go through the authentication procedure to generate a client token, but encountered a problem at the stage of exchanging the authentication code for the access-token and refresh-token pair.

The response
{ "error": "server_error", "error_description": "Internal server error" }

Could this be because we are not planning to generate a partner authentication token, and hence skipped the step of generating the public/private key pair for signing commands?

Note: we are using deeplink as redirect_uri

And an additional question: what is the purpose of redirect_uri in post /token endpoint? Per RFC 6749 there is no mention of POST /token requiring redirects.

Regards.

some vehicle data questions

  1. How can I determine the total battery capacity from the vehicle data?
  2. How can I obtain the energy consumption information?
  3. How can I distinguish between the new Model 3 and the old Model 3?
  4. How can I differentiate between the long-range version and the standard version?

Fleet Telemetry doesn't work (anymore)

I received messages yesterday, but today no messages seem to come through. Maybe it is related to the port feature:
#114 (comment)

https://fleet-api.prd.eu.vn.cloud.tesla.com/api/1/vehicles/fleet_telemetry_config
with port set to 4445 responds with:

{
	"response": {
		"updated_vehicles": 1
	}
}

Car is online.

https://fleet-api.prd.eu.vn.cloud.tesla.com/api/1/partner_accounts/fleet_telemetry_errors
shows just old errors from yesterday.

Docker up and running

2024/02/27 11:44:09 maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined
time="2024-02-27T11:44:09Z" level=info msg=starting
time="2024-02-27T11:44:09Z" level=info msg=status_server_configured
time="2024-02-27T11:44:09Z" level=info msg=profiler_started
time="2024-02-27T11:44:09Z" level=info msg="registered kafka for namespace: tesla_telemetry"

http://localhost:8080/status

ok
BTW: it would be nice to see some more info, like connected cars...

If I connect from a browser, I can see a connection being declined:

2024/02/27 12:09:40 http: TLS handshake error from 78.43.56.144:26896: EOF
2024/02/27 12:09:40 http: TLS handshake error from 78.43.56.144:26897: EOF
2024/02/27 12:09:41 http: TLS handshake error from 78.43.56.144:26898: tls: client didn't provide a certificate

What else can I check?

How to "share a vehicle configuration with Tesla"?

Great project!

The README is pretty self-explanatory, but step 6 is a bit vague:

fleet-telemetry/README.md, lines 114 to 127 in 2cb04c3:

6. Create and share a vehicle configuration with Tesla
```
{
"hostname": string - server hostname,
"ca": string - pem format ca certificate(s),
"fields": { map of field configurations
name (string) -> {
"interval_seconds": int - data polling interval in seconds
}...
},
"alert_types": [ string list - alerts audiences that should be pushed to the server, recommendation is to use only "service" ]
}
```
Example: [client_config.json](./examples/client_config.json)

How can this config be shared? Is there some site where interested service providers (and hopefully individuals?) can sign up/upload the config?

Fleet Telemetry Server is started but no data is received

Moved this out of #122 and created new issue

@patrickdemers6 Update: I was able to fix the certificate issue. Using the acme.sh Bash client (https://github.com/acmesh-official/acme.sh/wiki/Issue-a-cert-from-existing-CSR), I generated the certificates.

And now the server is running. Where do I check the logs?
Here is what I got when running docker compose up. I have already created a telemetry config for my Tesla vehicle with the full-chain cert, but no telemetry data is received by the server.

$ docker-compose up
[+] Running 1/0
✔ Container fleetfiles-app-1 Created 0.0s
Attaching to app-1
app-1 | 2024/03/13 19:35:12 maxprocs: Leaving GOMAXPROCS=1: CPU quota undefined
app-1 | time="2024-03-13T19:35:12Z" level=info msg=config_skipping_empty_metrics_provider
app-1 | time="2024-03-13T19:35:12Z" level=info msg=starting
And there are no telemetry errors from the fleet_telemetry_errors endpoint:

{
"response": {
"fleet_telemetry_errors": []
}
}

Still, check_server_cert.sh fails for the cert that I got from the acme client.

/tmp/tmp.FMNmhcx9pU: CN = <domain name>
error 20 at 0 depth lookup:unable to get local issuer certificate
/tmp/tmp.FMNmhcx9pU: CN = <domain name>
error 20 at 0 depth lookup:unable to get local issuer certificate
The server certificate is invalid.

How is fleet telemetry enabled on a vehicle?

How is fleet telemetry enabled on a particular vehicle for a particular third party app?

Is there a step that the driver/owner needs to perform?

Does it build upon the Third Party App permission mechanism?

At what point is the third party client_config.json installed and activated on the vehicle?

We still haven't supplied our client_config.json to Tesla as there are no instructions as per #41.

Thanks!

How to send commands to a vehicle?

I was wondering if it will soon be possible to send commands to the vehicle via fleet-telemetry, for example to open or close doors or to enable engine start.

Is there a plan for when this will be available?

Thank you for the help!

README example command lines for openssl do not match

In the "Setup Steps" section, step 2 creates a file called "private-key.pem", but in step 5, the command references "private_key.pem". Copy/pasting these commands yields an error because of the mismatched file name.

Question regarding CSR Validation

We submitted a CSR back in December and are awaiting approval. Getting telemetry data is critical to the future of our product. Suspecting we'd done something incorrectly, we tried to validate the CSR using the new check_csr.sh script.

We have our public key registered at our root domain but have issued the CSR with the fleet telemetry endpoint as the common name.

The fleet telemetry endpoint is being hosted on a sub-domain (i.e. tesla.<rootdomain>.com), whereas the public key is hosted at the root: https://<rootdomain>.com/.well-known/appspecific/com.tesla.3p.public-key.pem

This causes check_csr.sh to fail as it reflects on the CN from the CSR to try and pull the public key from https://tesla.<rootdomain>.com

Does the CSR need to be supplied with the root domain as the common name, or does it need to be supplied with the eventual sub-domain for the fleet telemetry server?

If it turns out the root domain is what is expected, will it still work to have fleet telemetry hosted on a sub-domain?
