
p4runtime's Introduction

P4Runtime Specification

This directory contains protobuf files, specifications and related artifacts for all versions of the P4Runtime API. Documentation and protobuf definitions are placed into two distinct top-level directories. In each of these directories, files are organized based on the P4Runtime major version number (e.g. v1) as follows:

.
├── docs
│   └── v1  # documentation for P4Runtime v1
├── proto
│   └── p4
│       ├── config
│       │   └── v1  # p4.config.v1 protobuf package (P4Info message definition)
│       └── v1  # p4.v1 protobuf package (P4Runtime service definition)

Git tags are used to mark minor and patch release versions.

Reading the latest version of the documentation

The latest version of the P4Runtime v1 specification is available:

  • here in HTML format
  • here in PDF format

It is updated every time a new commit is pushed to the main branch.

Overview

P4 is a language for programming the data plane of network devices. The P4Runtime API is a control plane specification for controlling the data plane elements of a device or program defined by a P4 program. This repository provides a precise definition of the P4Runtime API via protobuf (.proto) files and accompanying documentation. The target audience for this includes system architects and developers who want to write controller applications for P4 devices or switches.

Community

  • Meetings: the P4.org API Working Group meets every other Friday at 9:30AM (Pacific Time). Please see the P4 Working Groups Calendar for meeting details.
  • Email: join our mailing list to receive announcements and stay up-to-date with Working Group activities.
  • Slack: ask to join the P4 Slack Workspace to get (or provide!) interactive help.

Compiling P4Runtime Protobuf files

Build Using Docker

You can use Docker to run the protoc compiler on the P4Runtime Protobuf files and generate the Protobuf & gRPC bindings for C++, Python and Go:

docker build -t p4runtime -f codegen/Dockerfile .
docker run -v <OUT>:/out/ -t p4runtime /p4runtime/codegen/compile_protos.sh /out/

This will generate the bindings in the local <OUT> directory. You need to provide the absolute path for <OUT>. The default Docker user is root, so you may need to change the permissions manually for the generated files after the docker run command exits.

These commands are the ones used by our CI system to ensure that the Protobuf files stay semantically valid.

Build Using Bazel

The protobufs can also be built using Bazel. The Bazel WORKSPACE and BUILD files are located in the proto folder.

To build, run

cd proto && bazel build //...

We run continuous integration to ensure this works with the latest version of Bazel.

For an example of how to include P4Runtime in your own Bazel project, see bazel/example.

Modification Policy

We use the following processes when making changes to the P4Runtime specification and associated documents. These processes are designed to be lightweight, to encourage active participation by members of the P4.org community, while also ensuring that all proposed changes are properly vetted before they are incorporated into the repository and released to the community.

Core Processes

  • Only members of the P4.org community may propose changes to the P4Runtime specification, and all contributed changes will be governed by the Apache-style license specified in the P4.org membership agreement.

  • We will use semantic versioning to track changes to the P4Runtime specification: major version numbers track API-incompatible changes; minor version numbers track backward-compatible changes; and patch version numbers track backward-compatible bug fixes. The P4Runtime Working Group co-chairs will typically batch multiple changes into a single release, as appropriate.

Detailed Processes

We now identify detailed processes for three classes of changes. The text below refers to key committers, a GitHub team that is authorized to modify the specification according to these processes.

  1. Non-Technical Changes: Changes that do not affect the definition of the API can be incorporated via a simple, lightweight review process: the author creates a pull request against the specification that a key committer must review and approve. The P4Runtime Working Group does not need to be explicitly notified. Such changes include improvements to the wording of the specification document, the addition of examples or figures, typo fixes, and so on.

  2. Technical Bug Fixes: Any changes that repair an ambiguity or flaw in the current API specification can also be incorporated via the same lightweight review process: the author creates a GitHub issue as well as a pull request against the specification that a key committer must review and approve. The key committer should use their judgment in deciding if the fix should be incorporated without broader discussion or if it should be escalated to the P4Runtime Working Group. In any event, the Working Group should be notified by email.

  3. API Changes: Any change that substantially modifies the definition of the API, or extends it with new features, must be reviewed by the P4Runtime Working Group, either in an email discussion or a meeting. We imagine that such proposals would go through three stages: (i) a preliminary proposal with text that gives the motivation for the change and examples; (ii) a more detailed proposal with a discussion of relevant issues, including the impact on existing programs; (iii) a final proposal accompanied by a design document, a pull request against the specification, a prototype implementation on a branch of p4runtime, and example(s) that illustrate the change. After approval, the author would create a GitHub issue as well as a pull request against the specification that a key committer must review and approve.

When updating the Protobuf files in a pull request, you will also need to update the generated Go and Python files, which are hosted in this repository under go/ and py/. This can be done easily by running ./codegen/update.sh, provided docker is installed and your user is part of the "docker" group (which means that the docker command can be executed without sudo).

Use generated P4Runtime library

Go

To include the P4Runtime Go library in your project, add this repository URL to your go.mod file, for example:

module your_module_name

go 1.13

require (
  github.com/p4lang/p4runtime v1.3.0
)

Python

To install the P4Runtime Python library, use the pip3 command:

pip3 install p4runtime
# Or specify the version
pip3 install p4runtime==1.3.0
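
To sanity-check the installation, the generated Protobuf and gRPC modules should be importable from the p4 package (module paths below follow the repository's py/ layout). A minimal sketch; the server address is an example, and a real Write also requires stream arbitration:

import grpc

from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc

# Build a (still empty) WriteRequest using the generated bindings.
request = p4runtime_pb2.WriteRequest()
request.device_id = 1
request.election_id.low = 1

# Connect to a P4Runtime server and create a client stub.
channel = grpc.insecure_channel("localhost:50051")
stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)
# stub.Write(request)  # uncomment once the request carries real updates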

Guidelines for using Protocol Buffers (protobuf) in backwards-compatible ways

P4Runtime generally follows "Live at Head" development principles - new development happens on the main branch and there are no support branches. New releases are periodically created from the head of main.

P4Runtime follows semantic versioning for release numbering, which means changes to the P4Runtime protobuf definitions have implications on the next release number. The team has tried its best so far to avoid a major version number bump, but recognizes that one may be necessary in the future.

Whenever possible, it is best to introduce new functionality in backward-compatible ways. For example, when role config was introduced, an unset (empty) role configuration was defined to imply full pipeline access, which was the default behavior before the feature was introduced.

There are no strict rules here for updating P4Runtime protobuf message definitions, only advice from those with experience extending protobuf-based applications over time. It is provided here for learning and reference:

Some brief points, but not the full story:

  • Do not change or reuse field numbers.
  • Be careful when changing types.
  • You can deprecate fields, but do not remove them (and make sure that you continue to support them) until you are sure that all clients and servers are updated.

p4runtime's Issues

Specify the error code returned for full tables

It seems that right now P4Runtime does not specify the error code that should be returned when the controller attempts to insert an entry into a full table. I propose we use RESOURCE_EXHAUSTED.
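
On the client side, adopting RESOURCE_EXHAUSTED would make capacity failures easy to distinguish from other write errors. A rough Python sketch, assuming the server follows this proposal:

import grpc

def insert_entry(stub, write_request):
    # Sketch only: assumes the server returns RESOURCE_EXHAUSTED when the
    # target table is full, as proposed in this issue.
    try:
        stub.Write(write_request)
    except grpc.RpcError as e:
        if e.code() == grpc.StatusCode.RESOURCE_EXHAUSTED:
            raise RuntimeError("table is full") from e
        raise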

Ability to query for agent capabilities

We need to be able to query the P4 Runtime for Agent capabilities, especially for those which are optional. For example, rollback-on-failure and dataplane-atomic are optional; a target may support some batches in an atomic way but not others.

A controller could query the agent and determine what it supports, and potentially exploit enhanced capabilities which optimize the runtime interface. Conversely, the controller can avoid trying unsupported operations which will fail. Finally, this provides a tighter "contract" which can be validated by test harnesses which read the capabilities and test against them.

Let's accumulate a strawman list of capabilities so we can brainstorm how to represent them. I'll seed the list with:

  • api_version - the P4 Runtime version the agent currently implements
  • agent_capabilities[] - a list of capabilities, consisting of optional features as well as other TBD attributes. Should be easy to extend.

Extra credit and headaches:

  • api_versions[] - a list of the versions of the P4 Runtime API which the agent can support. This allows controllers and agents to evolve independently while still providing for compatibility. A controller could support one or more versions, and likewise the agent could support one or more versions. Negotiation could establish the best match after connecting. One or the other party might only support a single version, which would dictate one possible choice (or none if there wasn't a match). A minimal version query is sketched below.
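
For the api_version part, the p4.v1 service defines a Capabilities RPC that reports the P4Runtime API version implemented by the server; anything like agent_capabilities[] would still need new fields. A minimal query sketch in Python:

import grpc

from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc

# Ask the agent which P4Runtime API version it implements.
channel = grpc.insecure_channel("localhost:50051")
stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)
response = stub.Capabilities(p4runtime_pb2.CapabilitiesRequest())
print("agent implements P4Runtime", response.p4runtime_api_version)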

Does P4Runtime define a way for controller to configure timeout value for tables with idle timeout notification?

If not, should it?

I have a half-memory that I may have created an issue on this topic before. I do recall Antonin saying that P4Runtime as defined today does not (yet) have a way to refer to properties of an entire table, so the answer may be the same here.

This need not hold up v1.0 of the P4 Runtime API spec. If it doesn't make it in, it seems worth it to me to consider how to add such a feature after that release.

Can a multicast group id / a clone session id / a replica id be 0?

Usually in P4Runtime, 0 is an invalid id. Are things different for multicast group id / clone session id / replica id? Unlike an action profile member id for example (but much like a port id?), these ids map directly to P4 metadata fields. We need to make a decision and include it in the specification.

I think these ids are different from other P4Runtime ids (e.g. action profile member id) which never leave the realm of the control plane. They are also different from port ids IMO, since I would expect most targets to support a linear sequence of multicast group ids (for example) in the range [0, 2^W) and we may not want to worry about mapping / translation like we did for ports.

Should we remove the "alias" field from the P4Info preamble?

In retrospect it is not clear to me what the value of "alias" is. P4_16 enables the programmer to set the name of every entity to an arbitrary value, even when it is nested within an arbitrary number of P4 controls. For example, annotating a table T1, nested within control X, with annotation @name(".T2") will rename the table from X.T1 to T2.

If there is some value in providing an additional alias to an entity, I don't see it. And if there is some value, we should define an @alias annotation so that the programmer can set this alias.

I do believe that a library which uses P4Info to offer an API for mapping names to ids may want to accept all unique prefixes for an entity instead of just the exact name. This is actually convenient when writing tests (see https://github.com/opennetworkinglab/fabric-p4test/blob/master/tests/ptf/base_test.py#L229 for example). However, that does not require the alias field at all (the comment in P4Info for the alias field is a bit misleading IMO).

It is probably my fault that we have alias in P4Info today :), but maybe it's not too late to remove it. And if it is useful to someone, we should clearly define what it's for in the P4Runtime spec.
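
To illustrate the prefix-matching idea without an alias field, here is a rough sketch of a lookup helper over P4Info; the helper name and behavior are hypothetical, not part of any existing API:

def find_table_id(p4info, name):
    # Hypothetical helper: accept an exact table name, or a name prefix as
    # long as it identifies a single table. p4info is a p4.config.v1.P4Info.
    exact = [t for t in p4info.tables if t.preamble.name == name]
    if len(exact) == 1:
        return exact[0].preamble.id
    matches = [t for t in p4info.tables if t.preamble.name.startswith(name)]
    if len(matches) != 1:
        raise KeyError("'%s' does not identify a unique table" % name)
    return matches[0].preamble.id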

Is "complete" flag in "ReadResponse" message required?

According to @wmohsin we initially introduced this flag to distinguish between normal stream termination and the stream terminating because of an error. It seems that this flag may actually not be necessary.
Here is some sample C++ client code for reading from a server stream:

std::unique_ptr<ClientReader<ReadResponse> > reader(
    stub_->Read(&context, request));
while (reader->Read(&response)) {
    // ...
}
Status status = reader->Finish();

If the server returns an error code at any time, the while loop breaks and the client obtains the gRPC status by calling Finish. If the stream is interrupted because of a connection issue, the while loop should also break and Finish will also return an error status, this time generated by the gRPC library (and not the server code).
In case of normal termination, the while loop breaks and Finish returns OK.

@wmohsin is there anything I'm missing, or can we safely remove complete? Could you check if there is any other well-established gRPC service that has this flag? For example, it seems that gNMI does have a similar field in the SubscribeResponse message: https://github.com/openconfig/gnmi/blob/master/proto/gnmi/gnmi.proto#L235

Improve support for @defaultonly and @tableonly annotations in P4Info

  1. Improve "@defaultonly" and "@tableonly" support in P4Info by making them Protobuf fields:
message ActionRef {
  uint32 id = 1;
  enum Scope {
    DEFAULT = 0;
    DEFAULT_ONLY = 1;
    TABLE_ONLY = 2;
  }
  Scope scope = 2;  // field carrying the new enum (implied by the skipped field number 2)
  repeated string annotations = 3;
}
  2. Remove the const_default_action_has_mutable_params field, which is meaningless for P4_16 programs

Read/Write register value using P4Runtime

Hi all,
I have recently started working with P4 and P4Runtime and I am now stuck on a register read/write problem.
I have defined a register in my P4 program and everything works fine when I read its value using the simple_switch_CLI. However, I wanted to do the same using the P4Runtime Python API, but I noticed that the .p4info file does not have any declaration for the register that I have created. As a result, I get the following error when I try to read the register value:

AttributeError: Could not find 'g_ingress.port_pkt_in' of type registers

Am I doing something wrong?
Thank you in advance for your help!

P.S. I apologize if this is not the right place to post this issue.

Does choice of representing serializable enums violate the read-write symmetry principle?

Section 8.5.5 titled "enum, serializable enum and error" says:

"When providing serializable enum values through P4Data, one can either use the enum entry's name (enum human-readable string field) or its assigned value (enum_value bytestring field)."

That seems to imply one of two things to me:

(1) it breaks read-write symmetry, because if a client chooses to write with the string, but reads back an integer (or vice versa), the messages are not equal

or

(2) read-write symmetry is preserved, because when comparing the read to the write data, one must consider a numeric value and its corresponding string value for a serializable enum to be equal.

or maybe there is a 3rd possibility I'm not thinking of.

P4Info doesn't include information about which action executes which direct resource

In PSA (unlike v1model), the P4 programmer must explicitly execute direct resources. Some actions assigned to the table may execute the direct resource (e.g. increment a direct counter) while others may not. Currently P4Info does not include this information: which of the table's actions execute which direct resource(s). It may be a desirable thing for a P4Runtime client to know. Therefore we may consider introducing this information in a future release of P4Runtime. I believe this can be done by modifying p4.config.v1.ActionRef in a backward-compatible way.

Bytestring length rules and upgrade compatibility issues

Under "General principles for message formatting" in the "Bytestrings" subsection, the server is required to return an error if the P4Info-defined bitwidth, rounded to bytes, is not equivalent to the bytestring length in the P4Runtime message. Enforcement of this requirement leads to awkward upgrade constraints.

Suppose a match field X exists, and X has a P4 program specified width of 9 bits. Now suppose a new version of the P4 program changes the width to 10 bits. Since the client and server are likely to upgrade at different times, a 10-bit-aware server will accept requests with a 9-bit X from an older client. Similarly, a server expecting a 9-bit X will accept client requests with a 10-bit X, subject to the constraint that the client cannot use the full range of X until the server also upgrades to the P4 config with 10-bit X. The same thing holds if X expands to 11, 12, 13, 14, 15, or 16 bits.

Unfortunately, if at some time X needs to expand to 17 bits, compatibility issues between client and server arise. If the server upgrades first, it must reject any X from old clients that continue to send X as 2 bytes in a P4Runtime request, even though the request value fits within the P4Info-specified width of 17 bits. The (p4_bitwidth + 7) / 8 computation will yield an expected byte size of 3 on the server, but the older client will be sending a string of 2 bytes, leading to an INVALID_ARGUMENT error from the server.

As shown above, the current byte string size requirement leads to an anomaly where expansion of bitwidths can occur gracefully across client and server versions as long as the expansion does not cross byte boundaries. However, once that boundary is crossed, it becomes difficult to gracefully roll out new versions of the P4 program.

An improved approach would require the server to assure that the bytestring provided by the client translates to an integer value that fits within the specified bitwidth. The server is free to accept bytestrings that appear to be padded with leading zeroes. The server can also accept bytestrings that omit leading zeroes.
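
A sketch of the relaxed check described above, assuming only that the unsigned integer value must fit within the P4Info bitwidth:

def value_fits_bitwidth(bytestring, bitwidth):
    # Proposed check: accept any byte length, padded or not, as long as the
    # unsigned integer value fits within `bitwidth` bits.
    value = int.from_bytes(bytestring, byteorder="big")
    return value < (1 << bitwidth)

# A 2-byte encoding of 300 is acceptable for a 17-bit field, even though the
# current rule expects (17 + 7) // 8 == 3 bytes.
assert value_fits_bitwidth((300).to_bytes(2, "big"), 17)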

Inconsistent names in proto message fields and clone examples

I am not up to speed on the full contents of this repo, so may be out of date, but I was reviewing the parts of the spec related to creating packet clones to multiple ports, and found some names that look inconsistent.

Here is the part of the proto file I'm looking at: https://github.com/p4lang/p4runtime/blob/master/proto/p4/v1/p4runtime.proto#L401-L436

and here is the part of the spec I'm looking at: https://github.com/p4lang/p4runtime/blob/master/docs/v1/P4Runtime-Spec.mdk#L3031-L3043

I would be happy to create a PR for making these consistent, assuming they should be.

Question: Should I assume the names in the proto file are the up to date ones, and update the names used in the spec to match? Or vice versa?

Ability to subscribe to different types of notifications from the P4Runtime client

In order to support PSA externs, we are going to introduce the notion of notification in the StreamChannel bi-directional stream. These notifications will be used for idle timeout, digests, etc...

Should a P4 Runtime client be able to explicitly choose which notifications it is interested in receiving? Or should we assume that it is covered by the Role description in the multi-controller proposal: p4lang/PI#286?

Clarify server behavior for malformed actions

Most of the spec defines a specific error for the server to return upon encountering protocol violations by the client. Section 9.1.2 "Action Specification" does not. The spec should define specific responses for these situations:

  • TableEntry INSERT or MODIFY is missing an expected Action ID, action profile member or group ID.
  • Action ID does not belong to the P4Info set for the table.
  • Action parameter is missing from the P4Runtime action.
  • Parameter ID is invalid according to the P4Info.
  • Parameter value is missing.

Define P4 blob-specific metadata e.g. compiler, build_date, etc.

In #276 I initially proposed to represent top-level metadata in PkgInfo.compiler and PkgInfo.build_date. The group deemed these inappropriate for P4Info which is supposed to be target-invariant, e.g. the same for any p4c front-end and unaffected by back-ends. However, this kind of metadata can still be useful for managing libraries of ForwardingPipelineConfig (P4 blobs), e.g. to categorize according to targets, build dates, etc. just like familiar types of SW releases.

  1. How do we represent this in P4 Runtime? Do we add it to ForwardingPipelineConfig or define something separately, in which case a new Get command might be needed?
  2. How do we populate it (e.g. compiler and cmd-line options?). This should eventually become a P4C issue.

IMO a controller may wish to query just p4info (and this new blob metadata) w/o getting p4_device_config, i.e. imagine "device discovery" where we want to query many devices and just get lightweight metadata, not huge images. This latter aspect might need a separate issue opened (be able to request P4Info by itself).

Find a more "automatic" way to manage references

It would be great to have a less manual way of doing this (no repetition of the link, in-order numbering).
I don't know if there is a convenient way to do it in Madoko; maybe we can abuse the bibliography support.

Write multicast group error.

Hi,
I'm having an issue when I try to write a multicast group with P4Runtime. The connection is OK and I can read/write counters and tables, but when I try to write a multicast group I receive: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, )>

the log of my controller is this:

connected with swith s1 on 127.0.0.1:50051
connected with swith s2 on 127.0.0.1:50052
connected with swith s3 on 127.0.0.1:50053

sending write request:

device_id: 2
election_id {
  low: 1
}
updates {
  type: INSERT
  entity {
    packet_replication_engine_entry {
      multicast_group_entry {
        multicast_group_id: 1
        replicas {
          egress_port: 1
          instance: 1
        }
        replicas {
          egress_port: 2
          instance: 1
        }
        replicas {
          egress_port: 3
          instance: 1
        }
      }
    }
  }
}
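
For comparison, here is the same request built with the generated Python bindings. This is a sketch: the message and field names come from p4.v1, and the UNKNOWN status may well have other causes (e.g. missing stream arbitration / mastership):

from p4.v1 import p4runtime_pb2

# Rebuild the write request from the log above using the generated bindings.
request = p4runtime_pb2.WriteRequest()
request.device_id = 2
request.election_id.low = 1

update = request.updates.add()
update.type = p4runtime_pb2.Update.INSERT
mc_entry = update.entity.packet_replication_engine_entry.multicast_group_entry
mc_entry.multicast_group_id = 1
for port in (1, 2, 3):
    replica = mc_entry.replicas.add()
    replica.egress_port = port
    replica.instance = 1
# stub.Write(request)  # requires an open StreamChannel with mastership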

Fix figure 7 in specification

I believe the top level message should be labelled grpc::Status, not google::rpc::Status (which is for the second level)

Is it possible to add an entry to a table if it has 0 key fields?

This question is relevant to any control plane API for configuring the tables in a P4 program, but if it is easier to answer in the context of the P4Runtime API specifically, that is enough for now.

The reason why I am bothering to ask: Working on a P4 test packet generation tool, and want to ensure that I am not missing any valid test cases, nor generating extra ones unnecessarily.

The open source P4 compilers often create tables that do not exist in the original P4 program, which I will call 'dummy tables' here, although there is no official name for them that I know of in the language specification. These have no fields in their keys.

My understanding is that the P4Runtime API should reject as an error any message that attempts to add an entry to such a table, either because:

  • the message mentions some key fields, which are an error because the table has none

or

  • the message mentions no key fields, and thus if it is a 'table insert' should fail because only 'table modify' is supported by P4Runtime API for a table's default actions.

Does all of that sound correct?

Add flag to MulticastGroupEntry to prune replicas for the ingress port

In some cases, a controller might want to create multicast groups that include the ingress port of a packet, while it would be desirable to have a way to instruct the PRE to avoid producing replicas directed to the ingress port. A use case for this feature is L2 broadcast groups for ARP requests, where the request is broadcasted to all ports associated with the same VLAN identifier, except for the ingress port.

Alternative approaches to produce an equivalent forwarding behavior are:

  1. Drop the packet in the egress pipeline.
  2. Create many multicast groups, each one excluding a different port, and provide match-action machinery to use one of such groups based on the packet ingress port.

In the first case, for some targets, this approach will consume unnecessary buffer and bandwidth resources for the outgoing direction on the ingress port. In the second case, it's an inefficient use of PRE resources, since we are creating many groups while we could have only one.

Instead, the traffic manager of some PSA targets provides capabilities to avoid producing replicas for the ingress port, but such capability is not exposed by P4Runtime.

My proposal is to extend the MulticastGroupEntry message with a new flag to instruct the PRE as described above. For example:

message MulticastGroupEntry {
  uint32 multicast_group_id = 1;
  repeated Replica replicas = 2;
  // If true and replicas contain the ingress port, instruct the
  // PRE to skip that replica. Default is false.
  bool with_ingress_port_pruning = 3;
}

Any thoughts? If there are no objections I can open a pull request with the proto change above.

Include more information about what a role is

This is based on question I get on p4.org.
In the "Master-Slave Arbitration and Controller Replication" section I suggest adding something like this:

The definition of a role (out-of-scope) may also include the following:
  • Ability to send PacketOut messages to the server
  • Ability to receive PacketIn messages from the server, along with a filtering / subscription mechanism based on the value of the PacketMetadata fields for each PacketIn

As well as

Each master controller will only receive DigestList and IdleTimeoutNotification messages if the corresponding P4 object is included in its role.

Thoughts?

Add section about psa_empty_group_action

when describing action selectors or tables with action selector implementations.
In particular, add this restriction for v1.0: psa_empty_group_action has to be const, to avoid making the TableEntry message too complex.

Authorize standby controllers to "VERIFY" forwarding pipeline configurations

The current specification states that only the master controller can use the SetForwardingPipelineConfiguration RPC (https://s3-us-west-2.amazonaws.com/p4runtime/docs/master/P4Runtime-Spec.html#sec-setforwadingpipelineconfig-rpc). However, this seems like an unnecessary restriction for the VERIFY action, which just validates a configuration for a given target. Should we remove this restriction, or is there no value in being able to verify a pipeline configuration from a standby controller?

Clean-up all tables

They are currently not rendering very well, especially in the generated PDF.
