ironcore-dev / ironcore
Cloud Native Infrastructure as a Service
Home Page: https://ironcore-dev.github.io/ironcore
License: Apache License 2.0
The CRDs defined in this project should have generated API reference documentation as part of the project. One possible solution is to use a project like gen-crd-api-reference-docs to generate the reference documentation from code. Additionally, a Makefile target should be added to update the documentation to the latest version.
"go.formatTool": "goimports",
"gopls": {
"formatting.local": "github.com/onmetal/onmetal-api",
},
Add a basic setup based on mkdocs to generate public-facing user documentation. Additionally, a corresponding GitHub Action should automatically build and publish the documentation content under the gh-pages branch.
The project should contain a Helm chart for the controller manager and the corresponding CRDs under /charts. The chart should also have the option to install the CRDs into the cluster, ideally via a values property. Alternatively, we make the controller manager able to install the CRDs into the cluster when the controller comes up.
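A minimal sketch of what such a values property could look like (the chart path and the `crds.install` key are assumptions, not the project's actual chart layout):

```yaml
# charts/ironcore/values.yaml (hypothetical layout)
crds:
  # When true, the chart renders the CRD manifests so they are
  # installed/upgraded together with the controller manager.
  install: true
```

A template could then wrap the CRD manifests in `{{- if .Values.crds.install }} ... {{- end }}` so that `helm install --set crds.install=false` skips them.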
Fix the installation warning when running make docs
go get: installing executables with 'go get' in module mode is deprecated.
Use 'go install pkg@version' instead.
For more information, see https://golang.org/doc/go-get-install-deprecation
or run 'go help get' or 'go help install'.
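The usual fix is to switch the tool bootstrap in the Makefile from `go get` to a versioned `go install`. A sketch, assuming the docs tool is installed from the Makefile (tool path, version, and the `LOCALBIN` variable are placeholders, not the project's actual values):

```makefile
# Before (deprecated in module mode, triggers the warning):
#	go get github.com/example/docs-tool@v1.0.0

# After: go install with an explicit @version does not touch go.mod.
docs-tool:
	GOBIN=$(LOCALBIN) go install github.com/example/docs-tool@v1.0.0
```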
The list of tools and CLIs needed for running the project locally is growing pretty fast. We need to capture the necessary requirements in /docs. I would suggest adding a new section, e.g. 'Prerequisites', in the Developer Guide section.
In order to refine the networking related CRDs we need a complete view over all possible networking scenarios a customer might have.
This issue should capture the high level overview over those scenarios. The details should be discussed in corresponding sub issues.
/cc @gehoern
apiVersion: network.onmetal.de/v1alpha1
kind: SubnetIP
metadata:
  name: mySubnetIP
  namespace: scope-8739274932
spec:
  ip: 1.1.1.1
  subnet:
    name: mySubnet
    scope: scope-43534543
  target:
    kind: Machine
    apiGroup: compute.onmetal.de
    name: myVM2
    scope: scope-234243
status:
  state: Assigned/Unassigned/Invalid
  message: "All good"
Evaluate whether we need a RouteTable resource to allow the user to customize how the routing between Subnets should be handled. At the moment it is not clear whether the concept of a Gateway is enough to do the routing implicitly.
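If the evaluation favors an explicit resource, a RouteTable could look roughly like the sibling network types; this is entirely hypothetical and every field here is an assumption:

```yaml
apiVersion: network.onmetal.de/v1alpha1
kind: RouteTable # hypothetical resource
metadata:
  name: myRouteTable
  namespace: scope-8739274932
spec:
  subnet:                      # subnet this table applies to
    name: mySubnet
  routes:
    - destination: 0.0.0.0/0   # CIDR to match
      target:                  # next hop
        kind: Gateway
        name: myGateway
```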
apiVersion: network.onmetal.de/v1alpha1
kind: Subnet
metadata:
  name: mySubnet
  namespace: scope-8739274932
spec:
  parentSubnet:
    name: mySubnet1
    scope: scope-3424233
  locations: # optional
    - region: frankfurt
      availabilityZones:
        - az1
        - az2
  ranges:
    - cidr: 1.1.1.1/12
      blockedRanges:
        - 1.1.1.1/24
    - cidr: 2.2.2.2/12
  # size: "/12" # derived from the parent subnet - future settings
  # or capacity: 12 == 12 ip addresses from parent subnet
status:
  state: Up/Down/Invalid
  message: "All good"
Describe the bug
The link to the API reference documentation is broken: https://github.com/onmetal/onmetal-api/blob/main/api-reference/overview (Error 404)
To Reproduce
Expected behavior
Clicking the link should redirect to the API reference documentation.
Currently the owner cache is located in the manager package. This should be externalised similar to the usage cache #99
Add a field "flags" as a key/value element, because we have lots of flags that are probably not often used, e.g.:
type (remove from spec) -> use in flags
cmdline (remove from SourceAttribute) -> use in flags
And of course the customer can create custom flags.
-> Those flags need to be exposed, maybe in labels or such, and e.g. in Ignition.
Add a contribution guide to the project. It should describe how features can be proposed.
The Docker image for the ControllerManager should be published as a repository package.
Reference Network inside the Subnet resource to define a routing domain.
apiVersion: compute.onmetal.de/v1alpha1
kind: Image
metadata:
  name: ubuntu-latest
  namespace: onmetal
spec:
  # type: standard/firewall/etc. ... move into annotations eventually
  maturity: preview/production
  expirationTime:
  os: Ubuntu
  version: 20.04
  arch: amd64
  source:
    - name: ref
      imageName: ubuntu-20.04
    - name: kernel
      url: https://
      cmdLine: mitigations=off
      hash:
        algorithm: SHA1
        value: lkjl4j5l34j5l43kj534lj5l
    - name: initrd
      url: https://
      hash:
        algorithm: SHA1
        value: lkjl4j5l34j5l43kj534lj5l
    - name: rootfs
      url: https://
      hash:
        algorithm: SHA1
        value: lkjl4j5l34j5l43kj534lj5l
status:
  state: Valid
  hashes:
    - name: kernel
      hash: lj4l2kj4l3kj # normalized cache, today SHA3
    - name: initrd
      hash: lkljdlsajdsljda
  regions:
    - name: Frankfurt
      state: Ready # cached and available for direct use
      message: "Ready to consume"
    - name: Tokyo
      state: Verified # can be downloaded
      message: "Can be downloaded"
Write unit tests using Ginkgo and Gomega to test the functionality of the Scope controller.
A one-dimensional userdata is too flat: it should be a key/value definition, e.g.
userData:
  - name: ignition
    value: |
      version: 1.3.0
      ignition:
        config:
          merge:
            - source: http://45.86.152.1/ipxe/passwd.json
            - source: http://45.86.152.1/ipxe/bird.json
            - source: http://45.86.152.1/ipxe/install.json
            - source: http://45.86.152.1/ipxe/utils.json
      storage:
        files:
          - path: /etc/systemd/network/lo.network
            overwrite: yes
            mode: 0644
            contents:
              inline: |
                [Match]
                Name=lo
                [Network]
                Address=100.64.10.121/32
                Address=45.86.152.221/32
  - name: UEFI
    value: |
      - name: PK
        UUID: 8be4df61-93ca-11d2-aa0d-00e098032b8c
        value: pki.Gardenlinux.io/kernel.crt
Describe the bug
The Age field in the print column is currently not correct. The marker below should be added to all resources:
//+kubebuilder:printcolumn:name="Age",type=date,JSONPath=`.metadata.creationTimestamp`
The RoutingDomain controller should reconcile RoutingDomain resources.
tbd
tbd
When creating a Machine, an interface targeting a Subnet has to be specified. Once done, a controller should assign an IP address to said interface. This IP address in turn should be managed via IPAMRanges backed by the Subnet.
Tasks:
- Create an IPAMRange from a Subnet
- Watch Machines and create child IPAMRanges of the Subnet's IPAMRange to obtain an IP address and assign it to the interface in the Machine status.

apiVersion: network.onmetal.de/v1alpha1
kind: SecurityGroup
metadata:
  name: mySecGroup
  namespace: scope-8739274932
spec:
  ingress:
    - name: webserver # optional
      securityGroupRef:
        name: mySecGroup
        scope: scope-324342
    - name: ssh # optional
      action: allow/deny # allow == default
      protocol: TCP # optional "*" if not set
      portRange: 22 # optional same as protocol
      source:
        cidr: 1.1.1.1/16
        #secGroupRef: # if cidr not set
        #  name: mySecGroup2
        #  scope: scope-1342345345
  egress:
    - name:
      action: allow/deny
      protocol:
      portRange:
      destination:
        cidr:
        secGroupRef: # if cidr not set
          name: mySecGroup2
          scope: scope-1342345345
status:
  state: Used/Unused/Invalid
  message: "All good"
This allows enabling/disabling webhooks in a common and standard way, in addition to utilizing a CLI flag. Utilizing ENABLE_WEBHOOKS is a predictable and standard approach for kubebuilder. It would be nice to have a common and documented way for all controllers; it would also be helpful for running and testing locally without a cert-manager installation (fast local run).
In https://github.com/onmetal/onmetal-api/blob/26a1a194de5496d19fda1065ded0179eb50c3d89/pkg/logging/logger.go we have written a custom overlay (Errorf, Infof) that bypasses structured logging with Sprintf-based methods.
A better approach would be to use structured logging directly, as it makes generating ingestion / discovery in Loki easier by allowing filtering via the extracted fields.
Instead of writing
log.Infof("creating namespace for scope %s", scope.Name)
we should write
log.Info("creating namespace", "scope", scope.Name)
Easier discovery in Loki: now we don't have to parse the message to find the field but can instead filter directly by the field.
Currently StorageClass is a namespaced resource. We should move this to a global (cluster-scoped) object similar to MachineClass.
An image has an architecture, so please define an element as discussed.
A Scope should have an optional region field. This field should be used to ensure that all objects without a configured region attribute inherit the scope's region. A mismatch should be rejected. That way we can ensure that scope elements are located/pinned to a particular region.
apiVersion: network.onmetal.de/v1alpha1
kind: ReservedIP
metadata:
  name: myReservedIP
  namespace: scope-8739274932
spec:
  subnet:
    name: my-subnet
    scope: my-scope
  # ip: 1.2.3.4
  # assignment:
  #   kind: Machine/LoadBalancer/Gateway
  #   apiGroup: compute.onmetal.de
  #   name: myMachine/etc.
  #   scope: myScope
status:
  ip: 1.2.3.4
  state: Assigned/Error/Available/Invalid
  message: "All good"
Describe the bug
When a Machine interface is being removed, the corresponding IPAMRange object should be removed.
DoD:
The IPAMRange of an interface is gone when the interface is removed.

The machine resource needs an image element for the machine itself -> means boot it to RAM.
And it needs an image element per volume claim in the machine spec: currently the machine has a lot of implicit specs (e.g. volume claim, interface, ...), so for a generated machine you can get a volume claim (actually this creates a volume on the fly and also a volume attachment, for the on-the-fly volume and the eventually generated machine definition) -> this claim also needs the option to put an image on the volume -> so a disk can be pre-populated with the relevant OS. (see also volumeattachment/source)
Btw, the last mentioned disk would create the last boot entry, so this would be booted, except you change some UEFI configs in the machine definition where you can create your own boot order :-D
StoragePool and MachinePool objects should be managed via a separate Request object. That way a user can either request a new pool or initiate a change (e.g. capacity) of an existing pool.
apiVersion: network.onmetal.de/v1alpha1
kind: Network
metadata:
  name: myNetwork
  namespace: scope-8739274932
spec:
  description: "My awesome network"
status:
  state: Up/Down/Invalid
  message: "All good"
I know we usually have the state field with message ... The machine needs this desperately, also to see if the machine is there and maybe trigger other actions.
use go IPaddr type instead of String
Write unit tests using Ginkgo and Gomega to test the functionality of the Account controller.
Some fields must be marked as optional/required in the CRD definition. Here the corresponding kubebuilder/controller-gen marker should be used to generate the correct OpenAPIv3 spec. Currently we only use the omitempty annotation in the CRD definition. We need to check if that is enough.
The machine resource has several parameters that could have e.g. a device path or an OS device (e.g. volumes or interfaces).
We currently realize this by defining e.g. /dev/sda. I think this limits us a lot on specific hardware info -> here we are saying it is a default SCSI disk, but what if it is mounted as an NVMe drive? Also, it in fact carries only the information of what priority this disk has: e.g. sda = first SCSI disk - btw, being ambiguous whether sda is the first disk or the later-named NVMe maybe is.
Proposed solution: replace with a priority field, e.g. VolumeClaim/device with VolumeClaim/priority, which is an integer. We start counting with 0.
So e.g. /dev/sda = priority:0
Btw, fun fact again:
a disk with "priority:0" that is an NVMe would technically result in nvme0 but be the second drive;
a disk with "priority:1" which could be a SCSI device would result in "/dev/sdb" - so no double counting!
Brilliant idea -> what if we use the order of the list for the priority: if no priority is defined, we implicitly take the order of the list; if there is a priority entry, that one is taken (in a mixed list the entries with a priority are not counted).
As a user I want to define a Machine resource.
apiVersion: compute.onmetal.de/v1alpha1
kind: Machine
metadata:
  name: myMachine
  namespace: scope-8739274932
spec:
  hostname: myhost
  machineClass: x3.xlarge
  # (optional) pool: myReservedVMs
  location:
    region: eu-central
    availabilityZone: f1
  image:
    scope: garden # global
    name: ubuntu
  sshPublicKeys:
    - name: mykey # name or selector
      scope: myScope
      selector: # optional: either selector for current scope or all parent scopes
        matchLabels:
          username: myuser
  interfaces:
    - subnet:
        name: mySubnetName
        scope: someScopeNamespace
      #name: eth0
      #ip:
    - reservedIP:
        name: myReservedIP
        scope: myscope
      #name: eth3
    - reservedIP:
        name: mySecondReservedIP
        scope: myscope2
      #name: eth4
    - subnet:
        name: mycompanynet
        scope: accountNamespace
      #name: eth1
      #ip:
  securityGroups:
    - name: mysecgroup1
    - name: mysecgroup2
  volumeClaims:
    - name: rootdisk # first disk is root disk
      retainPolicy: deleteOnTerminate # ephemeral in baremetal case
      device: /dev/sda
      storageClass: slow
      size: 100GB
    - name: tmpDisk
      retainPolicy: deleteOnTerminate
      storageClass: slow
      device: /dev/sdb
      size: 10GB
    - name: data
      retainPolicy: persistent
      storageClass: slow
      device: /dev/sdc
      size: 100GB
    - name: reusedisk
      retainPolicy: persistent # not ephemeral
      device: /dev/sdx
      volume:
        name: blubbber0234324
        scope: scope-87973947
  userData: |
    myIgnitionJson
status:
  features:
    type: vm
  # (optional)
  # location:
  #   region: Frankfurt
  #   az: f1
  # account: abcd1234
  vCPU: 4
  cpuClass: standard
  memory: 1GB
  memoryClass: standard
  interfaces:
    - subnet:
        name: mySubnetName
        scope: someScopeNamespace
      name: eth0
      ip: 1.1.1.1
    - reservedIP:
        name: myReservedIP
        scope: myscope
      name: eth3
      ip: 8.8.8.8
      subnet:
        name: mySubnetName
        scope: someScopeNamespace
    - reservedIP:
        name: myReservedIP
        scope: myscope
      name: eth3
      ip: 4.4.4.4
      subnet:
        name: mySubnetName
        scope: someScopeNamespace
    - reservedIP:
        name: myDynamicIP
        scope: myscope
      name: eth5
      ip: 8.8.8.5
      subnet:
        name: mySubnetName
        scope: someScopeNamespace
    - subnet:
        name: mySubnetName
        scope: someScopeNamespace
      name: eth0
      ip: 1.1.1.2
  volumes:
    - name: rootdisk
      storagePool:
        name: myStoragePool
        scope: myScope
      device: /dev/sda
    - name: tmpDisk
      storagePool:
        name: myStoragePool
        scope: myScope
      device: /dev/sdd
  volumeAttachments:
    - name: mydisk
      volumeAttachment: myattachment-2
    - name: dynamic/disk123234534
      volumeAttachment: myattachment-2
The Subnet controller should reconcile Subnet resources and create corresponding IPAMRange objects.
tbd
For imageName no scope reference is given. The imageName in the sources should be moved up next to sources:
apiVersion: network.onmetal.de/v1alpha1
kind: Gateway
metadata:
  name: myGateway
  namespace: scope-8739274932
spec:
  region: frankfurt # how should this be mapped for cross region east west subnets
  uplink:
    subnet:
      name: mysubnet
      scope: scope-8745987349578
    reservedIP: # optional
      name: myIP
      scope: scope-234342
    SNAT: true # false
  eastWest:
    - name: mySubnet1
      scope: scope-234234
    - name: mySubnet2
      scope: scope-897392
status:
  state: Up/Down/Invalid
  message: "All good"
  ip: 1.1.1.1
I want to see the complete scope path from account to current scope in the status of every scoped object. This should be ensured by a MutatingWebhook so that no user can change this state field.
As a user I should be able to define a public SSH key
apiVersion: compute.onmetal.de/v1alpha1
kind: SSHPublicKey
metadata:
  name: my-pubkey
  namespace: myscope-2342342
spec:
  sshPublicKey: ssh-rsa jksdlkjadljasldjaldjasljdlasjdl
  description: "my awesome key"
  expirationDate: 03/14/2017
status:
  state: Valid/Expired
  fingerPrint: 08:92:43:34:32
  keyLength: 2048
  algorithm: RSA/ECDSA ...
  publicKey: kjlkjsldjsaldjsldjlj # PEM encoded public key
apiVersion: storage.onmetal.de/v1alpha1
kind: VolumeAttachment
metadata:
  name: myVolumeAttachment
  namespace: scope-8739274932
spec:
  volume:
    name: myVolume
    scope: myscope
  machine:
    name: myMachine
    scope: myscope
  # device: /dev/sda
  # source:
  #   image: myImage
  #   snapshot: mysnapshot
  reclaimPolicy: retain # delete
status:
  state: Attached/Invalid/Error
  message: "All good"
  device: /dev/sda
apiVersion: storage.onmetal.de/v1alpha1
kind: Volume
metadata:
  name: myvolume
  namespace: scope-8739274932
spec:
  storageClass: fast-ssd
  storagePool: onmetal/default
  size: 100Gi
status:
  state: Available/Pending/Attached/Error
  message: "All good."
Describe the bug
The external references defined in the *-config.json files in the hack/api-reference folder need to point to the correct location. Currently some of those links are broken.
This issue captures the end-to-end flow of a virtual machine creation. We will collect here the sample CRs of all involved objects in the flow, starting with the Machine type from the onmetal-api project.
/cc @adracus @gehoern @hardikdr @byteocean @tetff @nikhilbarge