An Ansible role to install and manage mdadm raid arrays.
Requirements
- Available unpartitioned disk devices
Dependencies
None
Example Playbook
- hosts: all
  become: true
  vars:
  roles:
    - role: ansible-mdadm
  tasks:
License
BSD
Author Information
Larry Smith Jr.
Describe the bug
If ansible-mdadm is used to manage mdraid devices created via another means, duplicate entries may be created in mdadm.conf. In the worst case, this may prevent the machine from booting.
To Reproduce
As an example: install a machine whose mdraid devices were created by another means (here, the Anaconda installer, which writes its own mdadm.conf). Everything is fine so far. For example:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=982e5d2d:b145f191:52abcf49:5ebffa20
ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=ec253f6b:45ef42ae:4182ed28:0cd28b9b
ARRAY /dev/md/boot_efi level=raid1 num-devices=2 UUID=26460904:051f0259:cfc9daf0:9e0898eb
ARRAY /dev/md124 level=raid1 num-devices=2 UUID=1ab96384:06a092f2:910f1eda:2255e814
Now add an ansible-mdadm configuration and run the role. ansible-mdadm writes the output from mdadm --detail --scan to mdadm.conf. The regex doesn't match the existing entries, resulting in two entries per device. For example:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=982e5d2d:b145f191:52abcf49:5ebffa20
ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=ec253f6b:45ef42ae:4182ed28:0cd28b9b
ARRAY /dev/md/boot_efi level=raid1 num-devices=2 UUID=26460904:051f0259:cfc9daf0:9e0898eb
ARRAY /dev/md124 level=raid1 num-devices=2 UUID=1ab96384:06a092f2:910f1eda:2255e814
# These added by ansible-mdadm:
ARRAY /dev/md/pv00 metadata=1.2 name=localhost.localdomain:pv00 UUID=982e5d2d:b145f191:52abcf49:5ebffa20
ARRAY /dev/md/boot metadata=1.2 name=localhost.localdomain:boot UUID=ec253f6b:45ef42ae:4182ed28:0cd28b9b
ARRAY /dev/md/boot_efi metadata=1.0 name=localhost.localdomain:boot_efi UUID=26460904:051f0259:cfc9daf0:9e0898eb
ARRAY /dev/md124 metadata=1.2 name=124 UUID=1ab96384:06a092f2:910f1eda:2255e814
Expected behaviour
If an entry for the same array already exists in mdadm.conf, it should be replaced using the format produced by ansible-mdadm.
I'm happy to submit a patch if we agree on the expected behaviour. It would be easy to match by UUID rather than by the whole ARRAY line. However, that could have side effects (removing level and num-devices from existing entries). Perhaps the role should fail in this scenario instead, and prompt the user to take manual remedial action?
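For discussion, here is a minimal sketch of what matching by UUID could look like, assuming the role keeps using lineinfile over the registered array_details output as its current tasks do; this is not the role's existing task:
- name: arrays | Updating mdadm.conf (match existing entries by UUID)
  lineinfile:
    dest: /etc/mdadm/mdadm.conf
    # Match any existing ARRAY line carrying the same UUID, so entries
    # written by the installer are replaced rather than duplicated.
    regexp: 'UUID={{ item.split("UUID=") | last }}$'
    line: "{{ item }}"
    state: present
  with_items: "{{ array_details.results | map(attribute='stdout_lines') | flatten }}"
  become: true
The trade-off mentioned above still applies: fields such as level and num-devices from the installer-written entry would be dropped.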
Many thanks for your excellent work, by the way.
Describe the bug
This is my first ever bug report, so please tell me if I can make it better.
This is potentially a duplicate of #21.
While creating an array (RAID-5 in this case), the role runs correctly, including the arrays | Updating Initramfs task. The array works correctly. However, after a reboot, the OS fails to mount the array because it was automatically renamed to md127.
Running update-initramfs -u manually and rebooting fixes the issue. Running this command after the role in Ansible also works (see below).
To Reproduce
Steps to reproduce the behavior:
- name: Raid-5 configuration
  hosts: all # one single host
  become: yes
  tasks:
    - name: Include mdadm role
      include_role:
        name: mrlesmithjr.mdadm
    # - name: Update initramfs (bis)
    #   command: "update-initramfs -u"
    #   when: array_created.changed
mdadm_arrays:
  - name: md0
    devices:
      - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1
      - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2
      - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi3
    filesystem: ext4
    level: '5' # Raid type (0, 1, 5, 6...)
    mountpoint: /srv/md0
    state: 'present'
(When the three commented lines are un-commented, update-initramfs is run a second time, and things work as expected even after a reboot.)
3. Reboot the VM
Expected behavior
The Raid array should keep the same name and therefore should be mounted without any problem.
Actual behavior
At boot, the VM displays something like
[ TIME ] Timed out waiting for device dev-md0.device - /dev/md0.
[DEPEND] Dependency failed for srv-md0.mount - /srv/md0.
[DEPEND] Dependency failed for local-fs.target - Local File Systems.
The system boots in emergency mode and the lsblk command shows that the array is now called md127.
Desktop (please complete the following information):
ansible [core 2.15.1]
config file = /path/to/project/ansible.cfg
configured module search path = ['/home/louis/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/louis/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/louis/.ansible/collections:/usr/share/ansible/collections
executable location = /home/louis/.local/bin/ansible
python version = 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
Additional context
Note that when running the playbook, the arrays | Updating Initramfs task from this role seems to work (stdout is the usual "Generating /boot/..." and stderr is empty). But somehow, it needs to be run a second time to actually work.
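For anyone hitting the same problem, the commented-out workaround above can also be expressed as a standalone task after the role; this is only a sketch, and array_created is assumed to be the variable registered by the role's create task:
# Sketch: re-run update-initramfs once the role has finished
- name: Update initramfs again after the role
  command: update-initramfs -u
  become: true
  when: array_created is defined and array_created.changed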
I tried to create a RAID1 array for LVM on a partially configured system and got an LVM "excluded by a filter" error (see the link below).
The playbook looks like this:
---
- name: Setup myserver
  hosts: myserver
  become: true
  vars:
    mdadm_arrays:
      - name: 'md2'
        devices:
          - '/dev/sda4'
          - '/dev/sdb4'
        filesystem: 'lvm'
        level: 1
        state: present
  roles:
    - role: mrlesmithjr.mdadm
At the console of the server I saw that there were partitions on the newly made /dev/md2:
# fdisk -l /dev/md2
Disk /dev/md2: 907.7 GiB, 973951991808 bytes, 1902249984 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0008d9b9
Device Boot Start End Sectors Size Id Type
/dev/md2p1 2048 139460607 139458560 66.5G 83 Linux
/dev/md2p2 139462654 142604287 3141634 1.5G 5 Extended
/dev/md2p5 139462656 142604287 3141632 1.5G 82 Linux swap / Solaris
Partition 2 does not start on physical sector boundary.
Using the advice given at https://www.simplstor.com/index.php/support/support-faqs/118-lvm-dev-excluded-by-filter I was able to create the physical volume with these commands:
# wipefs -a /dev/md2
# pvcreate /dev/md2
Now the playbook runs fine but the manual intervention contradicts the purpose of Ansible.
Can you tell me what happened here?
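For reference, a direct translation of those two manual commands into tasks might look like the sketch below; it is not idempotent and is only safe if /dev/md2 is known to hold nothing of value:
# Sketch of the manual recovery steps as tasks (destructive, run with care)
- name: Wipe stale partition signatures from the new array
  command: wipefs -a /dev/md2
  become: true
- name: Create the LVM physical volume on the array
  command: pvcreate /dev/md2
  become: true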
Hello,
This is the configuration I've used based on the example from the readme.md.
mdadm_arrays:
  - name: 'md0'
    devices:
      - '/dev/sdb'
      - '/dev/sdc'
      - '/dev/sdd'
    level: '0'
    mountpoint: '/mnt/md0'
    state: 'present'
This results in an error:
TASK [ansible-mdadm : arrays | Ensure {{ mdadm_conf }} file exists] **********************************************************************************************************************************************************************************************
Sunday 28 July 2019 17:58:57 +0200 (0:00:00.030) 0:00:07.929 ***********
fatal: [nl-t1-kvm01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'mdadm_conf' is undefined\n\nThe error appears to be in '/Users/nan03/projects/genesis/roles/ansible-mdadm/tasks/arrays.yml': line 115, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: arrays | Ensure {{ mdadm_conf }} file exists\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"}
This error is easily resolved by adding a variable like this:
mdadm_conf: '/etc/mdadm/mdadm.conf'
I would suggest that this variable either gets a default or is documented as mandatory in the usage instructions, whichever you think is best.
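Until a default lands in the role, the same workaround can be expressed directly in the playbook vars; the path below is the Debian location and is an assumption for other distributions:
# Workaround playbook (sketch): define mdadm_conf yourself
- hosts: all
  become: true
  vars:
    mdadm_conf: '/etc/mdadm/mdadm.conf'
    mdadm_arrays:
      - name: 'md0'
        devices:
          - '/dev/sdb'
          - '/dev/sdc'
          - '/dev/sdd'
        level: '0'
        mountpoint: '/mnt/md0'
        state: 'present'
  roles:
    - role: ansible-mdadm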
When trying to create a new RAID1 array, I got this error from mdadm:
...
roles/mrlesmithjr.mdadm/tasks/arrays.yml:13
"stderr": "mdadm: specifying chunk size is forbidden for this level"
Additional info:
https://bugzilla.redhat.com/show_bug.cgi?id=1989958
man mdadm:
This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.
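One possible fix, sketched against the shell-based create task quoted elsewhere in these issues rather than taken from the role, would be to pass --chunk only for the levels the man page lists; the chunk attribute here is a hypothetical per-array setting:
# Sketch: only add --chunk for levels where mdadm accepts it
- name: arrays | Creating Array(s)
  shell: >
    yes | mdadm --create /dev/{{ item.name }}
    --level={{ item.level }}
    {% if item.chunk is defined and item.level|string in ['0', '4', '5', '6', '10'] %}--chunk={{ item.chunk }}{% endif %}
    --raid-devices={{ item.devices|count }} {{ item.devices|join(' ') }}
  register: array_created
  with_items: "{{ mdadm_arrays }}"
  when: item.state|lower == "present"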
Description
Multiple issues arise from differences between Debian-based and Fedora-based systems. The biggest problems are the initramfs updates and the config file location.
To Reproduce
Expected behaviour
The role creates the /etc/mdadm/ directory and then creates the config file at /etc/mdadm/mdadm.conf. This location is only valid on Debian-based systems, so the RAID will not be correctly assembled at boot.
The update-initramfs command is only available on Debian-based systems; CentOS, RHEL, and Fedora all use dracut to manage their initramfs images and versioning.
It is not recommended to update the initramfs in the most common use cases, namely when the OS is not installed on the RAID. This is actually detrimental if the RAID becomes degraded, is absent at boot, or the boot drive is encrypted.
DNF is not being called as the package manager on the systems that use it.
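A rough sketch of how the per-family differences above could be handled; the variable names here (mdadm_conf, mdadm_update_initramfs_cmd) are made up for illustration and are not the role's current variables:
# Sketch: pick the config path and initramfs command per OS family
- name: Set family-specific defaults
  set_fact:
    mdadm_conf: "{{ '/etc/mdadm/mdadm.conf' if ansible_os_family == 'Debian' else '/etc/mdadm.conf' }}"
    mdadm_update_initramfs_cmd: "{{ 'update-initramfs -u' if ansible_os_family == 'Debian' else 'dracut --force' }}"
- name: arrays | Updating Initramfs
  command: "{{ mdadm_update_initramfs_cmd }}"
  become: true
  when: array_created.changed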
Desktop:
Describe the bug
I'm using this role to configure software RAID on my secondary storage. My workflow for re-provisioning is to wipe and reimage the primary storage, but leave the secondary storage intact. I then re-run Ansible against the node. I've noticed that after re-provisioning in this fashion, mdadm.conf does not get populated.
Expected behavior
mdadm.conf should be populated with the arrays defined by mdadm_arrays.
Additional context
I think the issue is because this task relies on the previous one reporting status changed:
ansible-mdadm/tasks/arrays.yml, lines 121 to 129 in 104bb82
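A sketch of what decoupling the config update from the create task's changed status could look like (the referenced lines from arrays.yml are not reproduced here, so the task shapes below are assumptions):
# Sketch: always capture array details and converge mdadm.conf on drift
- name: arrays | Capturing Array Details
  command: mdadm --detail --scan
  register: array_details
  changed_when: false
- name: arrays | Updating mdadm.conf
  lineinfile:
    dest: /etc/mdadm/mdadm.conf
    regexp: "^{{ item }}"
    line: "{{ item }}"
    state: present
  with_items: "{{ array_details.stdout_lines }}"
Since lineinfile only changes the file when an entry is missing, re-running after a reimage would repopulate mdadm.conf without churning on every run.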
Describe the bug
When you have a host in a group that has no mdadm_arrays variable set, the role tries to create a default array.
To Reproduce
Steps to reproduce the behavior:
- Have a host with /dev/sda & /dev/sdb
- Do not set the mdadm_arrays variable
- Apply the role via include_role or as a role dependency
Expected behavior
I expect the playbook to do nothing if mdadm_arrays is not defined.
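Two possible guards, both assumptions rather than current role behaviour, are sketched below: give the variable an empty default in the role's defaults, or skip the role for hosts that do not define it.
# Option 1: hypothetical defaults/main.yml entry so loops iterate over nothing
mdadm_arrays: []
# Option 2: guard the role in the playbook
- name: Include mdadm role
  include_role:
    name: mrlesmithjr.mdadm
  when: mdadm_arrays is defined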
Screenshots
TASK [mrlesmithjr.mdadm : arrays | Creating Array(s)] *************************************************
failed: [no_raid.test] (item={'name': 'md0', 'devices': ['/dev/sdb', '/dev/sdc'], 'filesystem': 'ext4', 'level': '1', 'mountpoint': '/mnt/md0', 'state': 'present', 'opts': 'noatime'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "yes | mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc", "delta": "0:00:00.003582", "end": "2022-05-05 23:20:30.929538", "item": {"devices": ["/dev/sdb", "/dev/sdc"], "filesystem": "ext4", "level": "1", "mountpoint": "/mnt/md0", "name": "md0", "opts": "noatime", "state": "present"}, "msg": "non-zero return code", "rc": 2, "start": "2022-05-05 23:20:30.925956", "stderr": "mdadm: cannot open /dev/sdb: No such file or directory", "stderr_lines": ["mdadm: cannot open /dev/sdb: No such file or directory"], "stdout": "", "stdout_lines": []}
Desktop (please complete the following information):
Describe the bug
Cannot create more than one md device; I found the error in a task.
To Reproduce
Steps to reproduce the behavior:
mdadm_arrays:
  - name: 'md3'
    devices:
      - '/dev/sdc'
      - '/dev/sdd'
      - '/dev/sde'
      - '/dev/sdf'
    filesystem: 'ext4'
    level: '10'
    mountpoint: '/mnt/md3'
    state: 'present'
  - name: 'md4'
    devices:
      - '/dev/sdg'
      - '/dev/sdh'
      - '/dev/sdi'
      - '/dev/sdj'
    filesystem: 'ext4'
    level: '10'
    mountpoint: '/mnt/md4'
    state: 'present'
TASK [mrlesmithjr.mdadm : arrays | Checking Status Of Array(s)]
ok: [db] => (item={u'state': u'present', u'name': u'md3', u'level': u'10', u'mountpoint': u'/mnt/md3', u'filesystem': u'ext4', u'devices': [u'/dev/sdc', u'/dev/sdd', u'/dev/sde', u'/dev/sdf']}) => {
...
"rc": 0,
STDOUT:
md3 : active raid10 sdd[3] sdc[2] sda[0] sdb[1]
ok: [db] => (item={u'state': u'present', u'name': u'md4', u'level': u'10', u'mountpoint': u'/mnt/md4', u'filesystem': u'ext4', u'devices': [u'/dev/sdg', u'/dev/sdh', u'/dev/sdi', u'/dev/sdj']}) => {
...
"rc": 1,
MSG:
non-zero return code
md4 device not found, RC=1
But the "Creating Array(s)" task skipped creating the md4 device:
TASK [mrlesmithjr.mdadm : arrays | Creating Array(s)]
skipping: [db] => (item={u'state': u'present', u'name': u'md3', u'level': u'10', u'mountpoint': u'/mnt/md3', u'filesystem': u'ext4', u'devices': [u'/dev/sdc', u'/dev/sdd', u'/dev/sde', u'/dev/sdf']}) => {
"skip_reason": "Conditional result was False"
skipping: [db1] => (item={u'state': u'present', u'name': u'md4', u'level': u'10', u'mountpoint': u'/mnt/md4', u'filesystem': u'ext4', u'devices': [u'/dev/sdg', u'/dev/sdh', u'/dev/sdi', u'/dev/sdj']}) => {
"skip_reason": "Conditional result was False"
The error is in tasks/arrays.yml. It was:
# Creating raid arrays
# We pass yes in order to accept any questions prompted for yes|no
- name: arrays | Creating Array(s)
  shell: "yes | mdadm --create /dev/{{ item.name }} --level={{ item.level }} --raid-devices={{ item.devices|count }} {{ item.devices|join(' ') }}"
  register: "array_created"
  with_items: '{{ mdadm_arrays }}'
  when: >
    item.state|lower == "present" and
    array_check.results[0].rc != 0
The error is in the line "array_check.results[0].rc != 0": the result index is hard-coded to 0 instead of following the current item.
FIX (tasks/arrays.yml):
# Creating raid arrays
# We pass yes in order to accept any questions prompted for yes|no
- name: arrays | Creating Array(s)
  shell: "yes | mdadm --create /dev/{{ item.name }} --level={{ item.level }} --raid-devices={{ item.devices|count }} {{ item.devices|join(' ') }}"
  register: "array_created"
  # with_items: '{{ mdadm_arrays }}'
  loop: "{{ mdadm_arrays }}"
  loop_control:
    index_var: index
  when: >
    item.state|lower == "present" and
    array_check.results[index].rc != 0
I am using the index of the mdadm_arrays item to look up the matching rc in array_check.results.
Play output after the fix:
TASK [mrlesmithjr.mdadm : arrays | Creating Array(s)] *******************************************************************
skipping: [db] => (item={u'state': u'present', u'name': u'md3', u'level': u'10', u'mountpoint': u'/mnt/md3', u'filesystem': u'ext4', u'devices': [u'/dev/sdc', u'/dev/sdd', u'/dev/sde', u'/dev/sdf']})
changed: [db] => (item={u'state': u'present', u'name': u'md4', u'level': u'10', u'mountpoint': u'/mnt/md4', u'filesystem': u'ext4', u'devices': [u'/dev/sdg', u'/dev/sdh', u'/dev/sdi', u'/dev/sdj']})
TASK [mrlesmithjr.mdadm : arrays | Updating Initramfs] ******************************************************************
changed: [db]
TASK [mrlesmithjr.mdadm : arrays | Capturing Array Details] *************************************************************
ok: [db]
TASK [mrlesmithjr.mdadm : arrays | Creating Array(s) Filesystem] ********************************************************
ok: [db] => (item={u'state': u'present', u'name': u'md3', u'level': u'10', u'mountpoint': u'/mnt/md3', u'filesystem': u'ext4', u'devices': [u'/dev/sdc', u'/dev/sdd', u'/dev/sde', u'/dev/sdf']})
changed: [db] => (item={u'state': u'present', u'name': u'md4', u'level': u'10', u'mountpoint': u'/mnt/md4', u'filesystem': u'ext4', u'devices': [u'/dev/sdg', u'/dev/sdh', u'/dev/sdi', u'/dev/sdj']})
TASK [mrlesmithjr.mdadm : arrays | Mounting Array(s)] *******************************************************************
ok: [db] => (item={u'state': u'present', u'name': u'md3', u'level': u'10', u'mountpoint': u'/mnt/md3', u'filesystem': u'ext4', u'devices': [u'/dev/sdc', u'/dev/sdd', u'/dev/sde', u'/dev/sdf']})
changed: [db] => (item={u'state': u'present', u'name': u'md4', u'level': u'10', u'mountpoint': u'/mnt/md4', u'filesystem': u'ext4', u'devices': [u'/dev/sdg', u'/dev/sdh', u'/dev/sdi', u'/dev/sdj']})
On the host:
# cat /proc/mdstat
Personalities : [raid10] [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
md4 : active raid10 sdj[3] sdi[2] sdh[1] sdg[0]
15000268800 blocks super 1.2 512K chunks 2 near-copies [8/8] [UUUUUUUU]
[>....................] resync = 0.2% (44098944/15000268800) finish=1205.9min speed=206695K/sec
bitmap: 112/112 pages [448KB], 65536KB chunk
Is your feature request related to a problem? Please describe.
There are cases where devices require additional options, e.g. --write-mostly (when disks have different sizes or different underlying disk types; see https://raid.wiki.kernel.org/index.php/Write-mostly).
Describe the solution you'd like
Allow additional flags/options to be passed to individual device members.
Describe alternatives you've considered
No alternative solution yet
Additional context
No additional context
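Purely as a strawman for discussion (nothing the role implements today), per-device options could be expressed by allowing device entries to be dicts:
# Hypothetical schema: per-device dicts carrying extra mdadm flags
mdadm_arrays:
  - name: 'md0'
    level: '1'
    filesystem: 'ext4'
    mountpoint: '/mnt/md0'
    state: 'present'
    devices:
      - device: '/dev/sda1'
      - device: '/dev/sdb1'
        opts: '--write-mostly'  # extra flag applied to this member only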
Currently, there is no way to configure the chunk size or pass extra opts to the mdadm command.
Is there any chance that the current version gets released any time soon?
I'm using Ansible Galaxy for managing dependencies, and it seems the last release available there is from two years ago: https://galaxy.ansible.com/ui/standalone/roles/mrlesmithjr/mdadm/ .
P.S. Thank you for your (and the contributors') hard work.
Hi there,
Is there any way to specify options during array creation? For example, I want to apply --write-mostly to one member of a RAID 1 array.
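Until the role supports this, one option is creating the array outside the role; the sketch below simply wraps the plain mdadm invocation in a task, with example device names, relying on --write-mostly applying to the devices listed after it:
# Sketch: manual creation with one write-mostly member (example devices)
- name: Create RAID1 with one write-mostly member
  shell: "yes | mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 --write-mostly /dev/sdb1"
  args:
    creates: /dev/md0
  become: true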
Describe the bug
It seems like the final step is missing: update-initramfs -u
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
To Reproduce
Steps to reproduce the behavior:
- hosts: all
  become: true
  vars:
    mdadm_arrays:
      - name: 'md3'
        devices:
          - '/dev/sdc'
          - '/dev/sdd'
        filesystem: 'ext4'
        level: '1'
        mountpoint: '/data'
        state: 'present'
        opts: 'noatime'
  roles:
    - role: mrlesmithjr.mdadm
  tasks:
Expected behavior
Server boots.
Actual behavior
Server doesn't boot.
Desktop (please complete the following information):
ansible [core 2.12.1]
config file = None
configured module search path = ['~/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/5.1.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = ~/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.1 (main, Dec 6 2021, 23:19:43) [Clang 12.0.0 (clang-1200.0.32.29)]
jinja version = 3.0.3
libyaml = True
~/.ansible/roles/mrlesmithjr.mdadm/meta/.galaxy_install_info:version: v0.1.1
Describe the bug
The task fails when trying to destroy an array by setting state=absent.
To Reproduce
Steps to reproduce the behavior:
vars:
  mdadm_arrays:
    - name: 'md0'
      devices:
        - '/dev/sdb'
        - '/dev/sdc'
      filesystem: 'lvm'
      level: 1
      state: present
Output from play:
TASK [ansible-mdadm : arrays | Removing Array(s)] **************************************************************************************************************************
failed: [storage.xyz.abcdef.com] (item={'name': 'md0', 'devices': ['/dev/sdb', '/dev/sdc'], 'filesystem': 'lvm', 'level': 1, 'state': 'absent'}) => changed=true
ansible_loop_var: item
cmd:
- mdadm
- --remove
- /dev/md0
delta: '0:00:00.002178'
end: '2022-02-04 12:48:51.857826'
item:
devices:
- /dev/sdb
- /dev/sdc
filesystem: lvm
level: 1
name: md0
state: absent
msg: non-zero return code
rc: 1
start: '2022-02-04 12:48:51.855648'
stderr: 'mdadm: error opening /dev/md0: No such file or directory'
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
Expected behavior
The playbook should complete without errors
Desktop (please complete the following information):
Additional context
It seems like the MD device is already removed from the system by the preceding task, "Stopping raid arrays in preparation of destroying". I don't know whether the device is removed on systems other than RHEL.
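One tolerant variant, sketched here rather than taken from the role, would be to treat an already-removed device as success:
# Sketch: don't fail when the md device is already gone
- name: arrays | Removing Array(s)
  command: "mdadm --remove /dev/{{ item.name }}"
  register: array_removed
  failed_when:
    - array_removed.rc != 0
    - "'No such file or directory' not in array_removed.stderr"
  with_items: "{{ mdadm_arrays }}"
  when: item.state|lower == "absent"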
Describe the bug
On Rocky 9, the mdadm.conf is being generated with lines similar to:
ARRAY /dev/md/cpt01:0 metadata=1.2 name=cpt01:0 UUID=1c22a1b7:83aea9e8:32ddbd4a:1ff7d63c
ARRAY /dev/md/localhost.localdomain:0 metadata=1.2 name=localhost.localdomain:0
I am setting the name to be md127 in mdadm_arrays.
To Reproduce
Expected behavior
I would expect the config file to look more like:
ARRAY /dev/md/127 metadata=1.2 name=cpt01:0 UUID=1c22a1b7:83aea9e8:32ddbd4a:1ff7d63c
ARRAY /dev/md/localhost.localdomain:0 metadata=1.2 name=localhost.localdomain:0
Additional context
There is some context on this issue in this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=606481#c1. In particular, this comment:
For version 1.2 superblocks, the preferred way to create arrays is by using a name instead of a number. For example, if the array is your home partition, then creating the array with the option --name=home will cause the array to be assembled with a random device number (which is what you are seeing now, when an array doesn't have an assigned number we start at 127 and count backwards), but there will be a symlink in /dev/md/ that points to whatever number was used to assemble the array. The symlink in /dev/md/ will be whatever is in the name field of the superblock. So in this example, you would have /dev/md/home that would point to /dev/md127 and the preferred method of use would be to access the device via the /dev/md/home entry.
I found that scanning each array by its configured name generated the entry correctly:
- hosts: compute-regular-gen2
  gather_facts: false
  tasks:
    # Capture the raid array details to append to mdadm.conf
    # in order to persist between reboots
    - name: arrays | Capturing Array Details
      command: "mdadm --detail --scan /dev/{{ item.name }}"
      register: "array_details"
      changed_when: false
      with_items: "{{ mdadm_arrays }}"
      become: true
    - name: arrays | Updating /etc/mdadm.conf
      lineinfile:
        dest: /etc/mdadm.conf
        regexp: "^{{ item }}"
        line: "{{ item }}"
        state: "present"
      with_items: "{{ array_details.results | map(attribute='stdout_lines') | flatten }}"
      become: true
I think that needs work as essentially we want to match on the UUID to replace an existing entry, but putting it out there to solicit some feedback.
I'm wondering if we could support passing --name to create. Interested to hear what you think, and whether we could do that in a backwards-compatible way, to support something like:
mdadm_arrays:
  - name: 'secondary'
    # Define disk devices to assign to array
    devices:
      - '/dev/sdb'
      - '/dev/sdc'
    # Define filesystem to partition array with
    filesystem: 'ext4'
    # Define the array raid level
    # 0|1|4|5|6|10
    level: '1'
    # Define mountpoint for array device
    mountpoint: '/mnt/md0'
    # Define if array should be present or absent
    state: 'present'
From reading https://bugzilla.redhat.com/show_bug.cgi?id=606481#c1, this seems to be the recommended way to address a particular raid array.
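To make the idea concrete, here is a sketch of the create command with --name; it is not the role's current task, and backwards compatibility with existing name: 'mdN' values would still need thought:
# Sketch: create by name so the array gets a stable /dev/md/<name> symlink
- name: arrays | Creating Array(s)
  shell: >
    yes | mdadm --create /dev/md/{{ item.name }} --name={{ item.name }}
    --level={{ item.level }}
    --raid-devices={{ item.devices|count }} {{ item.devices|join(' ') }}
  with_items: "{{ mdadm_arrays }}"
  when: item.state|lower == "present"
The array would then be addressed via the /dev/md/<name> symlink, which is what the Bugzilla comment quoted above recommends.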