Comments (21)

mathew-jithinm commented on June 9, 2024

Facing the same issue as above for the last few hours while trying to pull mcr.microsoft.com/playwright/python:v1.41.0-jammy.

initrd commented on June 9, 2024

We are seeing very slow pulls in the AP Southeast/Singapore and India regions. Some layers download quickly, whereas others are stuck at a few kB/s or are really slow to download:

#4 [e2e-api stage-0 1/4] FROM mcr.microsoft.com/playwright:v1.40.1-jammy@sha256:1aba528f5db4f4c130653ed1de737ddc1d276197cc4503d3bb7903a93b7fb32e
...
#4 sha256:05f6649df41d0f3c197559f4473b47a6764d3f807d6c10145ab6bb01c722abcb 136.31MB / 555.83MB 3310.9s

AndreHamilton-MSFT commented on June 9, 2024

@YevheniiSemenko @lai-vson @harrison-seow-develab @alexwilson1 @malcuch @mathew-jithinm @stanhu @initrd we have identified an issue that resulted in slow downloads in a number of regions, and mitigations are being applied. Please let me know if you are continuing to see issues and, if possible, supply the ref header so we can identify any remaining problematic edges.

ksacry-ft commented on June 9, 2024

Same issue here, in Utah as well
x-msedge-ref: Ref A: D80E11CB64704E54830D9CD438D06D9A Ref B: SLC31EDGE0207 Ref C: 2024-04-23T15:53:37Z

harrison-seow-develab commented on June 9, 2024

Private runners (GitLab CI) hosted in AWS ap-southeast-1 seem to be affected as well.

AndreHamilton-MSFT commented on June 9, 2024

@frnkeddy can you share what region you are in? Is this still ongoing? We are currently investigating some issues impacting Azure Jio India West and are working on a mitigation.

frnkeddy commented on June 9, 2024

AndreHamilton-MSFT commented on June 9, 2024

@frnkeddy Can you visit https://mcr.microsoft.com/ in your browser or with curl and share the value of the response header "X-Msedge-Ref"? This will help us identify where this might be occurring.
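
For anyone unsure how to grab it, a one-liner along these lines should print just that header (a sketch; the grep is case-insensitive because the header casing varies by edge):

curl -s -i https://mcr.microsoft.com/ | grep -i x-msedge-ref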

mattkruskamp commented on June 9, 2024

+1 to that issue. Also in Utah.
X-MSEdge-Ref: Ref A: C1D956FCEDB247EFA0991CEFDEC104D5 Ref B: SLC31EDGE0218 Ref C: 2024-04-23T16:12:53Z

Same error when trying to docker pull mcr.microsoft.com/azure-storage/azurite:latest and mcr.microsoft.com/mssql/server:2022-latest

AndreHamilton-MSFT commented on June 9, 2024

Great, this is helpful. Will follow up once I know more.

frnkeddy commented on June 9, 2024

I'm getting the same/similar curl response as the others using curl -i https://mcr.microsoft.com

X-MSEdge-Ref: Ref A: A9DCA542B0684B48A4E6838BE89C8983 Ref B: SLC31EDGE0117 Ref C: 2024-04-24T00:34:23Z

If it helps, I know the problem started between April 22, 2024 at 7:56:50 AM MDT and April 22, 2024 at 5:42:33 PM MDT, as reported by a CI/CD pipeline I use.

Additional curl results include (note the difference in the URLs):

curl -i https://mcr.microsoft.com/dotnet

X-MSEdge-Ref: Ref A: 9382DD583636410EA6E4F10B557C3CA6 Ref B: SLC31EDGE0211 Ref C: 2024-04-24T00:47:06Z

curl -i https://mcr.microsoft.com/dotnet/sdk

curl: (60) schannel: SNI or certificate check failed: SEC_E_WRONG_PRINCIPAL (0x80090322) - The target principal name is incorrect.

frnkeddy commented on June 9, 2024

One more bit of insight: this docker pull attempt failed partway through:
docker pull mcr.microsoft.com/dotnet/aspnet:8.0-jammy-amd64

8.0-jammy-amd64: Pulling from dotnet/aspnet
e311a697a403: Retrying in 1 second
154eb062a695: Retrying in 1 second
81af5a508103: Retrying in 1 second
b646c0a58c82: Waiting
1254901aed19: Waiting
d16bfc8b4664: Waiting
error pulling image configuration: download failed after attempts=6: tls: failed to verify certificate: x509: certificate is valid for *.azureedge.net, *.media.microsoftstream.com, *.origin.mediaservices.windows.net, *.streaming.mediaservices.windows.net, not westcentralus.data.mcr.microsoft.com
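
For what it's worth, one way to double-check what certificate that data endpoint is actually serving is something like the following (a diagnostic sketch, assuming openssl is available; on older openssl builds replace -ext subjectAltName with -text and look for the Subject Alternative Name section):

openssl s_client -connect westcentralus.data.mcr.microsoft.com:443 -servername westcentralus.data.mcr.microsoft.com </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName

If the SAN list only covers *.azureedge.net and the other names from the error above, the edge is presenting the wrong certificate for that hostname.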

stanhu commented on June 9, 2024

From a host in Google's us-east1-d, I'm seeing this issue:

$ docker pull mcr.microsoft.com/dotnet/sdk:6.0.400-1-focal
6.0.400-1-focal: Pulling from dotnet/sdk
675920708c8b: Pulling fs layer
63c1e812e3e8: Pulling fs layer
efc4bd123130: Pulling fs layer
459ef695deeb: Waiting
c774e78dcdb2: Waiting
9cc80820d7f5: Waiting
c3d985ec3b5b: Waiting
b3fa791bf5d1: Waiting

It looks like this edge is hitting ATL?

$ curl -s -i "https://mcr.microsoft.com/" | grep Edge
X-MSEdge-Ref: Ref A: 7B12140D720C4039A7A5A181F8E13D7A Ref B: ATL331000108037 Ref C: 2024-04-24T04:40:56Z

Docker debug logs show:

# journalctl -f -u docker.service
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.035922053Z" level=debug msg="Calling HEAD /_ping"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.038468358Z" level=debug msg="Calling POST /v1.40/images/create?fromImage=mcr.microsoft.com%2Fdotnet%2Fsdk&tag=6.0.400-1-focal"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.040890542Z" level=debug msg="hostDir: /etc/docker/certs.d/mcr.microsoft.com"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.040997223Z" level=debug msg="Trying to pull mcr.microsoft.com/dotnet/sdk from https://mcr.microsoft.com v2"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.251159340Z" level=debug msg="Pulling ref from V2 registry: mcr.microsoft.com/dotnet/sdk:6.0.400-1-focal"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.251210871Z" level=debug msg="mcr.microsoft.com/dotnet/sdk:6.0.400-1-focal resolved to a manifestList object with 3 entries; looking for a unknown/amd64 match"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.251236571Z" level=debug msg="found match for linux/amd64 with media type application/vnd.docker.distribution.manifest.v2+json, digest sha256:0d329a3ebef503348f6c289ff72871c92a9cc4fbfa4f50663bd42ab56587b998"
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.367795041Z" level=debug msg="pulling blob \"sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f\""
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.368064042Z" level=debug msg="pulling blob \"sha256:63c1e812e3e8c944ddbe6e9ed940d8cb71208d4f7a1d6555e8cd255a764b67a7\""
Apr 24 02:12:12 stanhu-test1 dockerd[1929]: time="2024-04-24T02:12:12.368224202Z" level=debug msg="pulling blob \"sha256:efc4bd1231305956bc5ff57e1eda1d3bbe5cdaedb98332020c6c20c6a1933c8a\""
Apr 24 02:15:35 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:35.866584258Z" level=error msg="Download failed, retrying: read tcp 10.10.240.17:41660->204.79.197.219:443: read: connection reset by peer"
Apr 24 02:15:35 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:35.866655378Z" level=error msg="Download failed, retrying: read tcp 10.10.240.17:41666->204.79.197.219:443: read: connection reset by peer"
Apr 24 02:15:36 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:36.750710558Z" level=error msg="Download failed, retrying: read tcp 10.10.240.17:41658->204.79.197.219:443: read: connection reset by peer"
Apr 24 02:15:40 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:40.866874643Z" level=debug msg="pulling blob \"sha256:efc4bd1231305956bc5ff57e1eda1d3bbe5cdaedb98332020c6c20c6a1933c8a\""
Apr 24 02:15:40 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:40.866958292Z" level=debug msg="attempting to resume download of \"sha256:efc4bd1231305956bc5ff57e1eda1d3bbe5cdaedb98332020c6c20c6a1933c8a\" from 64552 bytes"
Apr 24 02:15:40 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:40.866880192Z" level=debug msg="pulling blob \"sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f\""
Apr 24 02:15:40 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:40.867387843Z" level=debug msg="attempting to resume download of \"sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f\" from 64552 bytes"
Apr 24 02:15:41 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:41.751074262Z" level=debug msg="pulling blob \"sha256:63c1e812e3e8c944ddbe6e9ed940d8cb71208d4f7a1d6555e8cd255a764b67a7\""
Apr 24 02:15:41 stanhu-test1 dockerd[1929]: time="2024-04-24T02:15:41.751139092Z" level=debug msg="attempting to resume download of \"sha256:63c1e812e3e8c944ddbe6e9ed940d8cb71208d4f7a1d6555e8cd255a764b67a7\" from 64552 bytes"

However, with a Google Cloud VM in us-central1-f it works fine. The edge looks like CHI:

$ curl -s -i "https://mcr.microsoft.com/" | grep -i edge
x-msedge-ref: Ref A: FDD4986B6BE344BE87CDC4F04CB927CD Ref B: CHI30EDGE0206 Ref C: 2024-04-24T04:42:27Z
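
One way to take dockerd out of the picture is to fetch one of the affected blobs directly and watch whether the transfer stalls or the connection resets (a sketch; it assumes MCR still serves this blob anonymously and that curl follows the redirect to the regional data endpoint):

curl -sSL -o /dev/null -w 'HTTP %{http_code}, %{size_download} bytes in %{time_total}s\n' https://mcr.microsoft.com/v2/dotnet/sdk/blobs/sha256:675920708c8bf10fbd02693dc8f43ee7dbe0a99cdfd55e06e6f1a8b43fd08e3f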

malcuch commented on June 9, 2024

Same here, pulling is extremely slow, only a few kilobytes per second in my case for mcr.microsoft.com/dotnet/sdk:8.0. The image has been downloading for over 20 minutes and is still less than 50% complete. The same issue occurs on our cloud CI (GitLab), which confirms that it is not a local network issue.

alexwilson1 commented on June 9, 2024

Same issue for me with mcr.microsoft.com/devcontainers/typescript-node in Washington (West US). Extremely slow.

x-msedge-ref: Ref A: 8336FC7B9F5542C3A96085A826C4F40F Ref B: STBEDGE0115 Ref C: 2024-04-24T08:38:31Z

If I try to load https://mcr.microsoft.com/ in Chrome I get:
ERR_HTTP2_PROTOCOL_ERROR

and the page doesn't even load.
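
The same failure can be reproduced outside the browser, e.g. with a curl build that has HTTP/2 support (an assumption about the local curl):

curl -v --http2 https://mcr.microsoft.com/ -o /dev/null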

Edit:
Everything works when I use a VPN in Iceland

YevheniiSemenko commented on June 9, 2024

Same here: US, Virginia (locally)
x-msedge-ref: Ref A: DDF6D4F14C74424EA0A5B5A75422F3D6 Ref B: WAW01EDGE0706 Ref C: 2024-04-24T09:43:10Z

and from a cloud environment (GCP, europe-west1-d)
< x-msedge-ref: Ref A: B79AFFC6A0004F1385B8F1E264050AEF Ref B: LTSEDGE1520 Ref C: 2024-04-24T09:47:21Z

lai-vson commented on June 9, 2024

GitLab-hosted runners are affected as well.

AndreHamilton-MSFT commented on June 9, 2024

> I'm getting the same/similar curl response as the others using curl -i https://mcr.microsoft.com
>
> X-MSEdge-Ref: Ref A: A9DCA542B0684B48A4E6838BE89C8983 Ref B: SLC31EDGE0117 Ref C: 2024-04-24T00:34:23Z
>
> If it helps, I know the problem started between April 22, 2024 at 7:56:50 AM MDT and April 22, 2024 at 5:42:33 PM MDT, as reported by a CI/CD pipeline I use.
>
> Additional curl results include (note the difference in the URLs):
>
> curl -i https://mcr.microsoft.com/dotnet
>
> X-MSEdge-Ref: Ref A: 9382DD583636410EA6E4F10B557C3CA6 Ref B: SLC31EDGE0211 Ref C: 2024-04-24T00:47:06Z
>
> curl -i https://mcr.microsoft.com/dotnet/sdk
>
> curl: (60) schannel: SNI or certificate check failed: SEC_E_WRONG_PRINCIPAL (0x80090322) - The target principal name is incorrect.

We are actively investigating this issue. Will follow up in a bit.

AndreHamilton-MSFT commented on June 9, 2024

> We are seeing very slow pulls in the AP Southeast/Singapore and India regions. Some layers download quickly, whereas others are stuck at a few kB/s or are really slow to download:
>
> #4 [e2e-api stage-0 1/4] FROM mcr.microsoft.com/playwright:v1.40.1-jammy@sha256:1aba528f5db4f4c130653ed1de737ddc1d276197cc4503d3bb7903a93b7fb32e
> ...
> #4 sha256:05f6649df41d0f3c197559f4473b47a6764d3f807d6c10145ab6bb01c722abcb 136.31MB / 555.83MB 3310.9s

Is this still occurring? We had an issue in Jio India West related to slow downloads that was recently mitigated. If you are still seeing slow downloads, can you provide an X-MSEdge-Ref for us? It would assist in narrowing down your specific issue.

AndreHamilton-MSFT commented on June 9, 2024

Is anyone in Utah still experiencing the invalid certificates? Some mitigations were applied in that region. Please let me know if you are unblocked @frnkeddy @ksacry-ft @mattkruskamp. The other issues regarding slowness are probably unrelated. Will update once we isolate that specific issue.

frnkeddy commented on June 9, 2024

I've confirmed that the issue has been resolved for me in Utah. All of my systems are now able to pull images successfully.

Thank you for working on and fixing the issue.
