securitytxt / security-txt
A proposed standard that allows websites to define security policies.
Home Page: https://securitytxt.org
License: Other
Some programs encourage users to use test accounts instead of real accounts; see for example Facebook:
https://www.facebook.com/whitehat/
Use test accounts when investigating issues. If you cannot reproduce an issue with a test account, you can use a real account (except for automated testing). Do not interact with other accounts without consent (e.g. do not test against Mark Zuckerberg’s account).
Hi,
Thanks for the great initiative, keep up the awesome work.
Unless I am missing something, I think that there is some ambiguity in the spec - namely which directives are required. Hopefully this can be clarified better in the spec.
I see that Contact is required, as there needs to be a way to contact the company, but the other directives are more unclear. I assume the intent is that the other fields are optional, since the draft does not say otherwise, but this could be clarified with a single sentence. For example: "The only directive that MUST be present in the security.txt file is Contact; the other fields are optional."
Any reason Disclosure is currently not required? I'm not going to presume to know much about the security research field, but I'll wager it is quite important for a security researcher to know before investing time into an application.
Could/should Acknowledgement be required, with a "None" value for the case when there is no acknowledgment?
Thanks again for this awesome draft!
I'd suggest making a feature to disallow automated testing (Burp, OWASP tools, scanners, etc.). Maybe Rate-limit: 0? :)
Currently the draft reads:
contact-field = "Contact" fs SP (email / uri / phone)
email = <Email address as per [RFC5322]>
phone = "+" *1(DIGIT / "-" / "(" / ")" / SP)
uri = <URI as per [RFC3986]>
URI already includes both email and telephone, via the "mailto" and "tel" schemes. Should we take out "email" and "phone" here and just rely on URI? Perhaps with examples?
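One point in favor of collapsing everything into the uri production: "mailto" and "tel" forms already parse cleanly as generic URIs. A minimal sketch (the example values are illustrative, not from the draft):

```python
from urllib.parse import urlparse

# Hypothetical Contact values expressed purely as URIs, with no separate
# email/phone grammar productions (example values are made up):
contacts = [
    "mailto:security@example.com",
    "tel:+1-201-555-0123",
    "https://example.com/security",
]

# A generic URI parser already distinguishes the three kinds by scheme.
schemes = [urlparse(c).scheme for c in contacts]
print(schemes)  # ['mailto', 'tel', 'https']
```

A parser would then need only one code path for Contact, dispatching on the scheme.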
# Our security policy
Encryption: https://example.com/security-policy.html
https://tools.ietf.org/html/draft-foudil-securitytxt-02#section-2.8
Organizations running bug bounty programs often have vendor- or partner-owned properties that are not in scope of such programs for organizational/legal reasons. It's not trivial for bug hunters to identify such properties: sometimes they are branded as the main company, hosted in its IP space, and/or share the DNS domain space. A common example is a vendor-operated ticketing system, webmail system, or a marketing site hosted by a third party. See this for a better description of this issue.
While it's not trivial to enumerate such sites for large organizations, it might be feasible to use a security.txt field to express that a given property is not "covered" by a bug bounty/VRP, i.e. the owner of the application does not grant permission for unauthorized security testing of it. Something like a Get-Off-My-Lawn: please setting (of course the naming would need to change).
This is much easier to manage, as the domain names for various partner-owned services change constantly, and usually without syncing with the teams managing the BB/VRP (after all, they're not covered by such programs in the first place).
Note that this is a different situation from simply not exposing a security.txt file. A security contact might exist for the web property, but there is no upfront permission given for testing, for whatever reason.
This would help the companies running large BB/VRP programs annotate such properties, and bughunters would be able to confirm the scope before starting their tests.
To provide more assurance that the PGP key information is accurate and trusted, adding the fields below would be useful:
I like all ideas that improve security and allow for easier coordination between security researchers, but adding a contact e-mail address in that file is a sure way to get bots to collect it and then start spamming it with offers for security tools.
A good system/network administrator shouldn't have any problems finding the right e-mail address through other, slightly harder-to-scrape ways.
Just my 2 cents...
A lot of metadata files for things similar to security.txt are commonly served from /.well-known/. IANA provide the Well Known URI registry for this purpose:
https://tools.ietf.org/html/rfc5785#section-5.1
https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml
It might be useful to get an assignment from IANA for serving security.txt from the /.well-known/ path.
It would be great if pasting the https://securitytxt.org/ URL to social media (like Slack chat) would automatically result in extra link preview info unfurl with a few lines of text and maybe even a pic.
Setting expectations around rewards is important to avoid confusion later in the disclosure process. Providing the CVSS environmental scores (the modified requirement scores for Confidentiality, Integrity, and Availability) helps correct the CVSS per asset. Some assets simply don't have access to any confidential information, or can be offline without impacting anything. It should be possible to reflect this, not only to set the correct expectations, but also to direct hackers to the interesting assets that they can hack.
This should be an optional field, as some attack surfaces are too big or unknown to correctly assess the environmental scores.
With the current format it might be slightly harder to add this field, but it could look something like this:
In-Scope: gratipay.com (CVSS:3.0/CR:L/IR:L/AR:L)
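A sketch of how a parser might split such a line into the asset and the optional environmental vector; the In-Scope field name and the parenthesized vector syntax are this proposal's assumptions, not part of the draft:

```python
import re

line = "In-Scope: gratipay.com (CVSS:3.0/CR:L/IR:L/AR:L)"

# Capture the asset, then an optional parenthesized CVSS environmental vector.
m = re.match(r"In-Scope:\s*(\S+)(?:\s*\((CVSS:[^)]+)\))?", line)
asset, vector = m.group(1), m.group(2)
print(asset)   # gratipay.com
print(vector)  # CVSS:3.0/CR:L/IR:L/AR:L
```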
Hi,
just published a small package for exposing security.txt for Node.js apps. If you'd like, I would be happy to move it into this org!
You can find the package here: https://github.com/gergelyke/express-security.txt
Let me know what you think! :)
Best,
Gergely
This security.txt file has the potential to standardize many other reporting methods beyond security vulnerabilities to the site. For example:
Whistleblower: [url] # for submitting tips or concerns
LEO: [url] # where law enforcement should go for information
NCMEC: [url] # for information related to NCMEC reports or requests. This includes requests from law enforcement. (In the US, every service provider must be registered with the National Center for Missing and Exploited Children, since it is a federal requirement to report child abuse. Service providers are mandatory reporters.)
Legal: [url] # for other legal requests; things to direct to the legal department
All of these are different types of "security" issues, even if they are not related to vulnerability disclosure.
Programs have different disclosure policies; some aspects of them could be expressed in a Disclosure field:
Several parties have raised concerns around the possibility of an attacker modifying the security.txt file. We may want to expand the security consideration section of the draft to address this.
Since some sites do not use a bug bounty platform, there should be a way to refer to a PGP public key to be used when contacting the company. Probably by setting something like:
PGP-key: URL
Will the "Contact:" field support Tor .onion hidden services?
Something like:
Contact: http://randomdomainname.onion/
Some people may want to report vulnerabilities or other information anonymously.
Since multiple "Contact:" lines are permitted, it should be explicit that a failure for one means trying the next one. I.e., if the .onion URL doesn't resolve (e.g., you're not on Tor), then try the next contact URL.
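The fallback behaviour could be as simple as walking the Contact values in listed order. A sketch, where the reachability check is a caller-supplied stand-in (a real client would attempt the connection, Tor-aware or not):

```python
def first_reachable(contacts, is_reachable):
    """Return the first Contact value that works, trying them in listed order."""
    for contact in contacts:
        if is_reachable(contact):
            return contact
    return None

contacts = [
    "http://randomdomainname.onion/",  # unreachable off the Tor network
    "https://example.com/security",
]

# Simulate a client that cannot resolve .onion addresses:
chosen = first_reachable(contacts, lambda c: ".onion" not in c)
print(chosen)  # https://example.com/security
```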
I expect that here will be a number of people, like me, who will visit looking for an explanation of the relationship between this proposal and RFC 2142. What does this proposal address that RFC 2142 does not? Will this supersede RFC 2142 for security@domain? Etc. These considerations should be an intrinsic part of the draft-foudil-securitytxt submission.
While generating, understanding, and parsing this format is not hard, would you consider using JSON as the data format? When converted to a JSON object, it's still easy for a human to read, and all computers can naturally digest/generate it.
Because first, on the website, I see this:
And then on GitHub, I saw this:
It wouldn't hurt the legitimacy of the project to have a consistent visual language. Same typographies, designs, color schemes... In everything you produce.
(I'd personally start by killing this)
Example: liberapay/liberapay.com#887
If you really want this to become a standard, then you should definitely consider submitting this and going through the process of making this an RFC.
And you really should change the readme to say that it's a proposal, not a standard.
The Encryption: directive should also allow referencing existing type 61 (OPENPGPKEY) DNS resource records (in conjunction with proper DNSSEC signatures, this should be considered more secure).
Independently of the above (e.g., in cases where both a type 61 resource record as well as a link to a file containing a public key exist and both are associated with the same email address), it would be helpful if the standard explicitly suggested what to do in cases where there are inconsistencies.
The rewards are usually mostly affected by the impact of the vulnerability, not by its type.
draft-foudil-securitytxt-01.txt states the following:
In order to ensure the authenticty of the security.txt file one
SHOULD use the "Signature:" directive, which allows you to link to an
external signature or to directly include the signature in the file.
External signature files should be named "security.txt.sig" and also
be placed under the /.well-known/ path.
In the next version we should make it very clear that the security.txt.sig file should be served over HTTPS.
A lot of vendors use their Platform page as their Security page or link to it from their Security page. I'd suggest to keep it simple and remove Platform from the RFC. There could be a few examples for the Security-page directive that show it may link to a platform entry. Thoughts?
Since the vendors are in complete control over the security.txt, it'd be good to give some leverage to the hackers. There have been instances in the past where the vendor changed the rules of engagement after the hacker submitted a security vulnerability. To avoid discussion around the rules that applied when the hacker submitted the vulnerability, it'd be good to have some form of versioning in the file itself. This might not be trivial to implement in the file itself because the company is in complete control of the file contents.
One idea could be for a third party to introduce a service that caches the current version of a security.txt file. The way it could work is that the service downloads the security.txt file and returns a unique URL that proves the file contents were on the site at one point. This should be accompanied by a timestamp and could be accompanied by a hash.
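A sketch of the record such a caching service might return; the field names and the file contents are made up for illustration:

```python
import hashlib
from datetime import datetime, timezone

# Contents the service fetched from the site's security.txt (illustrative):
body = b"Contact: security@example.com\n"

# The hash pins the exact contents; the timestamp records when they were seen.
record = {
    "sha256": hashlib.sha256(body).hexdigest(),
    "fetched_at": datetime.now(timezone.utc).isoformat(),
}
print(record["sha256"])
```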
I’m wondering how to verify the authenticity of a policy and what it applies to. Would it be safe to say that the policy only applies to the domain on which it’s hosted? In that case, would you need a file for each domain? Can policies in child domains reference and delegate to a parent's policy? Can a parent override a child? I assume a parent itself would not be authoritative for its children. And for targets that are not in a contiguous domain namespace, or not web-related, how do you verify the policy applies to those targets?
This kind of feels like cookies to me. With robots.txt we know the policy applies to the target we’re contacting, and the same goes for SSL certs. Could this maybe be implemented alongside, or in addition to, a DNS TXT record with a similar goal?
As an example if I had a policy on my site at Attacker.com and said Foo.com mobile app for android is in scope, but Foo.com did not have that in scope, who would be authoritative and why? I’m thinking mostly in the context of using the file for automation purposes.
Just spit balling here.
such as:
Mobile applications,
Desktop applications,
Public open source code (repo on github/gitlab, etc..)
etc...
"Security.txt is the equivalent of robots.txt, but for security issues."
robots.txt is "A Standard for Robot Exclusion", or the "Robots Exclusion Protocol". A robots.txt file sets out the things that a robot/spider is not permitted to do, whereas it feels that this proposal is about setting out things that a security researcher is permitted to do.
I think it's worth mentioning this, and including an explicit section stating the semantics of an absent security.txt to remove all ambiguity (probably that the absent file should be treated identically to Disallow: *).
In Section 6.1 of the latest draft, security.txt is marked for inclusion in the IETF's Well-Known URI's registry. However, the draft also "hard-codes" a second file, security.txt.sig. That should be included in the registry as well.
Here's a suggested revision:
6.1. Well-Known URIs registry
The "Well-Known URIs" registry should be updated with the following
- additional value (using the template from [RFC5785]):
+ additional values (using the template from [RFC5785]):
URI suffix: security.txt
+ URI suffix: security.txt.sig # add this line
Change controller: IETF
Specification document(s): this document
For example, see what B3nac developed at:
https://github.com/B3nac/security-txt/commit/692e6374682e18cf17dae21da760df311755a680
Add CI integration with something like CircleCI so every commit to the draft branch will automatically regenerate the TXT and HTML versions.
Some examples:
https://github.com/quicwg/base-drafts
https://github.com/httpwg/http-extensions
https://tools.ietf.org/html/draft-foudil-securitytxt-01#section-3.2
3.2. File systems
File systems SHOULD place the security.txt file under the root
directory; e.g. /.security.txt, C:\.security.txt.
<CODE BEGINS>
.
├── .security.txt
├── example-directory-1
├── example-directory-2
├── example-directory-3
└── example-file
<CODE ENDS>
¯\_(ツ)_/¯
I reviewed the draft for security.txt and the web site named "securitytxt.org" that generates security.txt files. I think there are many serious issues. These issues include:
I prefer the URL, but that isn't what the draft RFC says.
The draft RFC has a lot of ambiguity.
For example, section 2.3 says that 'Contact:' must exist, but then lists different values that serve different purposes:
Contact: [email protected]
Contact: +1-201-555-0123
Contact: https://example.com/security
This becomes a parsing nightmare. If it is going to do this, then it needs to standardize the types of content:
Contact.email: [email protected]
Contact.phone: +1-201-555-0123
Contact.url: https://example.com/security
I can list multiple contact methods. However, there is also no precedence listed for a preferred contact method.
The easy solution is to say that the precedence is in listed order. In the example, email is preferred, then phone, then URL.
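Without typed sub-fields, every parser ends up re-implementing the same guessing heuristic. A sketch of what that looks like, applying listed-order precedence (the classification rules here are assumptions a parser would have to invent, and the addresses are illustrative):

```python
def classify(value):
    """Guess whether a Contact value is a url, phone, or email."""
    if value.startswith(("http://", "https://")):
        return "url"
    if value.startswith("+"):
        return "phone"
    if "@" in value:
        return "email"
    return "unknown"

lines = [
    "Contact: security@example.com",
    "Contact: +1-201-555-0123",
    "Contact: https://example.com/security",
]

# Precedence is simply listed order: the first line is the preferred method.
ordered = [classify(l.split(": ", 1)[1]) for l in lines]
print(ordered)  # ['email', 'phone', 'url']
```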
I see room for exploitation as soon as someone builds a parser that harvests these security.txt files.
E.g., I can list 1 million Contact: lines and try to overflow the parser's buffer -- or at least consume all of their memory.
And there's no limit to the length of these fields.
And nothing stops me from adding in my own custom field, like a massive video (DVD-HD, of course) that shows how to submit a bug.
"It's not my fault you can't decode my security.txt file! I'm following the standard!"
Location, location, location...
Nothing says it has to be at "/security.txt".
So I could be a big company with:
https://bigcompany.com/security.txt
https://bigcompany.com/product1/security.txt
https://bigcompany.com/product2/security.txt
https://bigcompany.com/product3/security.txt
https://bigcompany.com/ads/security.txt
https://bigcompany.com/personel/security.txt
...
Or maybe:
https://bigcompany.com/security.txt
https://bigcompany-cdn.com/security.txt
https://security.bigcompany.com/security.txt
Nothing says that these have to be the same file!
If people/bots start looking deeper into the directory structure, then I -- as a user who controls a subdirectory -- can hijack the vulnerability reporting process for a big company that gave me a free account.
The Draft RFC says every line must end with a "\n".
That becomes a problem when Windows users use \r\n.
I enjoy these English-only standards.
"Contact", "Encryption", "Disclosure"... yeah, foreign countries will love this.
Encryption is supposed to be for your PGP key.
But what if I want to use some other type of encryption?
Or what if someday PGP falls out of favor. Then what?
There's no mention of existing standards that specify a security contact.
For example, RFC2142 says that there should be a security@ email address.
DNS hijacking is a very real problem. How can I validate that this security.txt file is legitimate?
Perhaps introduce a digital signature that can be validated via SSL, PGP, or DNS TXT record? (Not SSL, PGP, or DNS used to access the file.)
Edit: Formatting, spelling.
What's in the current proposal works great for a website or even a group of websites; it can even work for shipped products if you have a single point of contact for reporting all issues. But what if the point of contact is different for these things? For example, it's not uncommon for a company to have one or more websites run by IT with one point of contact for security issues, and products shipped to customers with a different point of contact for security issues. Perhaps a way to set scopes: this is in scope, this is out of scope, and this is the contact for those things that are in scope.
Admittedly, it would be easy enough to indicate that all products are out of scope in the file, but then the natural question for the person trying to use the file is "So, who DO I contact?" And they'll use the contact information in this file anyway, even when it indicates that what they're reporting is out of scope.
Having free-format definitions for vulnerability types will result in people using different naming for the same vulnerability types. This makes it harder to consume for computers. I'd propose to use Common Weakness Enumeration (CWE) for this. This would require the file to define a CWE version number and a list of CWE IDs. It could look something like this:
Out-of-scope-vuln: CAPEC-103 (clickjacking)
Out-of-scope-vuln: CWE-77 (command injection)
Using the parentheses is optional (they will be ignored by parsers), but they give humans the ability to interpret the meaning of the CWE without needing to look it up. It could be validated with something like:
((?:CAPEC|CWE)-\d+)(\s*\(.*\))?
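A quick check of that validation in code; note the alternation must be a group, (?:CAPEC|CWE), since a square-bracket character class like [CAPEC|CWE] would match individual letters rather than the literal prefixes:

```python
import re

# Anchored sketch of the proposed validator for Out-of-scope-vuln values.
pattern = re.compile(r"^((?:CAPEC|CWE)-\d+)(\s*\(.*\))?$")

for value in ("CAPEC-103 (clickjacking)", "CWE-77 (command injection)", "CWE-89"):
    m = pattern.match(value)
    print(value, "->", m.group(1))
```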
it would be great if there was a field for specifying an organization's warrant canary url.
The ".well-known" standard, RFC 5785, is the proposed way for websites to store things like "robots.txt":
https://tools.ietf.org/html/rfc5785
It is currently used for the ACME protocol used by LetsEncrypt for SSL domain verification:
https://tools.ietf.org/html/draft-ietf-acme-acme-07#section-9.2
Other protocols out there that use "/.well-known":
https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml
Please consider supporting this for "security.txt" as well
Many organisations these days have a public vulnerability disclosure policy. These policies can have a legal status granting researchers rights when they follow the disclosure process described in the policy. For example in the Netherlands organisations state that they will not report researchers to the police or sue them, if they follow the guidelines.
At the moment, security.txt does not include information about disclosure policies. There has been a proposal before (#6), which was initially accepted but later removed again.
Currently, there does not seem to be a completely standardised disclosure policy, so a machine-interpretable field seems too far-fetched. Even the location of a disclosure policy is far from standardised; often web-searching "disclosure policy " is easier than trying to navigate the website.
As an alternative, I'd like to propose a "Policy:" field that contains a link to the disclosure policy of that organisation.
If the idea is acceptable, I'd be happy to write a pull request for this field.
I've written a pretty basic parser and cli tool for the existing draft. I'll try and keep it up to date as further revisions are posted, but is there somewhere that clients and tools could be listed?
Great job with this initiative!
It is similar to http://humanstxt.org/ which uses a humans.txt file to list the humans behind the website. It is under-utilized, but here are some examples:
http://humanstxt.org/humans/
https://medium.com/humans.txt
https://www.google.com/humans.txt
http://www.nytimes.com/humans.txt
http://www.netflix.com/humans.txt
https://basecamp.com/humans.txt
https://trello.com/humans.txt
http://www.symantec.com/humans.txt
http://www.gizmodo.com/humans.txt
https://disqus.com/humans.txt ;-)
ETC.
I think your work to make security information more accessible is very important; however, I would like to urge you to consider joining with humanstxt.org and storing that information in humans.txt instead of adding yet another file.
humans.txt is underutilized due to a lack of awareness, and perhaps security.txt will be too. Joining efforts with humanstxt.org to promote a single .txt file with all relevant contact information (including security) would help raise awareness instead of dividing efforts between different txt files.
Again, awesome job with this! I hope you consider adopting the name humans.txt instead of security.txt
BountyProgram: https://url1.here
BountyProgram: https://url2.here
Allow zero or more items.
What are the valid values for Reward and Disclosure? The other fields are pretty unambiguous, and even the numeric parts of Reward and Disclosure are, but which keywords can replace Medium in Reward and Full in Disclosure? For example: Reward: Low-50, Disclosure: Partial-7, Disclosure: Full-never.
We need the draft to describe these keywords.
Bug bounties have different sets of possible payment methods, such as:
CryptoCurrencies directly (BTC,LTC,ETH,XMR,etc...)
CryptoCurrencies via exchange account (such as coinbase account)
PayPal
BankTransfers
etc..
Example of bug bounty that have limited set of payment options is F-Secure:
https://www.f-secure.com/en/web/labs_global/vulnerability-reward-program
Payments are made as bank transfers within the Single Euro Payments Area (SEPA) or international bank (wire) transfers outside the SEPA. We cannot use checks, cryptocurrencies, or use any other money transfer services. The payment recipient is responsible for any charges or fees levied on the transfer, and for accessing the funds once transferred. Payments are by default done in Euros (EUR) and any currency conversions are done at the current bank rate.
It would be awesome if website owners could specify a key, then use an encryption algorithm, say RSA, to encrypt their data. I'd imagine if done right, this could take a while to decrypt, and thus make it difficult for scrapers to abuse this system.
I think this would resolve some of the concerns people have voiced around getting their contact information spammed (for example, https://www.bleepingcomputer.com/news/security/security-txt-standard-proposed-similar-to-robots-txt/#cid6271).
It would be hard for bots to decrypt a ton of slow-decrypting keys, but easy for someone to decrypt just one. I'm not sure whether RSA would be the right algorithm for this; there may be a better option.
EDIT: An alternative would be making the key a few rounds of bcrypt on the hostname. This would mean bots would need to know the domain, run the bcrypt, and then decrypt the data using the key for each address.
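A sketch of that derivation using PBKDF2 from the Python standard library as the slow KDF (bcrypt itself needs a third-party package); the iteration count and the fixed salt are arbitrary choices for illustration:

```python
import hashlib

def contact_key(hostname: str, iterations: int = 200_000) -> bytes:
    """Derive a decryption key from the hostname via a deliberately slow KDF."""
    return hashlib.pbkdf2_hmac("sha256", hostname.encode(), b"security.txt", iterations)

# A scraper must pay this cost once per domain before it can decrypt anything;
# a human decrypting a single file barely notices the delay.
key = contact_key("example.com")
print(len(key))  # 32
```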
If the standard would allow ; instead of, or in addition to, \n as a field delimiter, it would be easy to alternatively put everything into a (single) DNS TXT resource record (cf. SPF, the Sender Policy Framework).
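A sketch of how the same directives would round-trip through a single semicolon-delimited TXT record (the field values are illustrative):

```python
# One DNS TXT record carrying what the file form carries on separate lines:
txt_record = "Contact: security@example.com; Encryption: https://example.com/key.asc"

# Split on the alternative delimiter exactly as a file parser splits on \n.
fields = [f.strip() for f in txt_record.split(";") if f.strip()]
print(fields)
```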
Just a way of helping people, if there is going to be a wiki: if you run a cPanel server and want security.txt added to every account created on the server, you can add security.txt to
/root/cpanel3-skel/public_html/
and it will be copied into every new account generated.