
perl5's Introduction

For support, please post in the new Exercism forum. New posts here will be closed.


Welcome to Exercism

Where to open issues

For the time being we are triaging all issues from our forum. Please start a new topic there for your issue (presuming there isn't one already). Issues opened here will be automatically closed and you will receive a message redirecting you to the forum.

Feeling uncomfortable?

If you need to report a code of conduct violation, please email us at [email protected] and include [CoC] in the subject line. We will follow up with you as a priority.

Where to find the code

The code for the website lives in exercism/website. The code for the old website is in this repository, in the v1.exercism.io branch.

Who's behind Exercism?

Read about our Team on the site: https://exercism.org/team

perl5's People

Contributors

alexkalderimis, austinlyons, autark, bentglasstube, bethanyg, bistik, catb0t, cgrayson, choroba, cite-reader, cxw42, darksuji, dkinzer, drewbs, ee7, erikschierboom, exercism-bot, glennj, kappa, kentfredric, kotp, kytrinyx, m-dango, monsenhor, petertseng, rfilipo, sshine, stonemirror, szabgab, yanick


perl5's Issues

Consider re-ordering exercises in config.json

Right now it looks like the exercises have been reordered to be alphabetical, which isn't necessarily the most useful progression.

I'd recommend making a rough guess at which exercises are easier and moving them towards the front of the list, and moving the harder ones towards the end.

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of Operation Welcome Contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer
Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source
Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes
Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

What Perl version should exercises target?

There's no explicit requirement in the existing code, but I feel that we should document the versions that the provided tests are expected to run on. I don't favor writing extremely backwards-compatible code; I'd rather that the tests be fairly decent models of what Perl should look like. To that end, I'd like to say that the test scripts don't need to run on Perls before 5.10.0. If they do, great, but don't make me write my $y = defined $x ? $x : '';. I think that 5.10 gives us the best balance.
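
For example, targeting 5.10 lets the test scripts and example solutions use the defined-or operator directly. A minimal sketch (the variable names are purely illustrative):

use v5.10;                  # enables // and say

my $x;                      # possibly undefined input
my $y = $x // '';           # 5.10 defined-or, instead of: defined $x ? $x : ''
say "y is '$y'";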

But I'm only one (opinionated) guy, so if there are other opinions on the subject I'd like us to discuss them. If it so happens that no other preferences emerge, I'll update the README to reflect this expectation.

Where are the Perl 5 communities and enthusiasts?

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback.

Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback. You can read more about this aspect of the new site here: http://mentoring.exercism.io/

To do this, we're going to need a lot more information about where we can find language enthusiasts.

  • Is Perl 5 supported by one or more large organizations?
  • Does Perl 5 have an official community manager?
  • Do you know of specific communities (online or offline) that are enthusiastic about Perl 5? (Chat communities, forums, meetups, student clubs, etc)
  • Are there popular conferences for Perl 5? (If so, what are some examples?)
  • Are there any organizations who are targeted specifically at getting certain subgroups or demographics interested in Perl 5? (e.g. kids, teenagers, career changers, people belonging to various groups that are typically underrepresented in tech?)
  • Are there specific groups or programs dedicated to mentoring people in Perl 5?
  • Are there popular newsletters for Perl 5?
  • Is Perl 5 taught at programming bootcamps? (If so, what are some examples?)
  • Is Perl 5 taught at universities? (If so, what are some examples?)

In other words: where do people care a lot and/or know a lot about Perl 5?

This is part of the project being tracked in exercism/meta#103

Implement Perl 5 exercises

Copied from exercism/exercism#1027

  • accumulate
  • allergies
  • anagram
  • atbash-cipher
  • bob
  • linked-list
  • prime-factors
  • queen-attack
  • raindrops
  • rna-transcription
  • triangle
  • word-count
  • wordy
  • beer-song
  • clock
  • crypto-square
  • nucleotide-count
  • point-mutations
  • pig-latin
  • phone-number
  • grade-school
  • robot-name
  • etl
  • leap
  • space-age
  • grains
  • hexadecimal
  • gigasecond
  • scrabble-score
  • roman-numerals
  • binary
  • kindergarten-garden
  • pascals-triangle
  • proverb
  • say
  • simple-cipher
  • trinary

Remove over-emphasis on object-orientation

Many of the exercises require an object to be created from a single value (often a number or string) only to call one method on it. This ends up feeling very Rubyish, which makes sense, but there is no need for objects in most of these cases, and a simple functional API would make the most sense.
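
As a rough sketch of the difference (the class, module, and sub names here are purely illustrative, not the track's actual API):

# Object-oriented style many exercises currently require:
my $strand = DNA->new('ACGT');
print $strand->to_rna, "\n";

# A simple functional API would often be more natural in Perl:
use RNATranscription qw(to_rna);
print to_rna('ACGT'), "\n";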

Verify contents and format of track documentation

Each language track has documentation in the docs/ directory, which gets included on the site
on each track-specific set of pages under /languages.

We've added some general guidelines about how we'd like the track to be documented in exercism/exercism#3315
which can be found at https://github.com/exercism/exercism.io/blob/master/docs/writing-track-documentation.md

Please take a moment to look through the documentation about documentation, and make sure that
the track is following these guidelines. Pay particularly close attention to how to use images
in the markdown files.

Lastly, if you find that the guidelines are confusing or missing important details, then a pull request
would be greatly appreciated.

Update config.json to match new specification

For the past three years, the ordering of exercises has been done based on gut feelings and wild guesses. As a result, the progression of the exercises has been somewhat haphazard.

In the past few months maintainers of several tracks have invested a great deal of time in analyzing what concepts various exercises require, and then reordering the tracks as a result of that analysis.

It would be useful to bake this data into the track configuration so that we can adjust it over time as we learn more about each exercise.

To this end, we've decided to add a new key exercises in the config.json file, and deprecate the problems key.

See exercism/discussions#60 for details about this decision.

Note that we will not be removing the problems key at this time, as this would break the website and a number of tools.

The process for deprecating the old problems array will be:

  • Update all of the track configs to contain the new exercises key, with whatever data we have.
  • Simultaneously change the website and tools to support both formats.
  • Once all of the tracks have added the exercises key, remove support for the old key in the site and tools.
  • Remove the old key from all of the track configs.

In the new format, each exercise is a JSON object with three properties:

  • slug: the identifier of the exercise
  • difficulty: a number from 1 to 10 where 1 is the easiest and 10 is the most difficult
  • topics: an array of strings describing topics relevant to the exercise. We maintain
    a list of common topics at https://github.com/exercism/x-common/blob/master/TOPICS.txt. Do not feel like you need to restrict yourself to this list;
    it's only there so that we don't end up with 20 variations on the same topic. Each
    language is different, and there will likely be topics specific to each language that will
    not make it onto the list.

The difficulty rating can be a very rough estimate.

The topics array can be empty if this analysis has not yet been done.

Example:

"exercises": [
  {
    "slug": "hello-world" ,
    "difficulty": 1,
    "topics": [
        "control-flow (if-statements)",
        "optional values",
        "text formatting"
    ]
  },
  {
    "difficulty": 3,
    "slug": "anagram",
    "topics": [
        "strings",
        "filtering"
    ]
  },
  {
    "difficulty": 10,
    "slug": "forth",
    "topics": [
        "parsing",
        "transforming",
        "stacks"
    ]
  }
]

It may be worth making the change in several passes:

  1. Add the exercises key with the array of objects, where difficulty is 1 and topics is empty.
  2. Update the difficulty settings to reflect a more accurate guess.
  3. Add topics (perhaps one-by-one, in separate pull requests, in order to have useful discussions about each exercise).

Bob is inconsistent with other languages

Posted by @meantime in exercism.io:

The instructions for the Bob assignment say:

He answers 'Whoa, chill out!' if you yell at him.
However, some of the language test cases check for "Woah" (note that the 'h' has moved to the end)

The difference exists in Objective-C, Perl 5, OCaml, Elixir, and Clojure.

Name nucleobases, not nucleosides

The primary nucleobases are cytosine (DNA and RNA), guanine (DNA and RNA), adenine (DNA and RNA), thymine (DNA) and uracil (RNA), abbreviated as C, G, A, T, and U, respectively. Because A, G, C, and T appear in the DNA, these molecules are called DNA-bases; A, G, C, and U are called RNA-bases. - Wikipedia

In other words, we should rename the values in the RNA transcription problem to reflect the following:

  • cytidine -> cytosine
  • guanosine -> guanine
  • adenosine -> adenine
  • thymidine -> thymine
  • uridine -> uracil
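
With those renames in place, the DNA-to-RNA transcription table reads in terms of nucleobases. A minimal sketch (the hash name is illustrative):

my %transcribe = (
    G => 'C',    # guanine  -> cytosine
    C => 'G',    # cytosine -> guanine
    T => 'A',    # thymine  -> adenine
    A => 'U',    # adenine  -> uracil
);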

Updated tests for the Custom Set problem

In order to reduce the amount of code required to pass incremental tests (assuming that users pass tests starting from the top), the order of the tests was modified slightly.

Since this track implements Custom Set, please take a look at the new custom-set.json file and see if your track should update its tests.

If you do need to update your tests, please refer to this issue in your PR. That helps us see which tracks still need to update their tests.

If your track is already up to date, go ahead and close this issue.

More details on this change are available in exercism/problem-specifications#257.

Update installation docs

Today in the exercism/support gitter there was an error message reported about Test/More.pm not found.

He used a minimal, plain centos:7 image inside a Docker container. He did not install anything further, because the track's documentation states that everything should be fine on Linux.

Further research on this topic discovered that, in fact, perl itself is preinstalled on CentOS because it is needed internally (perhaps even because it would be needed to rebuild the kernel).

Anyway, Test::More is not bundled with perl on Fedora and CentOS; one needs to yum install perl-Test-Simple there. On other RPM-based distributions Test::More seems to be included with perl, see rpmfind: https://www.rpmfind.net/linux/rpm2html/search.php?query=perl(Test%3A%3AMore)

Nevertheless, as we have found at least one major distribution where the current docs' claim of "no need to install anything" does not hold, I think they should be re-evaluated and the major distributions should be listed explicitly, e.g.:

  • CentOS and Fedora: install perl-Test-Simple additionally
  • Ubuntu and Debian: no need to install anything further (this is an unverified assumption, included to illustrate the explicit format)

Investigate track health and status of the track

I've used Sarah Sharp's FOSS Heartbeat project to generate stats for each of the language track repositories, as well as the x-common repository.

The Exercism heartbeat data is published here: https://exercism.github.io/heartbeat/

When looking at the data, please disregard any activity from me (kytrinyx), as I would like to get the language tracks to a point where they are entirely maintained by the community.

Please take a look at the heartbeat data for this track, and answer the following questions:

  • To what degree is the track maintained?
  • Who (if anyone) is merging pull requests?
  • Who (if anyone) is reviewing pull requests?
  • Is there someone who is not merging pull requests, but who comments on issues and pull requests, has thoughtful feedback, and is generally helpful? If so, maybe we can invite them to be a maintainer on the track.

I've made up the following scale:

  • ORPHANED - Nobody (other than me) has merged anything in the past year.
  • ENDANGERED - Somewhere between ORPHANED and AT RISK.
  • AT RISK - Two people (other than me) are actively discussing issues and reviewing and merging pull requests.
  • MAINTAINED - Three or more people (other than me) are actively discussing issues and reviewing and merging pull requests.

It would also be useful to know if there is a lot of activity on the track, or just the occasional issue or comment.

Please report the current status of the track, including your best guess on the above scale, back to the top-level issue in the discussions repository: exercism/discussions#97

Make Hamming conform to official definition

From issue exercism/exercism#1867

Wikipedia says the Hamming distance is not defined for strings of different length.

I am not saying the problems cannot be different, but for such a well-defined concept it would make sense to stick to one definition, especially when the READMEs provide so little information about what is expected from the implementation.

Let's clean this up so that we're using the official definition.
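
Under the official definition, an implementation should refuse strands of unequal length rather than silently truncating. An illustrative sketch (the sub name is not the track's prescribed interface):

sub hamming_distance {
    my ($left, $right) = @_;
    die "strands must be of equal length\n" if length $left != length $right;

    my $distance = 0;
    for my $i (0 .. length($left) - 1) {
        $distance++ if substr($left, $i, 1) ne substr($right, $i, 1);
    }
    return $distance;
}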

bob: Update to clarify ambiguity regarding shouted questions

TL;DR: the problem specification for the Bob exercise has been updated. Consider updating the test suite for Bob to match. If you decide not to update the exercise, consider overriding description.md.


Details

The problem description for the Bob exercise lists four conditions:

  • asking a question
  • shouting
  • remaining silent
  • anything else

There's an ambiguity, however, for shouted questions: should they receive the "asking" response or the "shouting" response?

In exercism/problem-specifications#1025 this ambiguity was resolved by adding an additional rule for shouted questions.

If this track uses exercise generators to update test suites based on the canonical-data.json file from problem-specifications, then now would be a good time to regenerate 'bob'. If not, then it will require a manual update to the test case with input "WHAT THE HELL WERE YOU THINKING?".

See the most recent canonical-data.json file for the exact changes.

Remember to regenerate the exercise README after updating the test suite:

configlet generate . --only=bob --spec-path=<path to your local copy of the problem-specifications repository>

You can download the most recent configlet at https://github.com/exercism/configlet/releases/latest if you don't have it.

If, as track maintainers, you decide that you don't want to change the exercise, then please consider copying problem-specifications/exercises/bob/description.md into this track, putting it in exercises/bob/.meta/description.md and updating the description to match the current implementation. This will let us run the configlet README generation without having to worry about the bob README drifting from the implementation.

Pass explicit list of multiples in "Sum of Multiples" exercise rather than defaulting to 3 and 5

Hello, as part of exercism/problem-specifications#198 we'd like to make the sum of multiples exercise less confusing. Currently, the README specifies that if no multiples are given it should default to 3 and 5.

We'd like to remove this default, so that a list of multiples will always be specified by the caller. This makes the behavior explicit, avoiding surprising behavior and simplifying the problem.

Please make sure this track's tests for the sum-of-multiples problem do not expect such a default. Any tests that want to test behavior for multiples of [3, 5] should explicitly pass [3, 5] as the list of multiples.

After all tracks have completed this change, then exercism/problem-specifications#209 can be merged to remove the defaults from the README.

The reason we'd like this change to happen before changing the README is that it was very confusing for students to figure out the default behavior. It wasn't clear from simply looking at the tests that the default should be 3 and 5, as seen in exercism/exercism#2654, so some had to resort to looking at the example solutions (which aren't served by exercism fetch, so they have to find it on GitHub). It was added to the README to fix this confusion, but now we'd like to be explicit so we can remove the default line from the README.

You can find the common test data at https://github.com/exercism/x-common/blob/master/sum-of-multiples.json, in case that is helpful.
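
A sketch of what an explicit test might look like (the module and sub names are hypothetical; the track's actual interface may differ):

use Test::More;
use SumOfMultiples qw(sum_of_multiples);

# The multiples [3, 5] are passed explicitly rather than assumed as a default.
is sum_of_multiples([3, 5], 10), 23, 'multiples of 3 and 5 up to 10';

done_testing;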

Verify "Largest Series Product" exercise implementation

There was some confusion in this exercise due to the ambiguous use of the term consecutive in the README. This could be taken to mean contiguous, as in consecutive by position, or as in consecutive numerically. The README has been fixed (exercism/problem-specifications#200).

Please verify that the exercise is implemented in this track correctly (that it finds series of contiguous numbers, not series of numbers that follow each other consecutively).

If it helps, the canonical inputs/outputs for the exercise can be found here:
https://github.com/exercism/x-common/blob/master/largest-series-product.json

If everything is fine, go ahead and just close this issue. If there's something to be done, then please describe the steps needed in order to close the issue.

Copy track icon into language track repository

Right now all of the icons used for the language tracks (which can be seen at http://exercism.io/languages) are stored in the exercism/exercism.io repository in public/img/tracks/. It would make a lot more sense to keep these images along with all of the other language-specific stuff in each individual language track repository.

There's a pull request that is adding support for serving up the track icon from the x-api, which deals with language-specific stuff.

In order to support this change, each track will need to copy its icon into the repository.

In other words, at the end of it you should have the following file:

./img/icon.png

See exercism/exercism#2925 for more details.

Update ABOUT.md so that it complies with meta#85

Re: exercism/meta#85

In summary, ABOUT.md should now only use:

  • Bold
  • Italics
  • Links
  • Bullet lists
  • Number lists
  • Each sentence should be on its own line
  • Paragraphs separated by an empty line
  • Explicit <br/> can be used to split a paragraph into lines without spacing between them (discouraged).

Meetup - 5th Monday

There is an interesting edge case in the meetup problem:
some months have five Mondays.

March of 2015 has five Mondays (the fifth being March 30th), whereas
February of 2015 does not, and so should produce an error.


Thanks, @JKesMc9tqIQe9M for pointing out the edge case.
See exercism.io#2142.
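
A sketch of test cases covering this edge case (the sub name, argument order, and return format are hypothetical; adapt them to the track's actual interface):

use Test::More;

is meetup('5th', 'Monday', 3, 2015), '2015-03-30', 'March 2015 has a fifth Monday';

# Assumes the sub dies (or otherwise fails) when the requested day does not exist.
ok !eval { meetup('5th', 'Monday', 2, 2015) }, 'February 2015 has no fifth Monday';

done_testing;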

tests don't run under Perl 5.26

Hi!

As of Perl 5.26 (May 2017), . is no longer on the default module include path. As a result, tests like this one:

eval "use $module";
ok !$@, "Cannot load $module"
    or BAIL_OUT "Cannot load $module; Does it compile? Does it end with 1;?";

all fail. I ran into this just now during an interview. If I hadn't known about the 5.26 change, I'm not sure how much more time we would have lost trying to figure this out.

For my interview, I just prepended this to the one test file I was working with:

push @INC, '.';

This is one of the options listed here: http://blogs.perl.org/users/sawyer_x/2017/05/perl-5260-is-now-available.html

Until this is fixed, it will be difficult for Perl newbies using the latest released Perl to get through any of the exercises.
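
A couple of ways the test files themselves could be made to work regardless of the student's @INC (a sketch, not necessarily how the track ends up fixing it):

# Option 1: add the current directory back, as in the workaround above
use lib '.';

# Option 2: add the directory the test file itself lives in, so the
# tests also work when invoked from elsewhere
use FindBin;
use lib $FindBin::Bin;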

Move exercises to subdirectory

The problems api (x-api) now supports having exercises collected in a subdirectory
named exercises.

That is to say that instead of having a mix of bin, docs, and individual exercises,
we can have bin, docs, and exercises in the root of the repository, and all
the exercises collected in a subdirectory.

In other words, instead of this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── bowling
│   ├── bowling_test.ext
│   └── example.ext
├── clock
│   ├── clock_test.ext
│   └── example.ext
├── config.json
└── docs
│   ├── ABOUT.md
│   └── img
... etc

we can have something like this:

x{TRACK_ID}/
├── LICENSE
├── README.md
├── bin
│   └── fetch-configlet
├── config.json
├── docs
│   ├── ABOUT.md
│   └── img
├── exercises
│   ├── bowling
│   │   ├── bowling_test.ext
│   │   └── example.ext
│   └── clock
│       ├── clock_test.ext
│       └── example.ext
... etc

This has already been deployed to production, so it's safe to make this change whenever you have time.

Verify that nothing links to help.exercism.io

The old help site was deprecated in December 2015. We now have content that is displayed on the main exercism.io website, under each individual language on http://exercism.io/languages.

The content itself is maintained along with the language track itself, under the docs/ directory.

We decided on this approach since the maintainers of each individual language track are in the best position to review documentation about the language itself or the language track on Exercism.

Please verify that nothing in docs/ refers to the help.exercism.io site. It should instead point to http://exercism.io/languages/:track_id (at the moment the various tabs are not linkable, unfortunately, we may need to reorganize the pages in order to fix that).

Also, some language tracks reference help.exercism.io in the SETUP.md file, which gets included into the README of every single exercise in the track.

We may also have referenced non-track-specific content that lived on help.exercism.io. This content has probably been migrated to the Contributing Guide of the x-common repository. If it has not been migrated, it would be a great help if you opened an issue in x-common so that we can remedy the situation. If possible, please link to the old article in the deprecated help repository.

If nothing in this repository references help.exercism.io, then this can safely be closed.

scrabble-score: replace 'multibillionaire' with 'oxyphenbutazone'

The word multibillionaire is too long for the scrabble board. Oxyphenbutazone, on the other hand, is legal.

Please verify that there is no test for multibillionaire in the scrabble-score in this track. If the word is included in the test data, then it should be replaced with oxyphenbutazone. Remember to check the case (if the original is uppercase, then the replacement also should be).

If multibillionaire isn't used, then this issue can safely be closed.

See exercism/problem-specifications#86

Remove obsolete version tracking assertions in exercises

Some tracks have added assertions to the exercise test suites that ensure that the solution has a hard-coded version in it.
In the old version of the site, this was useful, as it let commenters see what version of the test suite the code had been written against, and they wouldn't accidentally tell people that their code was wrong, when really the world had just moved on since it was submitted.

If this track does not have any assertions that track versions in the exercise tests, please close this issue.

If this track does have this bookkeeping code, then please remove it from all the exercises.

See exercism/exercism#4266 for the full explanation of this change.

Grains test doesn't handle bugs caused by `**`

This should fail the test suite:

sub total_grains {
    return 2 ** 64 - 42;
}

The test appears to pass for 2 ** 64, 2 ** 64 - 1 and 2 ** 64 - 2, which can't all be true.

This is because ** is implemented using C's pow, which operates internally on doubles. The current test converts the expected value 18446744073709551615 to a double, and the imprecise comparison erroneously succeeds.

The test suite should detect this and report a problem to relieve mentors of having to point it out.

Perhaps the test suite can provide a negative test,

unlike total_grains(), qr/e\+/, "Using '**' without 'use bignum;' uses doubles that are too imprecise for this result.";

Additionally, use 'eq' instead of '==' to compare the result as strings instead of numbers, since then the error message shows up as:

#   Failed test 'returns the total number of grains on the board'
#   at ./grains.t line 16.
#          got: '1.84467440737096e+19'
#     expected: '18446744073709551615'

which should more clearly indicate the problem that the lower digits have vanished.

Perhaps apply this reasoning in the test for grains_on_square 64, since otherwise ** erroneously succeeds in this case, too:

#   Failed test 'square no. 64'
#   at ./grains.t line 23.
#          got: '9.22337203685478e+18'
#     expected: '9223372036854775808'
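
A sketch of the suggested string comparison (Test::More's is() compares with eq); the same approach applies to the grains_on_square 64 test:

use Test::More;

# A double-based result stringifies as '1.84467440737096e+19' and fails here,
# while an exact integer result (e.g. via bigint or Math::BigInt) passes.
is total_grains(), '18446744073709551615',
    'returns the total number of grains on the board';

done_testing;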

triangle: incorrect test in some tracks

Please check if there's a test that states that a triangle with sides 2, 4, 2 is invalid. The triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. If this doesn't affect this track, go ahead and just close the issue.
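
A minimal validity check following the inequality as stated above (the sub name is illustrative; real solutions should also reject non-positive side lengths):

sub is_valid_triangle {
    my @sides = sort { $a <=> $b } @_;
    return 0 if $sides[0] <= 0;                   # sides must be positive
    return $sides[0] + $sides[1] >= $sides[2];    # 2 + 2 >= 4, so (2, 4, 2) passes
}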

Fix getting started instructions for perl5

Some exercise README templates contain links to pages which no longer exist in v2 Exercism.

For example, C++'s README template had a link to /languages/cpp for instructions on running tests. The correct URLs to use can be found in the 'Still stuck?' sidebar of exercise pages on the live site. You'll need to join the track and go to the first exercise to see them.

Please update any broken links in the 'config/exercise_readme.go.tmpl' file, and run 'configlet generate .' to generate new exercise READMEs with the fixes.

Instructions for generating READMEs with configlet can be found at:
https://github.com/exercism/docs/blob/master/language-tracks/exercises/anatomy/readmes.md#generating-a-readme

Instructions for installing configlet can be found at:
https://github.com/exercism/docs/blob/bc29a1884da6c401de6f3f211d03aabe53894318/language-tracks/launch/first-exercise.md#the-configlet-tool

Tracking exercism/exercism#4102

Changes to Custom Set tests

We recently rewrote the test suite for Custom Set. Since this track implements Custom Set, please take a look at the new custom_set.json file and see if your track should update its implementation or tests.

The new test suite reorders tests so that students can get to green quickly. It also reduces the number of tests so that students can focus on solving the interesting edge cases.

More details on this change are available in the pull request

Ensure Perl 5 track is ready for v2 launch

There are a number of things we're going to want to check before the v2 site goes live. There are notes below that flesh out all the checklist items.

  • The track has a page on the v2 site: https://v2.exercism.io/tracks/perl5
  • The track page has a short description under the name (not starting with TODO)
  • The "About" section is a friendly, colloquial, compelling introduction
  • The "About" section follows the formatting guidelines
  • The code example gives a good taste of the language and fits within the boundaries of the background image
  • There are exercises marked as core
  • Exercises have rough estimates of difficulty
  • Exercises have topics associated with them
  • The first exercise is auto_approve: true

Track landing page

The v2 site has a landing page for each track, which should make people want to join it. If the track page is missing, ping @kytrinyx to get it added.

Blurb

If the header of the page starts with TODO, then submit a pull request to https://github.com/exercism/perl5/blob/master/config.json with a blurb key. Remember to get configlet and run configlet fmt . from the root of the track before submitting.

About section

If the "About" section feels a bit dry, then submit a pull request to https://github.com/exercism/perl5/blob/master/docs/ABOUT.md with suggested tweaks.

Formatting guidelines

In order to work well with the design of the new site, we're restricting the formatting of the ABOUT.md. It can use:

  • Bold
  • Italics
  • Links
  • Bullet lists
  • Number lists

Additionally:

  • Each sentence should be on its own line
  • Paragraphs should be separated by an empty line
  • Explicit <br/> can be used to split a paragraph into lines without spacing between them, however this is discouraged.

Code example

If the code example is too short or too wide or too long or too uninteresting, submit a pull request to https://github.com/exercism/perl5/blob/master/docs/SNIPPET.txt with a suggested replacement.

Exercise metadata

Where the v1 site has a long, linear list of exercises, the v2 site has organized exercises into a small set of required exercises ("core").

If you update the track config, remember to get configlet and run configlet fmt . from the root of the track before submitting.

Topic and difficulty

Core exercises unlock optional additional exercises, which can be filtered by topic and difficulty. However, that will only work if we add topics and difficulties to the exercises in the track config, which is in https://github.com/exercism/perl5/blob/master/config.json

Auto-approval

We've currently made any hello-world exercises auto-approved in the backend of v2. This means that you don't need mentor approval in order to move forward when you've completed that exercise.

Not all tracks have a hello-world, and some tracks might want to auto approve other (or additional) exercises.

Track mentors

There are no bullet points for this one :)

As we move towards the launch of the new version of Exercism we are going to be ramping up on actively recruiting people to help provide feedback. Our goal is to get to 100%: everyone who submits a solution and wants feedback should get feedback. Good feedback.

If you're interested in helping mentor the track, check out http://mentoring.exercism.io/

When all of the boxes are ticked off, please close the issue.

Tracking progress in exercism/meta#104

Replace JSON with JSON::PP

JSON::PP has been included in perl core since 5.14, and JSON falls back to JSON::PP if JSON::XS is not installed.

I suggest replacing JSON with JSON::PP so that installing JSON is no longer a requirement to be able to run some of the test suites.

Any use of from_json will also need to be replaced with decode_json.
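
A sketch of the change in a test file (the file name is a placeholder; decode_json expects UTF-8 encoded bytes, so the data is read raw):

# Before: requires the CPAN JSON module
# use JSON qw(from_json);

# After: JSON::PP ships with core perl since 5.14
use JSON::PP qw(decode_json);

my $json_text = do { local $/; open my $fh, '<:raw', 'cases.json' or die $!; <$fh> };
my $cases     = decode_json($json_text);    # replaces JSON's from_json()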

rna-transcription: don't transcribe both ways

I can't remember the history of this, but we ended up with a weird non-biological thing in the RNA transcription exercise, where some test suites also have tests for transcribing from RNA back to DNA. This makes no sense.

If this track does have tests for the reverse transcription, we should remove them, and also simplify the reference solution to match.

If this track doesn't have any tests for RNA->DNA transcription, then this issue can be closed.

See exercism/problem-specifications#148

typos in running tests section http://exercism.io/languages/perl5/tests

This page seems to be referencing the bob exercise but there are a few typos.

The first line says to run the tests with:
$ prove bob_test.t

The file that comes down with the bob exercise is bob.t and not bob_test.t.

Also:
"create a file in the perl5 directory called "bob.pm", with the following contents:"

The module that comes with the exercise fetch is named Bob.pm (capital B).

I can see this just being general documentation on running tests rather than anything specific to the bob exercise, but the two are close enough that it can be confusing. I could see someone thinking they need to change those filenames, which would cause issues.
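
For reference, the filenames as they actually come down with the exercise look more like this (the package body is just a placeholder; the subs bob.t expects go inside):

$ prove bob.t          # the shipped test file is bob.t, not bob_test.t

# Bob.pm -- note the capital B
package Bob;
use strict;
use warnings;

# ... subs expected by bob.t go here ...

1;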

Update all exercises to use generator

  • linked-list
  • list-ops
  • prime-factors
  • simple-linked-list
  • custom-set
  • palindrome-products
  • proverb
  • pythagorean-triplet
  • queen-attack
  • robot-simulator
  • saddle-points #535
  • simple-cipher #535
  • accumulate #407
  • all-your-base #408
  • allergies #264
  • anagram #265
  • atbash-cipher #266
  • beer-song #267
  • binary-search #269
  • binary-search-tree #408
  • bob
  • clock
  • crypto-square #408
  • difference-of-squares #408
  • etl #360
  • food-chain #408
  • gigasecond #308
  • grade-school
  • grains
  • hamming
  • hello-world
  • house #408
  • kindergarten-garden #408
  • largest-series-product #408
  • leap
  • luhn
  • matrix #408
  • meetup #309
  • minesweeper #408
  • nucleotide-count
  • ocr-numbers #408
  • pascals-triangle #317
  • phone-number
  • pig-latin #325
  • raindrops
  • rna-transcription
  • robot-name #328
  • roman-numerals
  • say
  • scrabble-score
  • secret-handshake
  • series
  • sieve
  • space-age
  • strain #417
  • sublist
  • sum-of-multiples
  • triangle
  • twelve-days #331
  • two-fer
  • word-count
  • wordy

binary DEPRECATED
point-mutations DEPRECATED
trinary DEPRECATED
hexadecimal DEPRECATED

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in
the README.md that gets delivered when a user runs the exercism fetch
command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.
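
For instance, a generic snippet for SETUP.md could look like this (the test file name is a placeholder; prove ships with core Perl's Test::Harness):

$ cd path/to/the/exercise
$ prove *.t            # run all test files in the current directory
$ prove -v name.t      # run a single test file with verbose output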

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?

clock: canonical test data has been improved

The JSON file containing canonical inputs/outputs for the Clock exercise has gotten new data.

There are two situations that the original data didn't account for:

  • Sometimes people perform computation/mutation in the display method instead of in add. This means that you might have two copies of clock that are identical, and if you add 1440 minutes to one and 2880 minutes to the other, they display the same value but are not equal.
  • Sometimes people only account for one adjustment in either direction, meaning that if you add 1,000,000 minutes, then the clock would not end up with a valid display time.

If this track has a generator for the Clock exercise, go ahead and regenerate it now. If it doesn't, then please verify the implementation of the test suite against the new data. If any cases are missing, they should be added.

See exercism/problem-specifications#166
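
Illustrative only (the constructor and method names are hypothetical, not the track's actual interface): the new data expects something like the following to hold.

use Test::More;

# Two clocks built from the same start but rolled forward by one vs. two whole days:
my $a = Clock->new(hour => 10, minute => 0)->add_minutes(1440);
my $b = Clock->new(hour => 10, minute => 0)->add_minutes(2880);

is $a->to_string, '10:00', 'first clock wraps around to the same display';
is $b->to_string, '10:00', 'second clock wraps around to the same display';
ok $a->equals($b), 'and the clocks compare equal, not just their displays';

done_testing;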

Override probot/stale defaults, if necessary

Per the discussion in exercism/discussions#128 we
will be installing the probot/stale integration on the Exercism organization on
April 10th, 2017.

By default, probot will comment on issues that are older than 60 days, warning
that they are stale. If there is no movement in 7 days, the bot will close the issue.
By default, anything with the labels security or pinned will not be closed by
probot.

If you wish to override these settings, create a .github/stale.yml file as described
in https://github.com/probot/stale#usage, and make sure that it is merged
before April 10th.

If the defaults are fine for this repository, then there is nothing further to do.
You may close this issue.

gigasecond: use times (not dates) for inputs and outputs

A duration of a gigasecond should be measured in seconds, not
days.

The gigasecond problem has been implemented in a number of languages,
and this issue has been generated for each of these language tracks.
This may already be fixed in this track, if so, please make a note of it
and close the issue.

There has been some discussion about whether or not gigaseconds should
take daylight savings time into account, and the conclusion was "no", since
not all locations observe daylight savings time.
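
A sketch of how a second-based approach behaves, using core Time::Piece (the input/output pair is intended to match one of the canonical cases):

use Time::Piece;

my $start = Time::Piece->strptime('2015-01-24T22:00:00', '%Y-%m-%dT%H:%M:%S');
my $later = $start + 1_000_000_000;    # Time::Piece overloads + as "add seconds"
print $later->datetime, "\n";          # 2046-10-02T23:46:40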
