HenryRLee / PokerHandEvaluator

Poker-Hand-Evaluator: An efficient poker hand evaluation algorithm and its implementation, supporting 7-card poker and Omaha poker evaluation

License: Apache License 2.0

Languages: C 99.34%, Python 0.58%, C++ 0.06%, Makefile 0.01%, CMake 0.01%, Zig 0.01%
Topics: poker-evaluator, poker-hands, 7-card-poker, poker-hand-evaluator, texas-holdem, ph-evaluator, omaha-poker-hand, texas-holdem-poker, omaha-poker, cactus-kev


PokerHandEvaluator's Issues

Exactly 2 hole cards not considered in Omaha

I tried to evaluate an Omaha hand; these are the cards I used.

p3 = evaluate_omaha_cards("3s", "5h", "Qs", "Qd", "9h", "Ks", "8c", "Js", "5c")

This returns a rank of 2788, which is Two Pair (Qs Qd 5h 5c). That is wrong, since only one hole card contributes to that hand, while Omaha requires exactly two.

Any solution for this?
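
For reference, the Omaha rule can be expressed by brute force on top of the 5-card evaluator. A minimal sketch, assuming evaluate_cards from this package (where a lower rank means a stronger hand); omaha_rank is a hypothetical helper, not the library's API:

from itertools import combinations

from phevaluator import evaluate_cards

def omaha_rank(community, hole):
    # Omaha: exactly 3 community cards plus exactly 2 hole cards.
    return min(
        evaluate_cards(*board, *pocket)
        for board in combinations(community, 3)
        for pocket in combinations(hole, 2)
    )

community = ["3s", "5h", "Qs", "Qd", "9h"]
hole = ["Ks", "8c", "Js", "5c"]
print(omaha_rank(community, hole))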

Pypi Website Typo Fix

Following up on a commit I made: on the website https://pypi.org/project/phevaluator/, the library-usage section does not work well with VSCode.

The code `from phevaluator import evaluate_cards` resolves to `Any` in VSCode:

`from phevaluator.evaluator import evaluate_cards` should be used instead, since it provides the documentation:

I'm not quite sure how to fix this properly, as the import itself still succeeds without error; that is why I suggest simply updating the website portion.
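
For clarity, the two import forms mentioned above, side by side:

from phevaluator import evaluate_cards            # resolves to Any in VSCode
from phevaluator.evaluator import evaluate_cards  # fully typed; docs show up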

importing phevaluator throws TypeError: 'type' object is not subscriptable

I've been using phevaluator for the past few weeks. Recently I started using it on Kaggle and got the error mentioned above while importing it, although my local import still worked fine. Then I made some changes to my Python interpreter, and now it's broken in PyCharm too.

The traceback is:
from phevaluator import evaluate_cards
__init__.py > from . import hash as hash_  # FIXME: `hash` collides to built-in function
hash.py > def hash_quinary(quinary: list[int], num_cards: int) -> int:
TypeError: 'type' object is not subscriptable

The FIXME note makes me think this is a known problem, but I don't see any mention of it on this GitHub page. Can anyone confirm whether this is something that needs fixing? I'm not entirely sure what the issue is. Thanks!
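
For context, `list[int]` as a built-in generic annotation requires Python 3.9+ (PEP 585); on older interpreters the annotation is evaluated at definition time and raises exactly this TypeError. A minimal sketch of the two usual workarounds (the _alt name is only for illustration):

# Option 1: defer annotation evaluation (works on Python 3.7+).
from __future__ import annotations

def hash_quinary(quinary: list[int], num_cards: int) -> int:
    ...

# Option 2: use typing.List, which is subscriptable on older versions.
from typing import List

def hash_quinary_alt(quinary: List[int], num_cards: int) -> int:
    ...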

Large Memory Footprint

Using the Python version of this library (https://pypi.org/project/phevaluator/), importing evaluate_cards has a large memory footprint of 399.9 MiB.

Here is the code I used to measure this:

from memory_profiler import profile

@profile
def import_phevaluator():
    from phevaluator import evaluate_cards

if __name__ == "__main__":
    import_phevaluator()

and the output:

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     4     22.8 MiB     22.8 MiB           1   @profile
     5                                         def import_phevaluator():
     6                                             # The import statement you want to profile
     7    422.7 MiB    399.9 MiB           1       from phevaluator import evaluate_cards

This was causing me to get memory issues on my deployment which is how I noticed the problem in the first place.

Now, I'm not sure why, as I believe the tables are only supposed to take ~100 KB for 7-card evaluation. Is this on top of the memory already used for initialization? I'm assuming it's because most of the tables are loaded, but from what I saw it does not import any PLO tables, which I believed were the bulk of the memory usage.

Please let me know if this is expected, and whether there are recommendations to reduce it, given that I only need the standard 7-card evaluation.
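
One way to test the tables hypothesis with the same memory_profiler approach is to profile the tables submodule on its own (a sketch; phevaluator.tables is the submodule imported by the package's __init__):

from memory_profiler import profile

@profile
def import_tables_only():
    # If this accounts for most of the 399.9 MiB, the lookup tables
    # are the culprit rather than the evaluator code itself.
    from phevaluator import tables  # noqa: F401

if __name__ == "__main__":
    import_tables_only()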

Refactor `_evaluate_omaha_cards` function for simplicity

The _evaluate_omaha_cards function in evaluator_omaha.py is currently too complex and could benefit from refactoring.
Its complexity makes it difficult to understand and maintain.

Here's the function in question:

def _evaluate_omaha_cards(community_cards: list[int], hole_cards: list[int]) -> int:
    # ...

This function performs several tasks, including counting suits, determining flushes, and calculating hashes. These tasks could potentially be broken down into smaller, more manageable functions.

Additionally, the function uses magic numbers (like 10000 for value_flush and value_noflush), which could be replaced with named constants to improve readability.

Finally, the function is quite long, which makes it harder to understand at a glance. Breaking it down into smaller functions would make it easier to understand and test (a possible shape is sketched after the acceptance criteria).

Acceptance Criteria:

  • The _evaluate_omaha_cards function is broken down into smaller, more manageable functions.
  • Magic numbers are replaced with named constants.
  • The refactored code is covered by unit tests to ensure it still works as expected.
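
For illustration, a hypothetical shape the refactor could take (names and signatures here are illustrative only, not the library's actual API):

from collections import Counter
from typing import Optional

FLUSH_VALUE_LIMIT = 10000  # named constant replacing the magic number

def count_suits(cards):
    # Card id modulo 4 is the suit index in this library's card encoding.
    return Counter(card % 4 for card in cards)

def omaha_flush_suit(community, hole) -> Optional[int]:
    # An Omaha flush needs at least 3 community cards and at least 2 hole
    # cards of the same suit; return that suit index, or None.
    community_suits = count_suits(community)
    hole_suits = count_suits(hole)
    for suit in range(4):
        if community_suits[suit] >= 3 and hole_suits[suit] >= 2:
            return suit
    return None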

Feature request: describe card name (e.g. "Qc")

A describe method for the Card class might be convenient, just as Rank::describeSampleHand() is.

example (C++):

auto c1 = phevaluator::Card("2c");
std::cout << c1.describeName() << "\n"; // 2c
std::cout << c1.describeSuit() << "\n"; // c
std::cout << c1.describeRank() << "\n"; // 2
std::cout << std::string(c1) << "\n"; // 2c
// std::cout << c1 << "\n"; // 0. overloading << is too much.

auto c2 = phevaluator::Card("2C"); // case insensitive. doesn't store original string
std::cout << c2.describeName() << "\n"; // 2c

phevaluator::Card c3{0};
std::cout << c3.describeName() << "\n"; // 2c

example (Python):

c1 = Card("2c")
print(c1.describe_name()) # 2c
...
print(str(c1)) # 2c

Sounds good? Then I will write it in both C++ and Python and send a PR.
However, I'm not sure whether the function name "describeName" is appropriate, whereas "describeSuit" and "describeRank" seem fine.
Could you share your opinion? @HenryRLee

Python unittest failed with directory-wise test

unittest has a feature to test multiple files in a directory at once with the following command:
python -m unittest discover DIRECTORY_NAME

However, python -m unittest discover table_tests fails (a false positive) at test_noflush7_table, although the same test passes when run as a single file.
This occurs in both v0.3.1 and v0.1.0.
It seems that those tests share the same namespace and a variable name collided.

I don't have any idea how to resolve it, and I know the README says to test with a single file. This is just a report.

  • python -m unittest discover table_tests:
......E
======================================================================
ERROR: test_noflush7_table (test_hashtable7.TestNoFlush7Table)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/PokerHandEvaluator/python/phevaluator/table_tests/utils.py", line 116, in setUp
    self.mark_four_of_a_kind()
  File "/tmp/PokerHandEvaluator/python/phevaluator/table_tests/utils.py", line 71, in mark_four_of_a_kind
    self.mark_template((4, 1))
  File "/tmp/PokerHandEvaluator/python/phevaluator/table_tests/utils.py", line 62, in mark_template
    hash_ = hash_quinary(hand, 13, self.NUM_CARDS)
  File "/tmp/PokerHandEvaluator/python/phevaluator/evaluator/hash.py", line 9, in hash_quinary
    sum_numb += DP[q[i]][length - i - 1][k]
IndexError: list index out of range
----------------------------------------------------------------------
Ran 7 tests in 17.218s
FAILED (errors=1)
  • python -m unittest table_tests/test_hashtable7.py:
.
----------------------------------------------------------------------
Ran 1 test in 187.909s
OK

Publish `Changelog`

The project currently lacks a formally published changelog. A well-maintained changelog is essential for users and contributors to track changes, updates, and fixes over time.

Task

  • Create a changelog that documents the historical and upcoming changes in a clear and structured format.
  • Publish the changelog in a suitable section of the project repository, such as under a CHANGELOG.md file or a dedicated section in the documentation.

Optional Task

  • Once the changelog is published, consider updating the Changelog field in the [project.urls] section of pyproject.toml to link directly to the changelog. Current placeholder URL is https://github.com/HenryRLee/PokerHandEvaluator/tags.

Relevant Files

  • Potentially new CHANGELOG.md or updates to existing documentation files.

CI with GitHub Actions workflow

Previously we used Travis as the CI (.travis.yml).

Since GitHub has its built-in CI (GitHub Actions), we should make use of it.

Expectations:

  1. Migrate the .travis.yml file to .github/workflows/main.yml
  2. Pass the workflow

Noticed a bug

The evaluator heavily favours hands with draws (potential flushes or straights) over higher-ranking hands, even on the river.

'type' object is not subscriptable (Python)

Thanks for your contributions.

When I ran it using Python 3.8, I met this bug:

/usr/local/lib/python3.8/dist-packages/phevaluator/__init__.py in <module>
      1 """Package for evaluating a poker hand."""
----> 2 from . import hash as hash_  # FIXME: hash collides to built-in function
      3 from . import tables
      4 from .card import Card
      5 from .evaluator import _evaluate_cards, evaluate_cards

/usr/local/lib/python3.8/dist-packages/phevaluator/hash.py in <module>
      3
      4
----> 5 def hash_quinary(quinary: list[int], num_cards: int) -> int:
      6     """Hash list of cards.
      7

TypeError: 'type' object is not subscriptable

Are there any solutions?
Thanks!

Testing omaha calculation

Nice work! Really!

I'm trying to evaluate Omaha hands with the EvaluateOmahaCards function, but it shows 0 as the result.

The hand is "AsKs5d7c" and the board is "Ts7d8h4s2c", so I expect the input for evaluate_omaha_cards to be
evaluate_omaha_cards(35, 21, 26, 11, 0, 51, 47, 13, 20)

and the line value_noflush = noflush_omaha[board_hash * 1820 + hole_hash] gives 0.

Is something wrong with the noflush_omaha table, or have I missed the point? Am I right that you generated it using Kev's function, iterating over non-flush hands?
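
As a side note, those ids are consistent with the mapping id = rank * 4 + suit (ranks 2..A mapped to 0..12, suits c, d, h, s mapped to 0..3). A quick sketch to verify:

RANKS = "23456789TJQKA"
SUITS = "cdhs"

def card_id(name):
    # e.g. "Ts" -> 8 * 4 + 3 = 35, "As" -> 12 * 4 + 3 = 51
    return RANKS.index(name[0].upper()) * 4 + SUITS.index(name[1].lower())

board = [card_id(c) for c in ["Ts", "7d", "8h", "4s", "2c"]]
hole = [card_id(c) for c in ["As", "Ks", "5d", "7c"]]
print(board + hole)  # [35, 21, 26, 11, 0, 51, 47, 13, 20]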

Add GitHub workflow status badge to the README

Previously we were using Travis as the CI, and currently the badge in the README is still showing the status of the Travis build.

[![Build Status](https://travis-ci.org/HenryRLee/PokerHandEvaluator.svg?branch=master)](https://travis-ci.org/HenryRLee/PokerHandEvaluator)

Now, since we've migrated to GitHub workflow, and deprecated the Travis build, we should replace the Travis badge with the GitHub workflow badge. Here is a good reference: https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/adding-a-workflow-status-badge.

Resolve edge cases in `test_dptables.py`

In the test_dptables.py file, there is a TODO comment that needs to be addressed.
The comment is located in the loop that generates combinations and updates the table variable, specifically in the case where idx is in [72, 520, 576].

Here's the relevant code:

if idx in [72, 520, 576] and SUITS[idx] != suit + 1:
    continue

The TODO comment suggests that there might be an issue with the way the table is being updated when idx is one of the specified values. This needs to be investigated and resolved.

Acceptance Criteria:

  • Implement a fix if necessary.
  • Add or update unit tests to cover these specific cases.
  • Ensure that all tests pass after the changes.
  • Remove the TODO comment once the issue has been resolved.

CMake: Build Google Benchmark without DEBUG flag

This is what happens when I try to run the benchmark using the Google Benchmark library.

$ ./benchmark_phevaluator
2019-10-15 12:03:24
Running ./benchmark_phevaluator
Run on (2 X 2300 MHz CPU s)
CPU Caches:
  L1 Data 32K (x1)
  L1 Instruction 32K (x1)
  L2 Unified 256K (x1)
  L3 Unified 46080K (x1)
Load Average: 1.00, 1.00, 0.84
***WARNING*** Library was built as DEBUG. Timings may be affected.
--------------------------------------------------------------------
Benchmark                          Time             CPU   Iterations
--------------------------------------------------------------------
EvaluateAllFiveCardHands    63333878 ns     63333194 ns           11
EvaluateAllSixCardHands    527703244 ns    527703655 ns            1
EvaluateAllSevenCardHands 3744382994 ns   3744330646 ns            1
EvaluateAllEightCardHands 22891262082 ns   22891195681 ns            1
EvaluateAllNineCardHands  131418769659 ns   131418236092 ns            1
The command "./benchmark_phevaluator" exited with 0.

It appears that I built it in debug mode. I believe this is configurable in the CMake file, but I couldn't find the correct solution. Help is wanted; presumably passing -DCMAKE_BUILD_TYPE=Release when configuring the benchmark build is the right direction.

Perhaps this would be helpful: https://github.com/google/benchmark#debug-vs-release

Wrong evaluation?!

p1 = evaluate_cards("9H", "JH", "7D", "2S", "KC", "7C", "7H")
p2 = evaluate_cards("9H", "JH", "7D", "2S", "KC", "JC", "5H")

print(f"The rank of the hand in player 1 is {p1}")
print(f"The rank of the hand in player 2 is {p2}")

The rank of the hand in player 1 is 2084
The rank of the hand in player 2 is 4059

p2 is higher than p1...
What am I missing? p1 has a triple while p2 has only a pair...

short deck evaluation

Hello. I'm attempting to add support for short-deck poker (a 9-rank deck, 6-A, rather than 13 ranks, 2-A). Right now I'm stuck trying to generate the flush_sd and nonflush_sd lookup tables (LUTs). I believe separate LUTs are necessary for this game, because a flush beats a full house and A6789 is a straight in short deck. Would you agree that this is the best course? If so, can you recommend a strategy for creating these LUTs? I'm planning to submit a PR if I can figure out how to get this working.

I have calculated the number of distinct hands possible for each category as:

straight_flush: 6
four_kind: 72
full_house: 72
flush: 120
straight: 6
three_kind: 252
two_pair: 252
one_pair: 504
high_card: 120

Which gives these offsets:

6, 78, 150, 270, 276, 528, 780, 1284, 1404

I'm using the existing evaluate_5_cards method to find the correct rank/category. But I'm not sure how to construct the rest of the rank. Do you have any suggestions?
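
For what it's worth, the counts above check out combinatorially; a quick sketch (Python 3.8+ for math.comb):

from math import comb

n = 9          # ranks in a short deck (6 through A)
straights = 6  # A6789 up to TJQKA

counts = [
    ("straight_flush", straights),
    ("four_kind", n * (n - 1)),             # 9 quad ranks x 8 kickers = 72
    ("full_house", n * (n - 1)),            # 9 trip ranks x 8 pair ranks = 72
    ("flush", comb(n, 5) - straights),      # 126 - 6 = 120
    ("straight", straights),
    ("three_kind", n * comb(n - 1, 2)),     # 9 x 28 = 252
    ("two_pair", comb(n, 2) * (n - 2)),     # 36 x 7 = 252
    ("one_pair", n * comb(n - 1, 3)),       # 9 x 56 = 504
    ("high_card", comb(n, 5) - straights),  # 126 - 6 = 120
]

total, offsets = 0, []
for _, c in counts:
    total += c
    offsets.append(total)
print(offsets)  # [6, 78, 150, 270, 276, 528, 780, 1284, 1404]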

Tables for 6 cards

Hey, super nice repo and the docs are very interesting.

I was wondering if you could add the tables for 6 cards to the repo, or provide the code to generate them?
Those are needed if you're looking at hand ranks on the turn in hold'em, or if you want to calculate draws.

Would greatly appreciate it. Thanks.

Publish Python documentation and link it from the package

The pyproject.toml contains a link to the documentation, but it currently points to the general documentation of the algorithm. This link is crucial for developers to access detailed documentation about the package. Once the Python documentation is published, we can link to it.

Task

  • Publish the Python package documentation
  • Update the Documentation field in the [project.urls] section of pyproject.toml with the actual URL of the published Python documentation.

Additional Information

Please ensure the documentation is comprehensive and accessible before updating the link.

dptables.c suits

Hi @HenryRLee, I'm trying to generate the suits[4609] array, but I got a different value for suits[520]:
in dptables.c, suits[520] is 3, while I got 1.
Is something wrong?

Issue with hash.py when importing python module

I'm attempting to use the phevaluator Python module for a personal project of mine; however, when trying to import it I get an error regarding the hash.py file provided with 0.5.1.

I've not been able to actually use the module yet, because the error shuts down the program right at the import step.

Here is the error I receive:

Traceback (most recent call last):
  File "/home/bsstibich/Documents/poker-board-evaluator/main.py", line 1, in <module>
    from phevaluator import evaluate_cards
  File "/home/bsstibich/.local/lib/python3.8/site-packages/phevaluator/__init__.py", line 2, in <module>
    from . import hash as hash_  # FIXME: hash collides to built-in function
  File "/home/bsstibich/.local/lib/python3.8/site-packages/phevaluator/hash.py", line 5, in <module>
    def hash_quinary(quinary: list[int], num_cards: int) -> int:
TypeError: 'type' object is not subscriptable

Any help would be appreciated, because this module is the only poker module I could find that does exactly what I need for my specific project, and I would love to be able to use it.

[C++] Add more test code to the unit tests of the Rank type

The unit test in https://github.com/HenryRLee/PokerHandEvaluator/blob/master/cpp/test/rank.cc only covers a small number of cases. In particular, in TestRankCategory and TestRankDescription only a full house and a straight flush appear. I expect 3 to 5 hands from each ranking category to be covered in the test code. For example, choose 3 hands from the straights, 5 hands from the pairs, etc.

You may refer to the source code in 7462.c, rank.c and rank.h when coding the unit test cases.

PS: When running the unit tests, you can use ./unit_tests --gtest_filter="Rank*" to test only the cases related to the Rank type and skip the time-consuming enumeration test.

Documentation errors

I've read the algorithm documentation thoroughly and it seems to have many mistakes regarding the dp array.

dp meaning

The explanation for dp is:

Let's use a 3-d array dp[l][n][k] of size 4148, where n is the number of bits, k is the remaining number of k, and l is the most significant bit of the excluding endpoint.

It says "excluding", but later on it says that dp[1][i][1] = i because:

...3-bit quinary with k=1, it has three instances 001, 010, 100

Isn't 100 supposed to be excluded here?

Unclear l and n values

What does it mean when l = 1 and n = 1? The range [0, 1) doesn't make much sense and for any k that's not 0, dp[1][1][k] should return 0.
Also, in the docs it says:

  for each i in [2, 13] and j in [2, 7]:
    dp[1][i][j] = SUM{k:[0,4]}dp[0][i-1][j-k];

dp[0][3][1], for example, would mean "the range [000, 000)", which makes no sense.

I looked at another PR/issue (#2) where the code was included, and it seems like l should be 1 there. But even in that case it makes no sense, because dp[1][2][2] should equal 1 (only 02 is valid in [00, 10)), yet here it returns a different answer, since dp[1][1][2] + dp[1][1][1] + dp[1][1][0] = 1 + 1 + 1 = 3.

Can you please clarify these issues?
Thanks!

CMakeLists.txt tries to install header that doesn't exist

See this line. It is trying to install phevaluator5.h, but that file doesn't exist in the project. Based on the files in the project, it looks like the 5 just needs to be removed from the file name? It compiles if I remove that 5. Let me know if that's the desired fix, and I can open an MR for it.

Code style for Python

The standard Python code style recommends 4-space indentation.
https://www.python.org/dev/peps/pep-0008/#tabs-or-spaces
On top of that, we can use a formatter without user-defined configuration when the code uses 4-space indentation.

Hence, I formatted the .py files with the standard formatter and added some configuration to ignore inappropriate files (both file-by-file and line-by-line), keeping transparency and reproducibility in mind.
After that, I attempted to open a PR (#39) and became aware of the guideline of 2-space indentation.

Depending on the editor in use, switching indentation may be uncomfortable, and 4-space indentation can feel odd to most developers other than Python users.
.editorconfig allows switching settings by file extension, but it's unclear whether that's usable here.
https://editorconfig.org/

What do you think?

Help to compile properly needed

I'm a total newbie when it comes to C/C++. I'm hoping someone can help with a problem I encounter when following the instructions.

All steps in the guide worked:

mkdir -p build
cd build
cmake ..
make

For the last step, I had to go back up to the cpp directory. I see the libpheval.a file, as expected, in the cpp folder.

However, no unit_tests binary is created. I searched the whole PokerHandEvaluator folder for 'unit_test'; only 2 files are returned: unit_tests.vxproj and unit_tests.vxproj.filters.

Anything I should be doing/not doing here?

How did you calculate values in "choose[53][8]"

Hi, this is great software and the explanation is very good. I think I follow everything except for how the values in the choose[53][8] table were derived and what they signify; I'm not sure how we've gone from 52 card options and seven actual cards to the numbers in the table.

Please could you help me out with this?

Thanks!
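
For what it's worth, a choose[53][8] table typically stores the binomial coefficients C(n, k) for n in 0..52 and k in 0..7, which is what a perfect-hash scheme over card combinations needs (this is an assumption about the table's meaning, not a statement of the author's code). A sketch of building it with Pascal's rule:

# choose[n][k] = C(n, k), via Pascal's rule:
# C(n, k) = C(n - 1, k - 1) + C(n - 1, k)
N, K = 53, 8
choose = [[0] * K for _ in range(N)]
for n in range(N):
    choose[n][0] = 1  # C(n, 0) = 1 for every n
    for k in range(1, K):
        if n > 0:
            choose[n][k] = choose[n - 1][k - 1] + choose[n - 1][k]

print(choose[52][7])  # 133784560 = C(52, 7), the number of 7-card deals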

Rank operators bug

Hi!

My friends and I are working on a project together, and we are required to use poker evaluation as part of the project.
We used the PokerHandEvaluator library to compare poker hands (5 vs 5). It works well, but one case in our unit tests revealed a bug in the library, in the <, >, <=, >= operators of rank.h (https://github.com/HenryRLee/PokerHandEvaluator/blob/master/cpp/include/phevaluator/rank.h).
We noticed that the Rank object returns true when we compare two poker hands that are supposed to be a tie.
For example:
Hand 1: 10DIAMOND, 11HEART, 12SPADE, 13CLUB, 1DIAMOND
Hand 2: 10HEART, 11DIAMOND, 12DIAMOND, 13HEART, 1CLUB
We used the operator < to compare the two, and the returned value was true.
Source code:

bool operator<(const Rank& other) const {
    return value_ >= other.value_;
  }

  bool operator<=(const Rank& other) const {
    return value_ > other.value_;
  }

  bool operator>(const Rank& other) const {
    return value_ <= other.value_;
  }

  bool operator>=(const Rank& other) const {
    return value_ < other.value_;
  }

Our fix is:

// In this library a smaller value_ means a stronger hand, which is why
// the comparisons are inverted. The strict operators (<, >) must not
// return true on a tie, so >= / <= become > / <.
bool operator<(const Rank& other) const {
    return value_ > other.value_;
  }

  bool operator<=(const Rank& other) const {
    return value_ >= other.value_;
  }

  bool operator>(const Rank& other) const {
    return value_ < other.value_;
  }

  bool operator>=(const Rank& other) const {
    return value_ <= other.value_;
  }

Test more Python evaluation results

Currently, in Python, the unit tests for our 5-card, 6-card, and 7-card evaluators use the data sets in 5cards.json, 6cards.json, and 7cards.json. The data sets are small and fixed, so the test coverage is far from enough.

Since we cannot use Kev's evaluator like the C++ code, my idea would be something like this:

  1. For 5-card evaluation, generate all possible hands and store them in 5cards.json. There will be 2,598,960 lines, which is acceptable. (I just found that we happen to have this data set under ../tree/develop/test/five.)
  2. For 6-card and 7-card evaluation, randomly sample thousands of hands, and brute-force the expected result using the 5-card evaluator. This is similar to the unit test used in the Omaha test; see the sketch below.
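
A sketch of idea 2, assuming evaluate_cards accepts integer card ids and that a lower rank means a stronger hand:

import random
from itertools import combinations

from phevaluator import evaluate_cards

def brute_force_rank(cards):
    # Best (lowest) rank over all 21 five-card subsets of a 7-card hand.
    return min(evaluate_cards(*five) for five in combinations(cards, 5))

deck = list(range(52))
random.seed(0)  # reproducible sample
for _ in range(1000):
    hand = random.sample(deck, 7)
    assert evaluate_cards(*hand) == brute_force_rank(hand)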

winning hand for omaha evaluator

Hi

I'm testing this for evaluating the Omaha winning hand. Everything is fine, and I get a short winning-hand description such as "4 4 4 A K".

Is there any way to get or calculate the "real" winning hand, with ranks and suits, from the rank number? E.g. "4s 4c 4d Ac Kh".

It would be very useful to see which of the community and player cards make up the winning hand.

speed up CI with caching

It seems that building Google Test takes a long time. If the Google Test build is static (independent of this project), how about caching its build, with the Google Test version name as the key? That would let us skip rebuilding the same version.
https://github.com/actions/cache

I'm not familiar with CMake, Make, linking, and compiling, so I'm not sure whether this is possible, but separating Google Test from the CMake build may also make it faster.

No makefile found after running cmake

I tried to build the C++ library according to the README in the cpp directory, and got an error at the make step because no makefile was found. Is a makefile supposed to be generated by the cmake step, or was the makefile perhaps not committed?

KISS not seen yet

Guys, I have to say: good ideas on how to approach the hand-ranking problem, but then you lose the audience in the build phase. It is good to have unit tests built automatically using googletest, but will we by any chance get a clear idea of how to produce something that works? libpheval is yet to be seen declared anywhere...

So this is a general issue about clarity, as there are missing instructions on how to:

  1. assemble the code without manual intervention (even using CMake, I need to adjust the folders manually);
  2. generate any usable output, i.e. a .dll, or at least tell the uninitiated how to create/find the "ghost library" libpheval...

Cool stuff though; it would be good to be able to use it :)
