Comments (11)
Issue 8 has been merged into this issue.
Original comment by [email protected]
on 28 Oct 2010 at 10:38
from marioai.
all right! We now have our "Issue 9"! Let's GO for it! ;-)
Original comment by [email protected]
on 29 Oct 2010 at 9:50
- Changed state: Accepted
Erik, great job!
I would like to suggest that you join as a committer, so that you can
contribute unit tests and suggestions more easily. Write me an e-mail about
that - we'll discuss!
Your test highlights an important point:
MarioEnvironment is a Singleton, and it stores the EvaluationInfo of only
the latest run. So it is expected that the same task reports different
information after another task's run. The benchmark currently expects the user
to store and use the evaluation info on their own; however, with various tasks
coming, it looks like a nice idea to move this functionality up (to the Tasks).
It is more of a logic or design issue -- if we create two separate tasks, it
would be more natural to store the latest evaluation info in the corresponding
task, not in the Environment. Agreed! It will be redesigned.
Original comment by [email protected]
on 29 Oct 2010 at 11:20
- Added labels: Type-Enhancement
- Removed labels: Type-Defect
So, the fitnesses will remain different if you use
System.out.println(basicTask.getEnvironment().getEvaluationInfoAsString());
This is necessary due to cross-language usage: we propose using the
environment as the sole means of communication between the agent and the Mario
world. Tasks are built on top of Environments.
But I'll add basicTask.getEvaluationInfo(), and this data will be Task-specific.
Other languages that use Mario AI through AmiCo will need to wrap it with
their own tasks.
Original comment by [email protected]
on 29 Oct 2010 at 12:05
I'll send you an email about that.
Your comments about the storage of the EvaluationInfo seem correct! If many
tasks are going to be used at once, I think it makes sense.
I also want to clarify what I think is the bigger problem here, though.
What I was trying to show is that, with the same seed and the same arguments,
two different levels are sometimes created. (Of course, this will probably be
fixed if the level generator is redone in the future.)
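The nondeterminism described here can be caught with a regression test along these lines. The class and method names (`LevelGen.createLevel`, the tile array) are hypothetical stand-ins, not the real `LevelGenerator` API; the point is only the test pattern: generate twice with identical arguments and require identical output.

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical stand-in for the level generator: builds a level layout
// from an explicit seed. A fresh, seed-scoped Random makes the output
// depend only on the arguments, never on earlier calls.
class LevelGen {
    static int[] createLevel(long seed, int length) {
        Random rnd = new Random(seed); // fresh RNG per call
        int[] tiles = new int[length];
        for (int i = 0; i < length; i++) tiles[i] = rnd.nextInt(8);
        return tiles;
    }
}

// Regression test pattern: same seed + same arguments must yield
// an identical level, element for element.
class LevelGenSeedTest {
    static boolean sameSeedSameLevel(long seed) {
        int[] a = LevelGen.createLevel(seed, 256);
        int[] b = LevelGen.createLevel(seed, 256);
        return Arrays.equals(a, b);
    }
}
```

If the generator instead drew from shared mutable random state, the second call would see a different RNG position and this check would fail intermittently, which matches the behavior reported in this issue.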
Original comment by [email protected]
on 29 Oct 2010 at 12:08
I realized that I was making the test a bit harder than I thought.
I created a much more isolated unit test, which I added to
LevelGeneratorTest.java.
Hopefully, it makes it easier to see specifically what I meant.
Original comment by [email protected]
on 29 Oct 2010 at 12:23
Attachments:
That last test was broken. :)
Making a new one.
Original comment by [email protected]
on 29 Oct 2010 at 12:24
Have a look at r615 -- that will fix this issue.
Original comment by [email protected]
on 29 Oct 2010 at 12:25
Now you can commit it directly to SVN!
Original comment by [email protected]
on 29 Oct 2010 at 12:28
Yes r615 did fix the issue. I added a regression test in r617.
Original comment by [email protected]
on 29 Oct 2010 at 12:55
This issue was closed by revision r618.
Original comment by [email protected]
on 29 Oct 2010 at 1:08
- Changed state: Fixed