> > > Hmmmmmm. How about "pressing the double down button"?
> >
> > That would lump together strings of bets between those hands
> > when you chose to double down. You can do that, but you'll
> > never convince me that it is mathematically equivalent to "EV".
> > It is measuring "average payoff for strings of wagers between
> > doubling". That ain't EV.
>
> Thank you Steve for agreeing with my position. All I've ever
> claimed was "you can do that". I originally stated "This is one of
> those problems where the answer is based on how you ask the
> question". In other words, ER "can" have different meanings. As I
> stated previously, you may not like one approach but that does not
> mean it can't exist (Harry's position).
I don't believe I was agreeing. When I said "you can do that" I meant
that you can compute some sort of metric that way. I firmly believe
that it is incorrect to call that metric by the name "ER" because I
don't think it represents the same concept that most people
associate with "ER".
Just because MOST people have gotten used to one method of operation
does not mean a better approach does not exist. If you think of ER in
terms of a relationship between money bet and money won, then there
exists a family of these relationships. While each member of the
family may be different, there is also a commonality that should not
be ignored.
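The "family of relationships" idea can be made concrete with a small sketch. This is only an illustration under assumed numbers: the 99.5% Jacks or Better return is taken from the thread, but the fraction of money routed through doubles (`double_fraction`) is made up here, and the function names are hypothetical.

```python
# Two members of the "ER family" for a game with an optional
# even-odds double. Illustrative only; double_fraction is assumed.

def er_per_initial_bet(base_return):
    # Common definition: expected payback divided by the initial
    # wager. An even-odds double has zero EV, so it leaves this
    # ratio unchanged when the doubling wager is not counted.
    return base_return

def er_per_dollar_through(base_return, double_fraction):
    # Alternative member of the family: expected payback divided by
    # ALL money wagered, counting the doubling wager separately.
    # double_fraction = average extra units wagered on doubles per
    # initial unit bet.
    total_wagered = 1.0 + double_fraction
    # An even-odds double returns its own wager on average.
    total_returned = base_return + double_fraction * 1.0
    return total_returned / total_wagered

print(er_per_initial_bet(0.995))          # 0.995
print(er_per_dollar_through(0.995, 0.5))  # 1.495 / 1.5, roughly 0.9967
```

Same game, same strategy, two different numbers, exactly because the question (what counts as "money bet") is asked differently.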
> > Part of my distaste for what you're doing comes from not being
> > able to say in advance how many wagers you'll make during
> > the "next play". For all games of chance that I've studied,
> > including blackjack where doubling and splitting are allowed,
> > EV is computed per play, and doubling/splitting is not counted
> > as a separate wager.
>
> OTOH, it is possible for a person to have an algorithm where you
> can compute the number of wagers. This algorithm can be part of the
> input.
I'll agree with that in principle, but I'm not convinced that this
case is a good example. The problem here is that you're taking what I
would call a single outcome and treating it as if it were two
independent results. The key word there is "independent". The payoff
from the original bet and from the doubling (or redoubling) are mixed
together and completely correlated, in sharp contrast to any other
two outcomes from the game. This messes up computation of variance.
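The variance point can be checked numerically with a toy doubled hand. The numbers are assumed for the sketch (an even-money final outcome), not taken from any real game, but the algebra generalizes: EV is additive either way, while splitting one perfectly correlated payoff into "independent" halves drops the covariance term and understates variance.

```python
# Toy doubled-down hand, assumed for illustration: after doubling,
# the 2-unit total wager wins +2 or loses -2 with probability 1/2.
p = 0.5

# (a) Treat the play as ONE outcome of +/-2 units.
ev_one = p * 2 + (1 - p) * (-2)
var_one = p * (2 - ev_one) ** 2 + (1 - p) * (-2 - ev_one) ** 2

# (b) Pretend the original unit and the doubling unit are two
# independent +/-1 wagers and add their variances.
ev_half = p * 1 + (1 - p) * (-1)
var_half = p * (1 - ev_half) ** 2 + (1 - p) * (-1 - ev_half) ** 2
ev_split = 2 * ev_half   # EV is unaffected by the split
var_split = 2 * var_half  # variance is understated

# The two halves always win or lose together, so
# Var(X + Y) = Var(X) + Var(Y) + 2*Cov(X, Y) = 4*Var(X),
# not the 2*Var(X) the "independent" treatment gives.
print(ev_one, ev_split)    # 0.0 0.0
print(var_one, var_split)  # 4.0 2.0
```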
Just because something is difficult does not mean it should be
ignored.
Let me ask you this. Would you feel it was OK to arbitrarily take the
payoffs from royal flushes and count each of those as if they were
ten independent wagers of 1/10 the size that were paid separately?
That would effectively dilute the EV and reduce the variance of the
game tremendously, but it isn't kosher because it pretends that the
outcomes are not correlated. That's effectively what you are doing
when you try to split up the outcomes from the original bet and the
double -- treating them as if they are not correlated at all when in
fact they are completely correlated. I believe it is mathematically
incorrect to handle payoffs that way. If you can tell me why these
outcomes aren't correlated, or how to justify (in a mathematical
sense) ignoring that correlation, then I might be persuaded to agree
with you.
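The royal flush thought experiment can also be put in numbers. The hit frequency below is assumed for the sketch (real royal frequencies depend on the pay table and strategy); the 800-unit payout is the standard 800-for-1 figure. Total EV survives the split unchanged, but per-wager EV is diluted tenfold and variance shrinks by nearly a factor of ten.

```python
# Hypothetical numbers: a 1-unit wager that pays 800 units with a
# small probability q and nothing otherwise (a royal flush payoff).
q = 1 / 40000.0  # assumed hit frequency, illustration only

# Single correlated payoff of 800 units.
ev_whole = q * 800
var_whole = q * 800**2 - ev_whole**2

# Pretend it is ten INDEPENDENT payoffs of 80 units each.
ev_part = q * 80
var_part = q * 80**2 - ev_part**2
ev_split = 10 * ev_part   # total EV unchanged; per-wager EV diluted
var_split = 10 * var_part  # variance shrinks by almost a factor of 10

print(ev_whole, ev_split)
print(var_whole / var_split)  # close to 10
```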
I have never said the two were not correlated. In fact, I have said
just the opposite. My position is to extend the definition to
encompass the doubling event to produce "another" relationship that
may be of interest.
> They are different
> because you choose to ask the question differently (back to my
> original position) and, most importantly, they may lead to
> insights not available by another approach. BTW, I've never stated
> that one could use a different result outside the context where it
> originated.
Perhaps we're getting somewhere. I believe that choosing to
"ask the question differently" is usually equivalent to "asking
a DIFFERENT question" in the sense that you are fundamentally
changing what you are measuring when you do this. That is related
to my point about not trying to keep calling it an apple when
you've really started talking about an orange.
I never said it should be called exactly the same thing. However, I
do think the fact that this value correlates similar things should
be highlighted in some way. In that sense, they are more like a Gala
apple and a Delicious apple.
> > Of course I'm not saying there is only one way to analyze games
> > or to produce strategies that are optimal in different ways.
> > What I'm saying is that only ONE of those methods of analysis is
> > truly from an EV perspective. Other methods look at RoR or
> > "cost" or some other aspect. You can group bets together in
> > different ways as well, but doing things in different ways and
> > trying to attach the same name to it only adds confusion.
>
> I've never stated that the term EV should be attached to anything.
> All I've ever claimed is that by stating the problem differently
> you can get different answers and with those answers different
> insights. Now I think you're beginning to understand why I was
> amazed that you would take the position this was impossible.
Based on the summary you give below, it still appears to me that
you are claiming exactly that very thing -- that what you're
computing is still ER but it is the ER obtained by viewing the
game in a different way.
It is a member of the same ER family but not the commonly used ER.
However, that is not to say it isn't a "better" definition.
> > If you're looking at the same game, played with the same
> > strategy, and you compute two different "aspects" and end up
> > with different numbers (when expressed in the same base, to
> > dispense with the octal/decimal nonsense), then you are
> > comparing apples to oranges.
>
> I believe you are saying that if you ask the same question then
> you should get the same answer ... Of course. This has nothing to
> do with my claims, which I now hope you better understand. Let me
> go over the complete history:
>
> This whole issue got started when ckonwin stated "The double
> feature does change the expected return of the game". mklpryy24
> responded "Not quite, Lets get into the math, 100% (the amount
> bet) X 99.5% (the return of JOB) X 100% (the even odds of the
> double up) = 99.5%". Next, I stated "This is one of those problems
> where the answer is based on how you ask the question". I next
> provided an example where one might want to examine the effect of
> doubling on their win/loss rate. Finally, mklpryy24
> responded "don't matter".
>
> Understanding this win/loss rate in my example was somewhat
> simpler if you use another way of defining the expected return.
> IMO, using this alternative approach provides more insight into
> how the game will play than simply trying to apply commonly used
> ER.
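The quoted arithmetic can be checked in one line: an even-odds double multiplies the payout by 2 or 0 with probability 1/2 each, an expected factor of exactly 1, which is why the commonly used ER (measured against the initial bet) comes out unchanged. The 99.5% figure is the Jacks or Better return quoted above.

```python
# mklpryy24's calculation: a 99.5% base game followed by an
# even-odds double-up leaves the return on the ORIGINAL wager
# at 99.5%, because the double itself has zero EV.
base_return = 0.995                  # quoted Jacks or Better return
double_factor = 0.5 * 2 + 0.5 * 0   # even-odds double: expected factor 1.0

print(base_return * double_factor)  # 0.995 -- the "don't matter" result
```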
This still sounds exactly like you are trying to stretch the
concept of "expected return" to cover two things that are
fundamentally different. That is my only objection to your position.
First of all, the two things are NOT fundamentally different. They
are only different forms of the same kind of relationship, based on
changing the way the question is asked (base assumptions).
Dick, I think I understand what you are saying a lot better than you
give me credit for. I used to believe the very thing that you are
talking about, and I used to put forth arguments very similar to
some that you are using here, but in the back of my mind there
was something that bothered me about it. Thinking about this
very issue contributed to clarifying my thinking about alternate
strategies and different objectives and what it really means to
measure something that is fundamentally different than EV (or
ER, which is really just another name for the same concept).
The most basic realization that came from thinking about this
is that sometimes very subtle changes in how you ask the
question lead to measurements that seem like EV but are
not really quite the same thing. I believe what you're doing
here amounts to ignoring a very real correlation between
the dollars that come out from the original bet and the dollars
that come out from doubling. I also believe that when you
compute a number that ignores correlation between
outcomes, then whatever it is you're computing, it isn't
something that should be described as ER. My own
belief is that ER/EV should be reserved for describing
an average applied to a probability distribution that
represents independent/uncorrelated outcomes.
I certainly agree with you that there are insights to be
gained by viewing games in different ways and asking
different questions in order to understand new ways
to measure our results. That is the basis of my alternate
strategies, but I've tried very hard to be careful not to
claim that these alternate perspectives are really the
same thing as ER, because they are in fact different.
I think I have already made my point. I think we concur that the two
ERs are different but disagree on whether their relationships should
be open to further discussion. But I will digress a little because I
think it is important to understand that extending knowledge often
REQUIRES taking commonly accepted principles and studying them from
different viewpoints.
For example, classical physics reigned for many centuries. Those
knowledgeable in the field assumed they understood nature completely.
However, experimental physics eventually moved down to the atomic
level. Lo and behold, those sacrosanct laws did not hold up. What
emerged is what we now call quantum physics, which explained the
world "better" than classical physics did. The first developers of
quantum theories were initially laughed at and it took a long time
before the physics establishment was converted. I'm not claiming that
viewing ER in different ways is in the same ballpark, but I think
being open to new ways of looking at old paradigms is a good idea.
Dick
--- In vpFREE@yahoogroups.com, Steve Jacobs <jacobs@...> wrote:
On Tuesday 06 November 2007 1:37 am, mroejacks wrote:
> --- In vpFREE@yahoogroups.com, Steve Jacobs <jacobs@> wrote: