A New Kind of Science: The NKS Forum > Applied NKS > Easter Egg Problem

George Danner
Industrial Science, LLC
The Woodlands, Texas

Registered: Oct 2003
Posts: 1

Easter Egg Problem

Hi Folks--

Look at the attached description. I have some thoughts on how to pursue solutions to this problem using NKS principles. But before I get started, I wanted to share this with you all. Any comments are welcome.

George Danner

Attachment: the easter egg problem.pdf
This has been downloaded 2052 time(s).

Posted 01-21-2004 12:50 AM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

The problem reminds me of predator-prey models in early mathematical biology. Biologists trying to model how a changing predator population changed the population of its prey found that the result was sensitive to assumptions about "saturation", that is, about how much harder it becomes to find a scarcer thing when more agents are looking. Which in turn depends on the detailed particulars of how the agents search. That is just a historical note about where a similar problem arose at least once before. It probably also came up in operations research, if e.g. the eggs are U-boats or what have you.

The first thing I notice is that the problem conditions seem underspecified, or at least can vary widely. There are many different ways one can set up the problem and define what counts as a more successful solution, and which procedure winds up giving good results will vary with those details. Effectively, the detailed conditions of the game set up an incentive or "price" structure that will impact different rules to different degrees.

Suppose I want to make sure I find absolutely all of them, that it is hard or "expensive" to make any new "agents", that sophistication of the rule is also "expensive", but time spent searching is "cheap". Then one simple rule would be a single agent in a left or right hand spiral. It would take it a long time to find any of the "eggs" far from the starting point. But it wouldn't miss any or require any additional agents.
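
For concreteness, here is a rough Mathematica sketch of that single-agent spiral. The function name and the sample egg coordinates are my own, invented purely for illustration; they are not part of Danner's statement of the problem.

spiralSteps[n_] := Module[
  {dirs = {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}, pos = {0, 0}, path = {{0, 0}}, d = 1, len = 1},
  While[Length[path] < n,
   Do[
    Do[pos += dirs[[d]]; AppendTo[path, pos], {len}];
    d = Mod[d, 4] + 1,
    {2}];
   len++];
  Take[path, n]]                              (* first n cells visited by an outward square spiral from {0,0} *)

path = spiralSteps[200];
eggs = {{3, 2}, {-4, 1}, {0, -5}};            (* made-up egg positions *)
Flatten[Position[path, #]] & /@ eggs          (* step at which each egg is reached *)

Eggs near the start are found quickly; the farther ones only after the spiral has paid for every intervening square, which is exactly the trade-off described above.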

If additional agents are completely free, one could just have every agent split at each step while moving to a random neighboring location. You'd get an exponential pile-up in the number of agents, an inefficiently oversearched "blob" in the middle, and an expanding ragged edge.

If oversearching is effectively expensive but agents otherwise aren't (they "pay for" themselves when searching new space, otherwise not), then 2 (or, a bit faster, 4) "gliders" moving along the grid axes, each sending out a new agent to its right at every step, would search the whole space.

That leaves places near the diagonals searched, but only late. If oversearching is only moderately wasteful compared to time, this could be improved by adding "ladder"-stepping gliders (right, up, right, up, etc.) between each of the 4 on the grid axes, again sending out agents to the right, this time on every other step.

If one relaxes the requirement to find them all, so that e.g. one only judges them by eggs per unit of searching effort over some shorter span of time, then methods that get farther from the starting point sooner, but do not search exhaustively, might improve on schemes like those above.

In addition to changing what counts as "better" when judging the overall end result, the conditions can also be varied by making allowable actions or budget constraints depend on successes already achieved. That is, instead of saying new agents are worth so-and-so much compared to search time used, one might allow new agents to be created only when an "egg" is found.

Or allow some other budget constraint to be exceeded or increased when an "egg" is found. This has a different effect than just "charging" at the end in the "final score", because e.g. some procedures may effectively require more "up-front investment" while only delivering lots of "eggs" in return late in the search.

The point is the efficiency of the rule depends on the specific details of the test conditions and trade offs (between search time, finding all vs finding some, number of agents used, sophistication of agents, etc), and in general a rule that is best under one set of trade offs may not be best under a different set of trade offs.

Posted 01-23-2004 03:58 PM

Carl Lumma
(Available for hire)
Berkeley, California

Registered: Feb 2004
Posts: 2

Originally posted by Jason Cawley
Suppose I want to make sure I find absolutely all of them, that it is hard or "expensive" to make any new "agents", that sophistication of the rule is also "expensive", but time spent searching is "cheap". Then one simple rule would be a single agent in a left or right hand spiral. It would take it a long time to find any of the "eggs" far from the starting point. But it wouldn't miss any or require any additional agents.


You need to know the boundaries of the landscape for this approach to work, and the way I read the problem such is not known.

Originally posted by Jason Cawley
The point is the efficiency of the rule depends on the specific details of the test conditions and trade offs (between search time, finding all vs finding some, number of agents used, sophistication of agents, etc), and in general a rule that is best under one set of trade offs may not be best under a different set of trade offs.


Actually I doubt any of these details matter. As long as the agents are independent, with only a local view of the landscape, and the eggs are randomly placed, a No Free Lunch result should apply. That is: random search is optimal. (try googling Wolpert and/or Macready).
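
Just to make the baseline concrete, here is a minimal Mathematica sketch of plain random search: one agent taking uniformly random unit steps and scoring any egg it lands on. The egg list and step count are invented for illustration only.

randomSearch[eggs_, nSteps_] := Module[
  {pos = {0, 0}, found = {}},
  Do[
   pos += RandomChoice[{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}];   (* random unit step *)
   If[MemberQ[eggs, pos] && ! MemberQ[found, pos], AppendTo[found, pos]],
   {nSteps}];
  found]

randomSearch[{{3, 2}, {-4, 1}, {0, -5}}, 10000]   (* which of these made-up eggs get found *)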

As for the breadcrumb variant, then what you have is basically ants, and I'd be shocked if the algorithm used by ants doesn't perform optimally...

http://mute-net.sourceforge.net/howAnts.shtml

Later,

-Carl

Posted 02-10-2004 06:30 PM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

I read it as being without a boundary, and my comments on the spiral depended on that. Yes, if you introduce obstacles, or edges one can't go past (a special case of obstacles), then you change the problem.

I think the No Free Lunch (NFL) theorems say something rather close to what I was saying, which is that no one general algorithm will outperform the rest regardless of the test condition details and "costs". NFL theorems do not say that random search is always just as good as something else, in each particular test.

Instead they say that if you average over a whole space of possible test conditions, then no one algorithm will systematically outperform any other, including random search. Random search is a sort of benchmark: not an optimum, but a baseline that a supposedly always-superior algorithm must be able to beat to count as "always superior".
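
For reference, the formal statement in the Wolpert and Macready paper is, writing from memory rather than quoting, that for any two search algorithms $a_1$ and $a_2$ and any number of distinct evaluations $m$,

$$\sum_{f} P\left(d^{y}_{m} \mid f, m, a_1\right) \;=\; \sum_{f} P\left(d^{y}_{m} \mid f, m, a_2\right),$$

where the sum runs over all cost functions $f$ on the finite search space and $d^{y}_{m}$ is the sequence of cost values sampled after $m$ distinct points. Averaged over all $f$, every algorithm, random search included, sees the same distribution of outcomes.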

The basis of the NFL results, though, is that space-of-tests concept, not random search, and it is what I spoke of as underspecification of the test conditions. NFL theorems do not say (they would be wrong if they did) that for any given specific test conditions, no algorithm can beat any other, or most others. Specific test conditions include the factors I discussed as "costs" and "tradeoffs" in the goal or scoring, etc.

Another way to see the same point is to imagine a competition where you pick an algorithm that is supposed to be better than others, and in response I get to reset the test conditions and costs. I will always be able to find a set of costs that makes your algorithm underperform others. The superiority of an algorithm is relative to a fully specified test; it does not generalize to any possible test or any set of costs. As the costs vary, so will which algorithm works best.

The sort of statement the NFL theorems disprove is something like "adaptive rules will always do better than non-adaptive rules", not "there is no fastest (e.g.) way in this exactly specified case". They aren't "no lunch" theorems; they are no free lunch theorems. Doing better when the costs are over here implies doing worse when the costs are over there. It is a point about trade-offs, which must show up when you sum over all possible problems. (Thus, for example, the further point about "straw man" algorithms, which can be picked to underperform a specific other algorithm over all problems under a specific choice of costs.)

As for the ants, I doubt they are perfectly optimized. They operate under specific limitations, which can be thought of as a cost set, that make some methods much easier. They may be very good within that cost set. Relax those limitations, though, and there may well be possible improvements.

Although it isn't directly related, I can't resist adding a fun little paradox waiting at the very entrance of evolutionary theory, for those who think anything evolved must be perfected. What happens to perfect predators - that is, perfect at predation? Answer, they become extinct.

(What happens to their own numbers, at first, if they are so good? Ok, then what happens to their prey? Then what? The inherent superiority of cooperation to predation makes perfection in a predator a failing, not an asset. Imperfection at predation is an "unintended" form of "husbandry", like leaving ungleaned grain as seed).

Posted 02-11-2004 08:50 AM

Carl Lumma
(Available for hire)
Berkeley, California

Registered: Feb 2004
Posts: 2

Originally posted by Jason Cawley
NFL theorems do not say that random search is always just as good as something else, in each particular test.


Right. Which is why, for a problem with no a priori setup (which is what the present problem appears to be), the sensible thing to do is implement random search.

Originally posted by Jason Cawley
The basis of NFL theorem results, though, is the space of tests concept, not random search. Which I spoke of as underspecification of the test conditions.


Why are you trying to make this problem into something it's not?

Originally posted by Jason Cawley
As for the ants, I doubt they are perfectly optimized. They operate under specific limitations, which can be thought of as a cost set, that make some methods much easier. They may be very good within that cost set. Relax those limitations, though, and there may well be possible improvements.


They operate under an exceedingly minimal cost set, which suggests they are good for general problems with no a priori conditions, like this Easter Egg thing you may have heard of.

Originally posted by Jason Cawley
Although it isn't directly related, I can't resist adding a fun little paradox waiting at the very entrance of evolutionary theory, for those who think anything evolved must be perfected. What happens to perfect predators - that is, perfect at predation? Answer, they become extinct.


Yes, that's very deep. Try starting a new thread.

I wasn't implying that the ant strategy is good because it's 'natural'.

-Carl

Posted 02-11-2004 05:43 PM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Just some resources for those interested in exploring further:

links related to the main "no free lunch" paper by Wolpert and Macready -

http://citeseer.nj.nec.com/wolpert96no.html

NKS book note on combinatorial optimization -

http://www.wolframscience.com/nksonline/page-985

NKS book section referred to, on the difficulty of satisfying constraints -

http://www.wolframscience.com/nksonline/page-342

The attachment is the Wolpert and Macready paper in pdf.

I hope some of this is useful.

Attachment: wolpert96no.pdf
This has been downloaded 1374 time(s).

Posted 02-12-2004 06:52 AM

KarlGamer
none
Ruston LA

Registered: Oct 2003
Posts: 6

Isn't this similar

to the math discovered by the guy "A Beautiful Mind" was based on?

__________________
Sincerely, your brother in arms

Posted 02-13-2004 08:25 AM

Adrian German
Indiana University
Bloomington, IN

Registered: Mar 2005
Posts: 19

The Easter Egg Problem: Towards a Space of Solutions

Here are the details of my NKS 2007 presentation, two weeks ago, on the Easter Egg problem:

http://www.cs.indiana.edu/~dgerman/.../dgerman-eep.nb

Save this as something.nb, then open it with Mathematica 6; it is set up as a slide show.

The abstracts (only) are posted on the conference website:

http://www.wolframscience.com/confe...rianGerman.html

Obviously, I'd be more than eager to answer any questions with respect to the approach I presented.

I'm also interested in jointly developing subsequent rule spaces and/or experiments with anyone who is interested.

Posted 07-30-2007 04:24 PM

Adrian German
Indiana University
Bloomington, IN

Registered: Mar 2005
Posts: 19

A plain HTML version of my notebook:

http://www.cs.indiana.edu/~dgerman/...dgerman-eep.htm

This is for those who might want to peruse it but don't have Mathematica 6 just yet.

Posted 07-31-2007 05:37 AM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

I appreciate the work involved in setting up these agents. It is also useful that you have put some definite rules on the problem, which gets it closer to having real better and worse solutions.

I think it is necessary to specify the problem further, however, to get to real distinguishability among algorithms meant to attack it, without getting stuck on the indeterminacy-over-possible-problems issue raised previously. There was also at least one point in how you approached it that I thought represented a change from Danner's initial idea, which I would put back as Danner had it.

The last point first. Danner presents the problem as one of efficient search, and the point is therefore to maximize found "eggs" while minimizing whatever contributes to search cost. Your agents, however, steer around "eggs" as obstacles. I think Danner meant to allow an agent to walk right over an egg and to "score" that egg as "found" if and when the walker reaches the egg - without any local vision I might add. In other words, a square is searched if and only if an agent lands on that square, and a searched egg square counts positively in score.

Since time is to be minimized, presumably a step costs in score. It is not specified whether having more agents costs but presumably it must. The simplest version of a cost across agents would be for each agent-step to have the same cost regardless of the number used.

One might later play with the relative cost of one agent taking twice as long compared to two agents taking half the time, etc, but it is a place to start. Similarly, one might imagine allowing multiple start locations, but your restricting the question to a single start location is fine as an arbitrary initial simplification. The same goes for allowing diagonal vs. "straight" movements - you effectively allow them and I think that is fine.

Then there is the question whether the size of the search space is known, whether the number of eggs present is known, and a key parameter, the value of finding all eggs compared to the size of the space to be searched. Comparing explicit algorithm simulations without specifying this value (even an expectation, rather than a fixed value) seems to me rather arbitrary, because for some values of this parameter the optimal search strategy is not to search at all.

To see this, imagine a single egg placed on a grid a thousand high by a thousand wide, with the egg having a value of 1 and the search cost per agent-step also set to 1. The chance that the lone egg is on one of the 8 squares adjacent to the walker's start is 8/999999, the expected gross return from a single walker-step is only 1/999999 against a cost of 1, and this expectation cannot be improved upon by any algorithm. Not "paying" for any walker-steps is the correct solution, because the expected value of the search, by any process, is negative.
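
Spelling out that arithmetic: the first expression is the expected net value of a single step, the second the expected score of committing to search until the egg is found, paying on average half a million steps for a value of 1.

$$\frac{1}{999999}\cdot 1 \;-\; 1 \;<\; 0, \qquad\qquad 1 \;-\; \frac{1 + 999999}{2} \;=\; -499999 .$$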

Thus we immediately see that for the problem of how to search to really arise, it must first be determined whether it is worth searching at all. Furthermore, any plausible algorithm that is going to be "best" at the problem (memory and on-line learning both being explicitly allowed) is going to re-assess the expected value of continuing its search, and halt when that expected value goes negative. To see this in a trivial way, any algorithm that keeps looking after finding what is known to be the last egg will be dominated by another that uses the same search but stops after finding the last egg.

This point is meant to focus attention on the fundamentally sequential nature of the problem, at least under my new simplifying condition that the maximal search area and the number of eggs are known (at least in expectation, though for simplicity I will start with them known outright).

I also point it out because whether a space-filling but slow approach is useful or wasteful depends in large part on whether the expected value of an exhaustive search is positive or negative. Naturally, one algorithm can still beat another even if both have positive expected value. But being able to reject negative-expectation approaches simplifies the space of possible approaches enormously, and it also tells one when to stop.

If one wants to explore the regime in which only relative performance matters, one just sets the value of finding anything at all higher than the cost of searching all locations. Only truly pointless back-and-forth movement would then have negative expected value, etc.

If the eggs are distributed truly uniformly in a grid of known size (with no uncertainty as to where its borders are, I mean), then, for egg count and square count both fixed, the expected value from each step onto a previously unsearched square is the same. It is not necessary to simulate placing the eggs and probabilistically finding or not finding them to discover the expectation; it can be found analytically.

You are drawing balls from bins, with some of the balls red and others white, the red balls valuable and the white having a cost.
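
In symbols, using nothing beyond standard sampling-without-replacement facts: with $N_{\mathrm{red}}$ eggs of value $v$ placed uniformly among $N$ squares and a cost $c$ per searched square, the expected net score after searching any $k$ distinct squares is

$$\mathbb{E}[S_k] \;=\; k\left(\frac{N_{\mathrm{red}}\, v}{N} \;-\; c\right),$$

whichever $k$ squares they are.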

There is only one other element of structure left before the problem reduces to a combinatorial matter. Searching the more distant squares is in principle always more expensive than searching the nearby ones, because even the fastest path to the distant ones must turn over a number of intervening squares, paying for each.

But a uniform placement of the eggs means there can be no countervailing higher value to searching the distant squares early rather than late. (Note that this would change if the egg boundary were unknown.) Having a higher minimum cost and the same expected gain, they will be searched after the near ones as a matter of course by any "best" algorithm, so they will always come in well above their minimum cost anyway.

So any path that avoids stepping on itself and traces through all the squares in sequence, is equally good. The only remaining issue is when to stop.

The expectation for the first square stepped on is (number of red * value of red)/total squares - agent-step cost. The agent then draws a red or it does not. Drawing a red reduces the number of red in the numerator by one; the number of remaining squares in the denominator is reduced by one regardless. If the first term is still larger than the second term (the agent-step cost, I mean) after that change, take another step; else halt.
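
A minimal Mathematica sketch of that step-by-step rule, my own function rather than anything from an attached notebook; the search is simulated as a sequence of draws, since with flat placement the spatial path does not matter.

greedySearch[nRed_, nTotal_, redValue_, stepCost_] := Module[
  {remaining = nTotal, redLeft = nRed, score = 0},
  While[remaining > 0 && redLeft*redValue/remaining > stepCost,
   score -= stepCost;                       (* pay for stepping onto one unsearched square *)
   If[RandomReal[] < redLeft/remaining,     (* that square happened to hold an egg *)
    score += redValue; redLeft--];
   remaining--];
  score]

N@Mean@Table[greedySearch[50, 2500, 10, 1], {2000}]   (* simulated expected score for one made-up setting *)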

Since I add to my expectation whenever I do take a step, and never take a step that reduces it, the expectation is guaranteed to be positive. Note that this does not mean my score is guaranteed to be maximal: hitting or missing is still a stochastic process, and ordinary sampling theory applies. And there can be a positive overall expectation even when the next step has a negative one. But this is well-understood ground.

We can also immediately see that using more agents at once may speed the search but must lower the expectation, if the cost of each agent-step is the same regardless of the number of agents used. Each agent-step is like the procedure described above, except that the agents after the first cannot decide whether to take their step based on the information found by the previous agents "on the same turn". So the sequential expectation will be strictly (though usually quite marginally) larger than the parallel one. For multiple agents to be useful, the cost per agent-step has to fall as the number of agents increases (say, because total elapsed time matters more as a cost than agent-steps do), or there has to be some greater prior information about the placement of the eggs.

I go through all of this to show, again, that priors about costs on the one hand, and about how the eggs are laid down on the other, are absolutely fundamental to one spatial search algorithm being superior to another in any meaningful sense. (Self-avoidance, to avoid costs with no possible gain, is the only exception, and that is easy to arrange in an arbitrarily large number of ways.) With only priors on costs, you get a typical sequential optimization problem, but one that has no meaningful spatial component.

With priors on the egg placement algorithm (beyond it being flat random), some spatial routines can be meaningfully superior to others. For example, if the eggs have different "sizes" (and corresponding value and ease of finding) or are known to be clumped, or are laid down according to a normal distribution with an unknown center location rather than a purely flat equal probability in every square - then spatial search routine A may be meaningfully better than spatial search routine B.

If there are a variety of known possible egg-placement sources, and which one really placed them is unknown, then there is still an answer, one that will incorporate on-line learning to try to determine which algorithm placed the eggs. But if it is known a priori that they were placed flat-random, then the problem reduces to a purely sequential, not a spatial, one.

If it is spatial search that is of theoretical interest here, therefore, somebody has to specify what spatial placement rules are supposed to be solved for. The pure random one is basically already solved, but the solution is not spatial. It is just sequential. No spatial information reduces a spatial problem to a sequential one.

I hope this is useful, and I am sorry it does not make more use of the approach you have worked on. I think what you are doing may well be useful for addressing the stated problem, but only after that problem has been narrowed enough to start having real answers that do depend on spatial search patterns.

Posted 08-01-2007 06:04 PM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Attached is a simple notebook illustrating the purely sequential version of the problem, when spatial information is non-existent.

I think the spatial version is a richer and more interesting problem, but to move to it requires real priors on the spatial distribution of the "eggs".

Attachment: sequentialsearchpayoffs.nb
This has been downloaded 1607 time(s).

Posted 08-01-2007 09:08 PM

Adrian German
Indiana University
Bloomington, IN

Registered: Mar 2005
Posts: 19

Originally posted by Jason Cawley
There was also at least one point in how you approached it that I thought represented a change from Danner's initial idea, that I would put back as Danner had it.

The last point first. Danner presents the problem as one of efficient search, and the point is therefore to maximize found "eggs" while minimizing whatever contributes to search cost. Your agents, however, steer around "eggs" as obstacles. I think Danner meant to allow an agent to walk right over an egg and to "score" that egg as "found" if and when the walker reaches the egg - without any local vision I might add.


I think that's exactly right. I wasn't entirely sure, and consequently I couldn't convince myself whether I should allow the agents to pick up the eggs (as in a real egg hunt) or whether the eggs are more like land mines or oil deposits, which are not to be picked up, only identified for further processing.

I think all your points are well taken, and a serious analysis must address them all thoroughly. I am looking through the 2,048 rule sets now, and, entirely separately from this problem, I am quite curious to see how these rules actually behave in an empty space. Perhaps there is at least one rule set that behaves in a strange and unique way. If there is anything interesting in what I see, I plan to report it.

Back to the Easter Egg problem: if a complete and unequivocal analysis can be made, I think we should strive to achieve it. Otherwise, perhaps we can set up some benchmark configurations and rank the algorithms accordingly. These could also help in narrowing down the problem specification even further. We might even end up with a whole range of Easter Egg problems as the various parameters in the specification are varied.

Thanks a lot, all the very best.

Posted 08-02-2007 04:39 AM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Sure, let's work at it from both ends.

I agree that your minimal searching agents are of independent interest. It would be interesting to see how they evolve with multiple agents on a field with numerous obstacles, or from different starting positions, or both - for example.

Meanwhile, I present my solution to the sequential rather than spatial version of the problem. Sequential stopping problems are a well-developed field. Usually they assume more statistical independence than selection from a fixed population involves, but it is easy enough to address the selection-search version by simulating on a sample and putting that in a well-developed framework.

A stopping problem involves finding a decision rule that tells me whether to keep searching on the next unknown tile, or instead to accept the value I have already reached and halt.

Here the parameters are NRed, effectively the number of eggs; NGray, the total number of cells that do not contain eggs; and RedValue, the value of finding each egg, with the cost of searching a square set to 1 (without loss of generality).

The solution tells you whether or not to search at all, whether to keep searching until the last red is found, or, as is usually better, to keep searching while your current value is below a threshold level and to stop as soon as you are above that level. It tells you what threshold level to use, and also shows a (simulated) expected value associated with each of a set of possible thresholds.
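
To make that concrete, here is a compressed sketch of the same idea. This is my own illustration, far simpler than the attached notebook, and the test numbers are arbitrary; under flat placement the cell order is irrelevant, so a random permutation of the cells stands in for the search path.

thresholdPayoff[nRed_, nGray_, redValue_, threshold_, trials_] := N@Mean@Table[
   Module[{cells = RandomSample@Join[ConstantArray[redValue, nRed], ConstantArray[0, nGray]],
     score = 0, i = 1},
    While[score < threshold && i <= nRed + nGray,
     score += cells[[i]] - 1;   (* each searched square costs 1 *)
     i++];
    score],
   {trials}]

curve = Table[{t, thresholdPayoff[20, 480, 30, t, 2000]}, {t, 0, 300, 25}];
MaximalBy[curve, Last]   (* the candidate threshold with the best simulated expected payoff *)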

The attached notebook runs everything, with the function called "solution" as the main routine. Illustrations show the other subroutines working, and text explains the problem setup.

If it is known a priori that there is no spatial pattern to the egg placement (i.e. it is known to be flat-random), and the size of the space, the number of eggs, and the value of the eggs are all known, then this is a proper rational decision procedure for the Easter Egg problem. The order in which the cells are searched is in that case a matter of indifference; any path that does not step on itself and eventually reaches all squares suffices, using that decision procedure to decide when to halt.

I hope that this example of solving it outright with those assumptions will motivate people to add more spatial structure to the egg placement. E.g. clumped placement in a known spectrum of sizes, or a normal distribution around an unknown center point, etc.

Attachment: sequentialstopping.nb
This has been downloaded 1472 time(s).

Posted 08-03-2007 12:12 AM

Adrian German
Indiana University
Bloomington, IN

Registered: Mar 2005
Posts: 19

Just a quick comment on how my pictures should be interpreted, for example:

http://www.cs.indiana.edu/~dgerman/...es/image040.gif

The picture has two interpretations, both meaningful:

a) My original meaning was that the eggs are shown in red and (my interpretation here) they can't be moved; they are like land mines. Note there are no obstacles per se in the picture, just the agent and the targets (eggs). If we had obstacles they would influence the search in much the same way as the eggs do, but they would be shown in a different color.

b) If we allow the agent to pick up the eggs, then it doesn't matter where the eggs really are (or were), since once we reach them we also pick them up (and remove them), and the search is not slowed down in any way. In that case the red dots in the picture should be interpreted as permanent non-egg obstacles. Those, obviously, would hinder the search just as the eggs of the previous case would, and my agents would keep steering around them as shown. So we can see both worlds in this picture. More importantly, this means that for the original Danner problem the search strategy presented is independent of the spatial configuration of the targets (eggs); its efficiency depends only on the spatial arrangement of the obstacles in the region.

Finally I want to point out two drawbacks of my current method (which I plan to address soon):

a) I have no stopping procedure, so with the current rules things go on forever, and

b) the border is kept equally attractive for agent movement by the current rules and color (state) conventions. Thus, especially in the later stages of the search, a lot of time is spent investigating portions of the border that could in fact be determined to be "done" (no pattern applies, and the last time a pattern did apply was a long time ago).

The situation is somewhat reminiscent of bubble sort where local decision rules produce global optimization (sorting) while also providing a way to determine when the procedure should end.

Meanwhile, thanks Jason for the solution to the sequential version of the problem---I plan to read it carefully very soon.

Last edited by Adrian German on 08-04-2007 at 02:33 PM

Posted 08-04-2007 04:16 AM

Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Just a brief comment on an obvious extension of my previous result. It is not required that the number of eggs be absolutely known, nor that the value of each egg be known and fixed.

If there is uncertainty in the number of eggs, but the distribution of that number is known, that is sufficient. One simply gets the expected-return curve for each possible number of eggs, weights each curve by the probability of that number of eggs, adds the weighted curves, and then asks for the maximum of the resulting aggregate curve.
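
In symbols: if the egg count is $k$ with probability $p_k$, and $R_k(\theta)$ is the expected return of stopping threshold $\theta$ when there are exactly $k$ eggs, the curve to maximize is

$$\bar{R}(\theta) \;=\; \sum_{k} p_k\, R_k(\theta), \qquad \theta^{*} \;=\; \arg\max_{\theta}\, \bar{R}(\theta).$$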

Or, if robustness to this unknown is more critical than absolute maximization of the expected return, one can pick a threshold that works well over most of the possible egg-number cases. Again one simply runs the code once for each possible egg number and keeps the whole threshold-to-payoff curve, not just its maximum point.

Giving the value of the eggs a random component (but about a known expectation) will increase the variance seen, of course, but will not change the expectation. It might also require a larger sample to be confident in the threshold value selected, though.

The method generates samples internally, so the standard deviation of the returns could be tracked as well as the expectation, if e.g. one wanted to find a risk adjusted best rather than a pure expectation best solution.

None of these generalizations will introduce a real spatial component to the problem, however. That will only enter if there is real spatial information about egg placement, accessible in principle in the course of the problem or known a priori. It is the knowledge that each square is as good as any other, provided by a prior stipulation that egg placement is flat-random, that collapses the problem to the pure sequential version.

The next obvious step, therefore, is to specify some realistic ways of relaxing that assumed flat-random placement. Here, input from those with realistic versions of the problem in mind is required. Assuming there is some structure to the placement of the eggs, what are the plausible types of structure for real instances of the problem? Is, e.g., a normal distribution around an unknown center point a realistic version of the problem? Is clumped placement, i.e. preferential placement near any existing prior egg, realistic?

Posted 08-06-2007 09:24 PM
