Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
I appreciate the work involved in setting up these agents. It is also useful that you have put some definite rules on the problem, which brings it closer to having genuinely better and worse solutions.
I think it is necessary to specify the problem further, however, to get real distinguishability among algorithms meant to attack it, without getting stuck on the indeterminacy-over-possible-problems issue raised previously. There was also at least one point where your approach seemed to me a change from Danner's initial idea, which I would put back as Danner had it.
The last point first. Danner presents the problem as one of efficient search, and the point is therefore to maximize found "eggs" while minimizing whatever contributes to search cost. Your agents, however, steer around "eggs" as obstacles. I think Danner meant to allow an agent to walk right over an egg and to "score" that egg as "found" if and when the walker reaches the egg - without any local vision I might add. In other words, a square is searched if and only if an agent lands on that square, and a searched egg square counts positively in score.
Since time is to be minimized, presumably each step costs against the score. It is not specified whether using more agents costs anything, but presumably it must. The simplest version of a cost across agents would be for each agent-step to have the same cost regardless of the number of agents used.
One might later play with the relative cost of one agent taking twice as long compared to two agents taking half the time, etc., but it is a place to start. Similarly, one might imagine allowing multiple start locations, but your restricting the question to a single start location is fine as an arbitrary initial simplification. The same goes for allowing diagonal vs. "straight" movements - you effectively allow them, and I think that is fine.
Then there is the question whether the size of the search space is known, whether the number of eggs present is known, and a key parameter, the value of finding all eggs compared to the size of the space to be searched. Comparing explicit algorithm simulations without specifying this value (even an expectation, rather than a fixed value) seems to me rather arbitrary, because for some values of this parameter the optimal search strategy is not to search at all.
To see this, imagine a single egg placed on a grid a thousand squares high by a thousand wide, with the egg having a value of 1 and the search cost per agent-step also set at 1. The chance that the lone egg is on one of the 8 squares adjacent to the walker's start is 8/999999, the chance that any single walker step pays for itself is only 1/999999, and this cannot be improved upon by any algorithm. Not "paying" for any walker-steps is the correct solution, because the expected value of the search, by any process, is negative.
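As a sanity check on that arithmetic, here is a minimal sketch (Python; the grid size, egg value, and step cost are the ones from the example above):

```python
# Break-even arithmetic for a lone egg on a 1000 x 1000 grid,
# with egg value 1 and agent-step cost 1, as in the example.
squares = 1000 * 1000
unsearched = squares - 1        # the walker's start square is already "searched"
egg_value = 1.0
step_cost = 1.0

# Chance the lone egg sits on any one particular unsearched square.
p_hit = 1.0 / unsearched

# Chance the egg is on one of the 8 squares adjacent to the start.
p_adjacent = 8.0 / unsearched

# Expected value of a single agent-step onto a fresh square: negative,
# so no search process can have positive expectation here.
ev_step = egg_value * p_hit - step_cost
```

Since `ev_step` comes out at about -0.999999, not moving at all dominates any search.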
Thus we immediately see that, for the problem of how to search really to arise, it must first be determined whether it is worth searching at all. Furthermore, any plausible algorithm that is going to be "best" at the problem (memory and on-line learning both being explicitly allowed) is going to re-assess the expected value of continuing its search, and halt when that expected value goes negative. To see this in a trivial way: any algorithm that keeps looking after finding what is known to be the last egg is dominated by another that uses the same search but stops after finding the last egg.
This point is meant to focus attention on the fundamentally sequential nature of the problem - at least under my new simplifying condition that the maximal search area and egg number are known (in expectation at the least, though for simplicity I will start with them known outright).
I also point it out because whether a space-filling but slow approach is useful or wasteful depends in large part on whether the expected value of an exhaustive search is positive or negative. Naturally, one algorithm can still beat another even if both have positive expected value. But being able to reject negative-expectation approaches simplifies the possibilities enormously, and it also tells one when to stop.
If one wants to explore the region in which only relative performance matters, one simply sets the value of finding anything at all higher than the cost of searching every location. Only truly pointless back-and-forth movement would then have negative expected value.
If the eggs are distributed truly uniformly in a grid of known size (with no uncertainty as to where its borders are, I mean), then (for egg count and square count both fixed) the expected value of each step onto a previously unsearched square is the same. It is not necessary to simulate placing the eggs and probabilistically finding or not finding them to discover the expectation; it can be found analytically.
You are drawing balls from an urn, some of the balls red and the others white, the red balls valuable and the white carrying a cost.
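The analytical point can be made concrete with a small sketch (Python; the numbers N, R, v, c are hypothetical stand-ins for square count, egg count, egg value, and step cost). By linearity of expectation, k draws without replacement yield k*R/N reds on average, and a direct hypergeometric computation cross-checks it:

```python
from math import comb

# Hypothetical small parameters: N squares, R eggs (red balls),
# egg value v, agent-step cost c.
N, R, v, c = 20, 3, 5.0, 1.0

def expected_score(k):
    # Linearity of expectation: each fresh square has the same chance R/N
    # of holding an egg, so k draws yield k*R/N eggs on average.
    return k * (R * v / N - c)

def expected_score_direct(k):
    # The same quantity computed directly from the hypergeometric
    # distribution of reds among k draws without replacement.
    mean_reds = sum(r * comb(R, r) * comb(N - R, k - r) / comb(N, k)
                    for r in range(min(R, k) + 1))
    return mean_reds * v - k * c
```

No placement-and-search simulation is needed; the two functions agree for every k.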
There is only one other element of structure left before the problem is reduced to a combinatorial matter. Search of the more distant squares is in principle always more expensive than search of the nearby ones, because even the fastest path to the distant ones must pass over a number of intervening squares, paying for each.
But a uniform placement of the eggs means there can be no countervailing higher value to searching the distant squares early rather than late. (Note that this would change if the egg boundary were unknown.) Having a higher minimum cost and the same expected gain, the distant squares will be searched after the near ones as a matter of course by any "best" algorithm, so their actual cost will sit well above that minimum anyway.
So any path that avoids stepping on itself and traces through all the squares in sequence, is equally good. The only remaining issue is when to stop.
The expectation for the first square stepped on is (number of reds * value of a red)/(total squares) - agent-step cost. The step then draws a red or it does not. Drawing a red reduces the number of reds in the numerator by one; the number of remaining squares in the denominator falls by one regardless. If, after that update, the first term is still larger than the second (the agent-step cost, I mean), take another step; else halt.
Since I add to my expectation whenever I do take a step and never take a step that reduces it, the expectation is guaranteed to be positive. Note that this does not mean my score is guaranteed to be maximal - hitting or missing is still a stochastic process and ordinary sampling theory would apply. And there can be a positive overall expectation even when the next step has a negative one. But this is well understood ground.
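A simulation sketch of that stopping rule (Python; the small parameters, the function name, and the flat-random placement are hypothetical illustrations, not anyone's stated setup):

```python
import random

def search(reds, total, value, cost, rng):
    # Flat-random egg placement over `total` squares.
    red_squares = set(rng.sample(range(total), reds))
    # Any self-avoiding path through fresh squares is equivalent here,
    # so a random visiting order stands in for the walk.
    order = list(range(total))
    rng.shuffle(order)
    score = 0.0
    remaining_red, remaining = reds, total
    for square in order:
        # Per-step expectation: (remaining reds * value)/remaining - cost.
        if remaining_red * value / remaining <= cost:
            break  # halt: the next step no longer has positive expectation
        score -= cost
        if square in red_squares:
            score += value
            remaining_red -= 1
        remaining -= 1
    return score

# Averaged over many runs the score is positive, as argued above -
# though any single run can of course come out negative.
rng = random.Random(12345)
mean = sum(search(3, 20, 10.0, 1.0, rng) for _ in range(3000)) / 3000
```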
We can also immediately see that using more agents at once may speed the search but must lower the expectation, if the cost of each agent-step is the same regardless of the number of agents used. Each agent-step is like the procedure described above, except that the agents after the first cannot decide whether to take their step based on the information found by the previous agents "on the same turn". So the sequential expectation will be strictly (though usually quite marginally) larger than the parallel one. For multiple agents to be useful, the cost per agent-step has to fall as the number of agents increases (say, because total turns matter more as a cost than agent-steps) - or there has to be some greater prior information about the placement of the eggs.
I go through all of this to show, again, that priors about costs on the one hand, and about how the eggs are laid down on the other, are absolutely fundamental to one spatial search algorithm being superior to another in any meaningful sense. (Self-avoidance, to avoid costs with no possible gain, is the only exception, and that is easy to arrange in an arbitrarily large number of ways.) With only priors on costs, you get a typical sequential optimization problem, but one that has no meaningful spatial component.
With priors on the egg placement algorithm (beyond it being flat random), some spatial routines can be meaningfully superior to others. For example, if the eggs have different "sizes" (and corresponding value and ease of finding) or are known to be clumped, or are laid down according to a normal distribution with an unknown center location rather than a purely flat equal probability in every square - then spatial search routine A may be meaningfully better than spatial search routine B.
If there are a variety of known possible egg-placement sources, and which of them really placed the eggs is unknown, then there is still an answer - one that will incorporate on-line learning to try to determine which algorithm placed them. But if it is known a priori that they were placed flat-random, then the problem reduces to a purely sequential, not a spatial, one.
If it is spatial search that is of theoretical interest here, therefore, somebody has to specify what spatial placement rules are supposed to be solved for. The pure random one is basically already solved, but the solution is not spatial. It is just sequential. No spatial information reduces a spatial problem to a sequential one.
I hope this is useful, and I am sorry it does not make more use of the approach you have worked on. I think what you are doing may well be useful for addressing the stated problem. But only after that problem has been specifically narrowed enough to start having real answers that do depend on spatial search patterns.