A New Kind of Science: The NKS Forum > Applied NKS > classical equations of motion
Author
Joe06880

Registered: Dec 2005
Posts: 3

classical equations of motion

I do classical simulations of the phase space coordinates of small molecules, using an approximate symplectic integrator solution to Hamilton's equations of motion. I would love to try simulations using NKS, but do not know where to start. How could I begin to, for instance, probe the dynamics and phase space structure of a small molecule using a CA? Using these simulations, how could I obtain a statistical value such as a rate constant for bond cleavage obtained by sampling over an appropriate range of random initial conditions?


12-29-2005 06:30 PM
Jason Cawley
Wolfram Science Group
Phoenix, AZ USA

Registered: Aug 2003
Posts: 712

Sorry it took a while to get back to you - I was busy over the holidays.

There is no reason to reinvent the wheel, and if you have a problem you can solve analytically with a DE then obviously you just go ahead and solve it analytically. We make Mathematica; we are not exactly unaware of which DEs can and cannot be solved, or uninterested in their use when they work. For simple enough molecules, minimal energy configurations are like that. For more complicated molecules, they aren't.

A classic case of a minimal-energy problem approached in an NKS way is the Ising model for spin glasses: a minimal-energy problem, with energies associated with neighboring spins being aligned or misaligned. While exact solutions can be found for regular lattices in 2D, there is no analytic solution in 3D, so CA-style simulation is used instead.
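To make that concrete, here is a minimal sketch of a 2D Ising simulation with a Metropolis update rule, in Python rather than Mathematica. The neighbor-energy rule and the Metropolis acceptance criterion are standard; the lattice size, temperature, and number of sweeps are arbitrary choices for illustration, not tuned values.

```python
# Minimal 2D Ising model on a periodic lattice with Metropolis updates.
# A sketch for illustration, not a production spin-glass simulation.
import math
import random

def ising_sweep(spins, beta):
    """One Metropolis sweep: on average one attempted flip per site."""
    n = len(spins)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        # Energy change from flipping spin (i, j) is 2 * s * (sum of neighbors)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins

random.seed(0)
n = 16
spins = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
for _ in range(50):
    ising_sweep(spins, beta=1.0)  # beta well above critical (~0.44): ordering
magnetization = abs(sum(sum(row) for row in spins)) / (n * n)
print(magnetization)  # tends toward 1 at low temperature
```

Running at low temperature (large beta) the lattice orders into aligned domains; near or above the critical temperature the magnetization stays small. That qualitative change is exactly the kind of thing one screens for visually across a range of parameters.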

Other examples of seemingly simple situations without analytic solutions are anharmonic oscillators in QM, or the quantum helium atom. When you hit the wall for full analytic solutions with a single atom, and that of the second simplest element, you get some sense of how much idealization, simplifying symmetry, etc. is necessary for analytic DE methods in realistically complicated cases. A note on page 1133 discusses some cases where analytic solutions have been found.

In the case of minimal-energy configurations for complex molecules, the general problem is known to be NP-complete. See the note on page 1146. The same is true for spins in a finite region of an Ising model if there is any magnetic field, though the problem is in P in 2 dimensions with no magnetic field.

It is not obvious that real physical systems manage to attain minimal configurations in such cases. The dynamics may instead determine which of several configurations close to each other in total energy is actually reached, and these may be energetically isolated from one another (local minima: the system gets stuck in one "fold" and can't "shake" into another). People working on protein folding are well aware of all these issues.

Once you realize you have a computationally intractable problem and dynamics are going to matter, you are pretty much forced into simulation of one kind or another as a method of attack. Either that, or you restrict yourself to simpler situations, e.g. with extra symmetries, constraints, or missing effects, in order to restrict the problem to a more tractable one, and then consider the solvable simple situation an approximation to the real, analytically intractable one.

In these cases, the NKS approach is to simulate rather than approximate, with the idea that a dynamic update rule may be a much more straightforward model of the real process. As in an Ising model, all aspects of the problem distinct from the computationally intricate bit may be idealized away to get a computationally clean set-up - one could call that the NKS version of approximation.

If none of those complexity issues arise in your particular problem, though, then of course you just use OKS and DE mathematics and get a full analytic solution. NKS supplements OKS; it does not replace it. OKS works fine for what it can do.

As for building an NKS style rule for a simulation of a system you understand, there is a basic procedure, but the details will depend on your problem and what effect you are trying to capture.

- First you need some data structure that encodes a state of the world: as much as needed to capture the effect you are modeling, and preferably as little beyond that as possible.

- Then you need a single-step function that maps one state of the world to another, closed, meaning the same type of data structure before and after. It may take any number of external parameters, and anything about the prior state, as arguments.

- The main NKS point here is usually to come up with a whole range of plausible ways the step might operate, a family of such functions rather than just one, and to enumerate these, associating each with some sort of rule number. Each should be consistent with what you know about the micro-causes involved, without stipulating too much beforehand.

- Then you NestList or FoldList the step function, to get an evolution rule, leaving behind a chain of data structures each of which is a state of the world.

- You also want some easy way to visualize the resulting data, so you can screen large amounts of it quickly. Other search functions or analysis functions that compress data from multiple runs are frequently added at this stage as well.
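The steps above can be sketched end to end with the simplest possible example, the elementary cellular automata, where the rule-number enumeration is Wolfram's standard one. This is Python rather than Mathematica; the `evolve` loop plays the role NestList plays there, and the text rendering stands in for ArrayPlot.

```python
# Sketch of the recipe above: a state-of-the-world data structure (a tuple
# of 0/1 cells), a closed one-step function, and an enumerated family of
# rules indexed by a rule number (Wolfram's elementary CA numbering).

def make_step(rule_number):
    """Return a step function for one of the 256 elementary CA rules."""
    # Bit k of rule_number is the new cell value for neighborhood code k.
    table = [(rule_number >> k) & 1 for k in range(8)]
    def step(cells):
        n = len(cells)
        return tuple(table[4 * cells[(i - 1) % n]
                           + 2 * cells[i]
                           + cells[(i + 1) % n]]
                     for i in range(n))
    return step

def evolve(rule_number, cells, steps):
    """Like NestList: the chain of states, including the initial one."""
    step = make_step(rule_number)
    history = [tuple(cells)]
    for _ in range(steps):
        history.append(step(history[-1]))
    return history

# Single-cell seed, rule 90: the familiar nested (Sierpinski) pattern.
init = tuple(1 if i == 15 else 0 for i in range(31))
for row in evolve(90, init, 8):
    print("".join(".#"[c] for c in row))
```

Because the whole family is indexed by `rule_number`, screening all 256 rules against the same initial condition is just a loop, which is the point of enumerating rather than hand-picking one rule.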

For every rule in the step function enumeration, for every set of parameter values including initials, you then get a possible evolution. Naturally you need your candidate step functions to all satisfy everything you know about your system, and the state of the world structure must track whatever you care about as a result as well as all the real drivers.

As for how you use a simulation model once you have it: if you want statistical properties under a range of initials, you run large batches of simulations and do ordinary statistics on this or that measure applied to the results. You may find nice single peaks, or you may find that some regions of initials or parameter values give quite different results than others and should not be thrown together and averaged, but instead distinguished. If you can look at the data directly, such distinct regions of behavior are often obvious. When they are harder to notice, standard methods like cluster analysis can help spot such regions in parameter space. The details depend on the problem.
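For the original question about rate constants, the batch-statistics idea looks roughly like this: run many simulations from random initial conditions, record the first time the event of interest occurs (bond cleavage, in the asker's case), and estimate a first-order rate constant from the survival times. The dynamics below are a deliberately toy stand-in (a biased random walk escaping over a barrier), not a molecular model; the barrier height, drift, and sample count are arbitrary illustrative choices.

```python
# Sketch: estimate a rate constant by sampling first-passage times over
# many random initial conditions.  The "dynamics" is a toy biased random
# walk, standing in for a real phase-space simulation.
import random
import statistics

def first_passage_time(x0, barrier=10.0, drift=0.05, max_steps=10_000):
    """Steps until the walk first crosses the barrier (None if it never does)."""
    x = x0
    for t in range(1, max_steps + 1):
        x += drift + random.gauss(0.0, 1.0)
        if x >= barrier:
            return t
    return None

random.seed(1)
times = [first_passage_time(x0=random.uniform(-1.0, 1.0))
         for _ in range(500)]
times = [t for t in times if t is not None]   # drop censored runs
mean_lifetime = statistics.mean(times)
rate_estimate = 1.0 / mean_lifetime           # first-order rate, per step
print(len(times), mean_lifetime, rate_estimate)
```

Before trusting a single averaged number like `rate_estimate`, it is worth histogramming `times` and checking whether the distribution is a single clean exponential or a mixture from distinct regions of initial conditions, which is exactly the "don't throw different regions together" caution above.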

Sometimes you can prove mathematical results about average or limit behaviors, exploiting effective independence of one sort or another; this is already done in many places in statistical mechanics. But in general there is no assurance that the details won't matter or can always be safely averaged away. Your simple update rule may exhibit any of the range of simple to complex behaviors seen for all simple programs: some falling into fixed points, others cycling predictably, others showing ongoing fluctuation within a bounded region, etc. You have to simulate it and see.
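For a deterministic update rule with a finite (hashable) state, "simulate it and see" can even be partly automated: record every state visited and report when one repeats, which distinguishes fixed points from longer cycles. This is a generic sketch; the iterated map used as the example is arbitrary.

```python
# Sketch: classify the long-run behavior of a deterministic update rule
# by hashing visited states until one repeats.

def classify(step, state, max_steps=1000):
    """Return ('fixed', t), ('cycle', length), or ('no repeat', max_steps)."""
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        state = step(state)
        if state in seen:
            length = t - seen[state]
            return ("fixed", t) if length == 1 else ("cycle", length)
        seen[state] = t
    return ("no repeat", max_steps)

# Toy example: iterate x -> x^2 mod 11 from x = 3: 3, 9, 4, 5, 3, ...
print(classify(lambda x: (x * x) % 11, 3))  # a cycle of length 4
```

The same function works unchanged on CA states stored as tuples, so it can be run over a whole enumerated rule family to sort rules into fixed-point, cyclic, and long-transient classes; genuinely complex rules simply exhaust `max_steps` without repeating.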

I hope this helps.


01-05-2006 06:35 PM
