algoTrade with raspberryPi

algoTrade

tradeRunner is a trading platform I have been developing since the beginning of the year. At first the idea was to prepare an order-automation tool for a friend; then I introduced an algorithm for finding repeated trade patterns. To test and improve the algorithm's performance I started developing extra modules, which in the end turned into a fully functional trading platform.

Except for “numPy” and “pandas”, all libraries were created from scratch for the platform. This was not the most efficient way, as solid platforms already exist, but I decided to go the more experimental and fun way. Above is the simplified pipeline of the model.

main modules>

  • tradeNet communication module with the markets; archives historical data at regular intervals
  • tradeCore takes market data (live or backtest) and predicts patterns
  • tradeRunner executes tradeCore in the realtime market
  • tradeBackTester runs tradeCore with historical data and measures the performance of the algorithm
  • tradeEvo tries to improve algorithm performance by altering its config parameters

dataFlow> communication between modules is achieved with the following data blocks and files

  • liveMarketData
  • BackTestArch
  • periodicCheck
  • liveLedger
  • algoConfig

performance output>

  • performanceReport back-test reporting
  • evolutionChart evolution algorithm with different input-output comparison
  • performanceChart runs the algorithm on a selected timeline

evolutionAlgorithm>

The platform uses an evolution algorithm (EA) to maximise its performance. The current one is a 2-dimensional EA: alter one input, check the performance change, and keep the highest-performing parameters. However, as the core algorithms use multi-dimensional input, this is not very optimal. I am currently developing a new model, which will be covered in the next post.
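As a sketch of what this 2-dimensional EA does, here is a minimal one-parameter-at-a-time search. The function and parameter names are illustrative, and a toy quadratic stands in for a real backtest score:

```python
def evolve_2d(score_fn, config, step=0.1, generations=50):
    """Alter one input at a time, keep the change only if the
    backtest score improves, and return the best parameter set."""
    best = dict(config)
    best_score = score_fn(best)
    for _ in range(generations):
        for key in best:
            for delta in (step, -step):
                trial = dict(best)
                trial[key] += delta
                trial_score = score_fn(trial)
                if trial_score > best_score:
                    best, best_score = trial, trial_score
    return best, best_score

# toy "backtest": score peaks at fast=0.5, slow=1.5
score = lambda c: -((c["fast"] - 0.5) ** 2 + (c["slow"] - 1.5) ** 2)
params, _ = evolve_2d(score, {"fast": 0.0, "slow": 1.0})
```

Because it moves one axis at a time, this kind of search stalls when parameters interact, which is exactly why a multi-dimensional EA is the planned next step.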

hardware>

piStack.jpg
pi stack
  • the practice-live system runs on a Raspberry Pi 3B, with 1 GB RAM and a 1.2 GHz 4-core CPU
  • the real-live system runs on a Raspberry Pi 3B+, with 1 GB RAM and a 1.4 GHz 4-core CPU
  • the backTester and evolution algorithms run on a PC with an i7-4790K CPU and 32 GB RAM

On the PC there is no limit on simultaneously running algorithms; the Raspberry Pi, however, has limited CPU and memory capacity. Code optimization and additional fans improved the setup, but memory is still an issue, so a second optimization pass to decrease memory usage is planned.

pis.JPG
practice and live TR on Raspberry Pi 3B & 3B+

FutureImprovements>

  • additional pairs will be analysed with the BT module
  • a multi-dimensional EA will be introduced
  • pattern recognition and changing algoPatterns will be automated, instead of running EA modules and analyzing data manually
  • the UI will be improved
  • risk management will be improved
  • error-correction protocols will be improved

EvolutionBox 1.2.0


EvolutionBox Overview

EvolutionBox is a Unity-based simulation where I explore evolution as the key mechanism for learning. The first steps focus on creating agents that interact with their environment and evolve over time. In future updates, I’ll dive deeper into pattern recognition and self-learning.


How It Works

The simulation mimics three core principles of evolution:

  1. Variation: Each agent has unique internal traits (defined by their genes).
  2. Mutation: When agents reproduce, their offspring inherit their genes with slight changes (mutations).
  3. Natural Selection: Only agents with the most suitable traits survive and reproduce, passing their genes to the next generation.

Agent Goals:
Every agent starts with the same set of traits and two main objectives:

  • Survive (by finding food).
  • Reproduce (by mating and having offspring).

Each new generation carries mutated traits, and the agents best adapted to their environment are more likely to survive and pass on their genes.


Genetic Traits

Agents inherit the following traits:

  1. Food Priority: How important food is for survival.
  2. Mating Priority: How much the agent prioritizes reproduction.
  3. ChillOut Priority: A measure of laziness or energy-saving behavior.
  4. Survival Level: How brave the agent is (e.g., risking survival for other goals).
  5. Energy Transfer: How much energy a parent willingly gives to its offspring (think “good parenting”).

How Agents Choose Actions

Agents decide what to do based on their priorities and current needs. They calculate the “weight” of each possible action and choose the one that feels most urgent. For example:

  • Find Food: When hungry, agents look for food.
  • Find a Mate: Agents search for mature partners when ready to reproduce.
  • Chill Out: Rest and save energy when they’re not in immediate need.
  • Death: Agents die if they run out of food or grow too old.
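The action choice described above can be sketched as a weighted-urgency arbitration. The exact weighting formulas here are my assumptions, not the simulation's actual code:

```python
def choose_action(agent):
    """Score each candidate action by urgency times the agent's
    inherited priority for it, then pick the highest-scoring one."""
    hunger = 1.0 - agent["energy"]          # low energy -> food is urgent
    scores = {
        "find_food": agent["food_priority"] * hunger,
        "find_mate": agent["mating_priority"] * (1.0 if agent["mature"] else 0.0),
        "chill_out": agent["chillout_priority"] * agent["energy"],
    }
    return max(scores, key=scores.get)

hungry = {"energy": 0.2, "mature": True,
          "food_priority": 0.5, "mating_priority": 0.3, "chillout_priority": 0.2}
action = choose_action(hungry)   # food urgency dominates at low energy
```

The same agent with a full energy bar would score mating highest instead, which is how one set of fixed priorities still produces situation-dependent behavior.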

Passing Traits to Offspring

While agents’ decisions change depending on their situation, their core priorities are set at birth. Each new offspring inherits priorities from their parents, with a mutation factor that adds variability.

  • Food, mating, and chillOut priorities are linked, so when one increases, the others may decrease.
  • Survival and energy transfer traits mutate independently.

Over time, this process creates agents better adapted to their environment.
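A minimal sketch of this inheritance step (the trait names follow the list above; the mutation range and the renormalization rule are my assumptions):

```python
import random

def inherit(parent, mutation=0.05):
    """Mutate the three linked priorities and renormalize them so they
    sum to 1 (one rising pushes the others down); survival level and
    energy transfer mutate independently, clamped to [0, 1]."""
    linked = ["food_priority", "mating_priority", "chillout_priority"]
    raw = {t: max(0.0, parent[t] + random.uniform(-mutation, mutation))
           for t in linked}
    total = sum(raw.values()) or 1.0
    child = {t: raw[t] / total for t in linked}
    for t in ("survival_level", "energy_transfer"):
        child[t] = min(1.0, max(0.0, parent[t] + random.uniform(-mutation, mutation)))
    return child
```

Renormalizing the linked triple is one simple way to get the trade-off the text describes: an offspring cannot raise its mating priority without lowering food or chillOut.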


This simulation demonstrates how simple rules of evolution can create complex behaviors.

————————————————————————–

Results:

After 200,000 cycles of evolution under default parameters, here’s what the data shows:

  1. Mating vs. Food Priority:
    Agents gradually prioritize mating over food and transfer more energy to their offspring over time. This behavior leads to shorter lifespans, as agents are more likely to starve to death. However, they produce more offspring, passing these traits to the next generation. Essentially, the survival of their genes becomes more important than their own survival.
  2. Laziness (ChillOut Priority):
    Traits associated with laziness are not favored by evolution, as they decrease survival chances and reproductive success.
  3. Energy Transfer vs. Natural Death:
    The relationship between energy transfer and natural death is evident in the data. As energy transfer increases, more agents die of starvation. However, their offspring are better equipped to survive and reproduce, ensuring the continuation of their genetic line.

Next Steps

1. Adding Pattern Recognition:

Currently, agents have basic pre-programmed knowledge about categories like “self,” “other agents,” and “food.” For example, if an object is labeled as “food,” the agent inherently knows it can eat it.

In the next version, I aim to integrate a neural network system for each agent. This system will allow agents to:

  • Learn and classify objects in their environment without prior knowledge.
  • Identify patterns like:
    • “If the object is green, being near it increases my food level, so it must be food.”
    • “If the room is green, there’s likely food nearby. If it’s red, there’s a higher chance of encountering danger.”

Over time, agents will draw conclusions from these patterns, improving their survival skills. Some of this learned knowledge may even be passed down genetically, resulting in smarter agents overall.


2. Introducing More Interaction:

The current version focuses on instinctive behaviors like searching for food and mating. Future updates will introduce:

  • Cooperation vs. Competition:
    Agents could develop cooperative or competitive behaviors based on resource availability. For example:

    • Cooperation may be driven by oxytocin-like hormones, leading to energy-sharing behaviors and small social groups.
    • Competition could stem from testosterone-like hormones, encouraging territorial aggression and dominance.
  • Selective Mating:
    Male agents could compete for mates, while females may prioritize selecting partners with the best genetic traits. This behavior would refine the gene pool, ensuring stronger, more adaptable offspring.

Some nice references:

http://www.vice.com/read/sorry-religions-human-consciousness-is-just-a-consequence-of-evolution

http://faculty.philosophy.umd.edu/pcarruthers/Evolution-of-consciousness.htm

http://www.independent.co.uk/news/science/insects-are-conscious-claims-major-paper-that-could-show-us-how-our-own-thoughts-began-a7002151.html

http://spectrum.ieee.org/automaton/robotics/robotics-software/bizarre-soft-robots-evolve-to-run

http://www.huffingtonpost.com/the-conversation-us/evolving-our-way-to-artif_b_9183434.html

Feather Creator

 

featherCreator.jpg

After a couple of bird rigging projects, I prepared a wing & feather automation tool. It seems that most birds share the same pattern of feather segments, with variation in the number of feathers in these segments.

Intro ck_featherCreate_1.1:

ck_featherCreate is a Maya rigging tool for wing and feather automation.

How to install:

  • copy scripts to > scripts directory, e.g. Documents\maya\2017\prefs\scripts
  • copy icons to > icons directory, e.g. Documents\maya\2017\prefs\icons
  • copy textures to > scene project directory
  • run command > ck_featherCreateUI

featherCreatorPipeline.jpg

Working with Base scene:

  • the base scene is not a prerequisite for running the script, but it makes it easier to start using the setup. Load the scene; the curves and wing are already placed.
  • open the UI by running the command “ck_featherCreateUI”
  • on the UI, “offs” controls the distance between feather groups, “Ins” controls the rotation between each group, and “rand” controls random scale variation between individual groups
  • the UI comes with curve segments and the wing root already defined for the left side. Press the buttons in this order> +createFeathers+ createRig +cleanUp
  • for the right side, select and define the curve segments and wing root joint on the UI, then press the buttons in the same order> +createFeathers+ createRig +cleanUp

ck_featherCreateUI

How the script works (under the hood):

  1. from the defined curves, create mid-curves (via weighted blend shapes)
  2. loft surfaces on the curves, and create follicles on each surface; each follicle controls one control point of a CV curve
  3. each curve drives a joint-chain IK-handle system; twist is achieved from the base and tip surface tangents, and feathers are scattered according to feather type and curve length
  4. the surface is controlled by the wing's root joints; each control has a parent switch for the folding feature
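As an illustration of the scattering in step 3, here is a stand-in outside Maya that spaces feather roots evenly along a segment by curve length. The function and its inputs are hypothetical, not the script's actual API:

```python
def scatter_feathers(curve_length, counts):
    """Return (feather_type, position) pairs, spacing all feathers
    evenly along the segment in the order the types are given."""
    total = sum(counts.values())
    placements, i = [], 0
    for ftype, n in counts.items():
        for _ in range(n):
            u = (i + 0.5) / total          # normalized parameter, centered in its slot
            placements.append((ftype, round(u * curve_length, 3)))
            i += 1
    return placements

# a 10-unit segment holding 3 primaries followed by 2 secondaries
roots = scatter_feathers(10.0, {"primary": 3, "secondary": 2})
```

Working in a normalized parameter first, then multiplying by curve length, is what lets the same segment layout serve wings of different sizes.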

Limitations & future Improvements>

  • Folding is still experimental. The wing parent switches should be enough for folding, but in general it may require extra blend shapes on the attachment surfaces.
  • Mirroring can be automated; in the current version (1.1) mirroring is done by redefining the guide curves.
  • In the current version (1.1) rigging should be done on a flat, T-posed wing. On a posed wing the rig still works correctly, but feather alignment may be problematic. The next version could add an intermediary stage between feather creation and rigging for extra alignment control.
  • This alignment system could be kept after rigging for secondary control of folding.
  • As most birds share the same pattern of feather segments, with variation in the number of feathers in the primary and secondary rows, the rig should be flexible across a wide range of wing types. However, on some birds the secondary coverts become two layers of feathers instead of one (reference image, bottom-right, chicken wing). Custom wing segments can be added in the next release.

Version History

  • 1.0 first working version
  • 1.1 cleanUp function added, group nodes for cleaner outline

References:

featherTypes.jpg

 

machine consciousness

For a while I have been trying to make a basic Arduino robot that interacts with its environment. I built some versions that do pretty well at their given purposes: taking data from the environment and acting accordingly. However, the robot's decision-making process is a bit frustrating. It is linear and predictable, and all decisions were based on predefined rules. Now the hardware of robot version 0.5 is ready, and before making it “alive” I am looking for a way to make it more self-learning, more environment-adaptive, and to some degree a “conscious robot” instead of a purely mechanical machine, without losing the fun part or getting lost in super-technical details. With a small group of friends from different backgrounds I recently started a little workshop on this topic. In this post I will try to summarize our starting point, objectives, and future experiment plans.
Current method's limitations
Let's start with the current decision-making process' limitations. One problem is adapting to the environment. We observe this in some instinctive machines in nature: there are interesting experiments about wasps and their instinctive behaviors. A wasp finds prey, paralyzes it, lays a single egg on the prey, and puts it in the nest. The wasp inspects its prey several times, then seals the nest. In one of Fabre's experiments (1915) he completely removed the prey from the nest. The wasp checked the nest, found that something was missing, but still sealed the nest as if the prey were in it. He concluded: “Instinct knows everything, in the undeviating paths marked out for it; it knows nothing, outside those paths.” (https://archive.org/stream/huntingwasps00fabr#page/210/mode/2up) (Fabre 1915: p. 211)
Like our robot, the wasp does not learn; its knowledge is static, hence it is not adaptive to its environment. So even if it is designed in a nearly perfect way, the robot will survive in many cases, until it confronts a situation its designer never taught it. Then it fails. In nature, thanks to evolution, we encounter all kinds of insects best fitted to their environment, but the individual insect does not adapt within its own lifetime.
The aim for our robot project may be to adapt to the environment, like superorganisms such as ants, which adapt to and even reshape their surroundings for their benefit.
      Creating Conscious Being
So how do we create a more self-aware, more conscious decision-making process? If our aim were merely to create an artificial intelligence that passes the Turing test, an algorithm that makes the audience believe they are interacting with an intelligent agent might be enough. However, the inner mechanism of cognition matters, and merely observing outputs does not prove that we have a conscious machine.
“The Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making ‘machines that fly so exactly like pigeons that they can fool even other pigeons.’” (Artificial Intelligence: A Modern Approach)
Also, I think creating a conscious being should not be mixed up with creating a humanlike thinking mechanism. Although our starting point is naturally the human being, machines don't have hormones and bodies, which might be what makes us human. So another question to be answered: how will machine consciousness be achieved?
 machineConsciousness.psd_ver0
Another approach:
Instead of one center that receives inputs and gives output according to predefined rules, we can try a different mechanism. There will be many cores, each with predefined rules for evaluating its environment and proposing an action. But the agent can run only one action per cycle, so there will be one main core that receives all the proposed actions and chooses the one with the highest priority. To choose, each core's proposed action has two internal attributes: importance (how important the action is for the core itself) and weight (the priority of that core relative to the others).
The importance value is decided by how important the action is for the core itself. For instance, if the robot's energy level is 25%, finding a lighted area for recharging the solar cells is crucial; let's say that is 0.75 importance. In another case the battery is nearly full, so the importance is 0.0.
Weight values are correlated between cores, so when one core's weight increases, the others have to decrease. The weights might be decided according to the environmental situation. For instance, suppose we have two decision cores, one for “survival” and one for “fun”, and a balanced environment of play and work; then the weights will also be balanced. But when the environment becomes hostile (winter comes, with little sunlight for charging the batteries), the survival core's weight increases. Then summer comes, there is little worry about charging the batteries, and the play core gets more weight.
The main core chooses a decision by comparing the scores of the cores. The score formula is (weight × importance); the highest-scoring core's action is taken and executed. After each decision, according to success or failure, the system re-weights all the cores. So the system becomes more and more adaptive to its environment.
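The scoring step above can be sketched directly. Everything here (the dict shapes, the example values from the 25%-battery scenario) is illustrative:

```python
def main_core(proposals):
    """Pick the action whose core scores highest: score = weight * importance."""
    best = max(proposals, key=lambda p: p["weight"] * p["importance"])
    return best["action"]

proposals = [
    # survival core: battery at 25%, finding light is crucial
    {"core": "survival", "action": "seek_light", "weight": 0.7, "importance": 0.75},
    # fun core: playing is appealing, but the survival core currently outweighs it
    {"core": "fun", "action": "play", "weight": 0.3, "importance": 0.9},
]
chosen = main_core(proposals)   # 0.7*0.75 = 0.525 beats 0.3*0.9 = 0.27
```

Note that the fun core's own importance can be high; it is the environment-driven weight that lets the survival core win here.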
I have just started looking at neural networks; I guess we will get great help from them in the further design of the decision-making pipeline. So, further reading on this topic is needed.
Designing cores:
In order to design the cores, we will take some organisms as models, like Maslow's pyramid but in a simplified way. An amoeba may have just one core, for mere survival: it searches for food and reproduces. A cat has the same survival core plus a “fun and play” core. A human may have both cores plus self-actualization. The pyramid shape just defines the priorities; under different environmental conditions the priorities may change order. This is where adapting to the environment comes in.
Help from the Greek gods: I believe human culture created these gods by observing and understanding itself, and later its culture. In other words, they are stylized, exaggerated human attributes embodied in the shape of gods. A single human being carries all these gods in its consciousness, at different levels and weights, and its decision-making process is a continuous fight, or harmony, between these cores.
Inside, we all have Apollo's reason and logical thinking, and Dionysus' chaos and appeal to emotion and instinct. They are not opposites or rivals but interlaced entities in our decision-making process, and in our method we will try to capture them. We will run the process backwards: we will try to achieve a conscious being by combining these gods in a network of decision-making.
We might also take help from Jung's archetypes, combining all the archetypes into one single entity. But that is a topic for a later discussion.
Where does free will stand?
With this multi-core technique, we might achieve an instinctive system that adapts to its environment. But where is free will? Isn't this system just an instinctive mechanism, plus the attribute of adapting to its environment? Unfortunately, the existence of free will is questionable in itself. In the 1980s an experiment found that the timing of conscious decisions was consistently preceded by several hundred milliseconds of background preparatory brain activity. So all decisions were already made on an unconscious level before we consciously decided them. It was an irritating idea, and many experiments have supported it.
However, recent discoveries find traces of conscious intervention in the decision-making process. Decisions are still made on an unconscious level, but at action time we have the power to stop them or let them go. This article summarizes these experiments (http://nymag.com/scienceofus/2016/02/a-neuroscience-finding-on-free-will.html).
Adapting free will to our mechanism, we should have one extra core that continuously observes the decision-making process, is aware of all the cores, checks the balance between them, and intervenes when the situation requires. This part will be not only the source of free will but also the self-awareness part. It is also our only conscious level, while all the others are decision-making mechanisms on an unconscious level. Further discussion on this topic is required; mental illnesses related to identity clashes may be a nice starting point.
To sum up, we will have four mechanisms that individually evaluate the situation and come together to make a decision:
1. Individual cores: evaluate the situation and decide according to their own benefit, disregarding the other cores.
2. Main core: combines all the separate cores' decisions, compares them according to their weights, and chooses the highest-rated one.
3. Evaluation mechanism: memorizes taken decisions, evaluates their performance by checking the robot's changed situation and the environment's feedback, and changes the individual cores' weights accordingly (positive/negative reinforcement, learning with experience).
4. Control mechanism: checks the balance of the cores, intervenes when necessary, and reshapes the weights (free will and self-awareness).
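Mechanism 3, the learning-with-experience step, could look like the following sketch. The update rule and the learning rate are my assumptions:

```python
def reweight(weights, chosen_core, reward, rate=0.1):
    """After a decision, nudge the chosen core's weight up on success
    (reward > 0) or down on failure, then renormalize so the weights
    still sum to 1: one core rising lowers all the others."""
    new = dict(weights)
    new[chosen_core] = max(0.01, new[chosen_core] + rate * reward)
    total = sum(new.values())
    return {core: w / total for core, w in new.items()}

weights = {"survival": 0.5, "fun": 0.5}
weights = reweight(weights, "survival", reward=1.0)   # seeking light paid off
```

The renormalization is what implements the correlated-weights rule from earlier: rewarding one core automatically shifts priority away from the rest.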

mel script for correctiveBlendShapes

In facial rigging, corrective blend shapes can become a very tedious and time-consuming process. Sometimes it takes more effort to arrange and connect the blend shapes than to actually sculpt them. So I wrote a script to standardize the process and make it easier.

pose.jpg

Limitations: In the current script, the target geometry should not have any outputs; for now the workaround is disconnecting the outputs before creating or editing shapes. Also, being able to create a corrective for more than two target shapes would be better. Finally, correctives for mid weights will be added in a future version.

Demo: from 4:25 to 5:20

 

How it works

1-select the base geometry with one blendShape node
2-update the UI
3-select the first bsSource
4-select the second bsSource
5-press the create/edit button for pose correction
6-press done
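The math behind a corrective shape is simple: store only what the hand-sculpted combined pose adds on top of base + A + B. A sketch on raw vertex components (the data and target names here are made up; the actual script operates on Maya blendShape nodes):

```python
def corrective_delta(base, shape_a, shape_b, sculpt):
    """Return the per-component offset the corrective target must hold so
    that base + A + B + corrective reproduces the sculpted pose."""
    def delta(pose):
        return [p - b for p, b in zip(pose, base)]
    da, db, ds = delta(shape_a), delta(shape_b), delta(sculpt)
    return [s - a - b for s, a, b in zip(ds, da, db)]

base   = [0.0, 0.0, 0.0]
jaw    = [0.0, -1.0, 0.0]        # hypothetical target A
smile  = [0.5, 0.25, 0.0]        # hypothetical target B
sculpt = [0.75, -0.5, 0.125]     # hand-fixed combined pose
fix = corrective_delta(base, jaw, smile, sculpt)
```

Because the corrective stores only the residual, it can be driven by the product of the two target weights and stays silent when either target is off.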


secondaryAnim script

Lately I had a project about growing plants in a VR environment. With the help of any L-system-based software it is a fairly easy task; however, the platform was Unity, and Unity can't import meshes with changing vertex counts. All animation had to be joint- or blendShape-based. So I updated one of my old scripts, secondaryAnim.

The script basically takes the first selected object's animation and transfers it to the rest of the hierarchy with delay, conserve, and damping inputs: a kind of fake dynamics. A simple script, but very helpful. It can also be used as an animation randomization tool.
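The core idea can be sketched on plain (frame, value) keys. The parameter names match the inputs mentioned above, but the falloff formula is my assumption about how such a tool typically works, not the script's exact math:

```python
def secondary_anim(leader_keys, chain_depth, delay=2, conserve=0.9, damp=0.8):
    """Copy the leader's animation down the hierarchy: each level is
    shifted `delay` frames later, scaled by `conserve`, and damped
    further the deeper it sits, giving a cheap follow-through effect."""
    curves = []
    for level in range(1, chain_depth + 1):
        scale = conserve * damp ** (level - 1)
        curves.append([(frame + delay * level, value * scale)
                       for frame, value in leader_keys])
    return curves

leader = [(1, 0.0), (10, 30.0), (20, 0.0)]   # e.g. a sway keyed on the root joint
children = secondary_anim(leader, chain_depth=2)
```

Randomizing `delay` or `damp` slightly per level is what turns the same transfer into an animation randomization tool.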

script Link: