Web Crawler or Scraper

Citation

Abstract

Between continuous optimization (back-propagation with gradient descent) and discrete optimization (evolutionary algorithms, reinforcement learning via the Bellman equation, dynamic programming), are there other algorithms?
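For reference, a minimal sketch (my own, not part of the original note) of the two endpoints on a toy quadratic objective; the function names and hyper-parameters are illustrative only:

```python
# Hypothetical sketch: continuous gradient descent vs. a simple
# population-based (discrete-style) search on f(x) = ||x - 1||^2.
import numpy as np

def f(x):
    """Toy objective: quadratic bowl with minimum at x = 1."""
    return np.sum((x - 1.0) ** 2)

def grad_f(x):
    """Analytic gradient, standing in for back-propagation."""
    return 2.0 * (x - 1.0)

def continuous_gd(x0, lr=0.1, steps=200):
    """Continuous extreme: follow the gradient."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def discrete_evolution(x0, pop=32, sigma=0.3, steps=200, rng=None):
    """Discrete-style extreme: sample a population, keep the best (no gradient)."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(steps):
        candidates = x + sigma * rng.standard_normal((pop, x.size))
        x = min(list(candidates) + [x], key=f)   # elitist: never discard the best
    return x

if __name__ == "__main__":
    x0 = np.array([5.0, -3.0])
    print("gradient descent :", continuous_gd(x0))
    print("evolution search :", discrete_evolution(x0))
```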

How about learning from quantum mechanics, with its uncertain (mixed) multi-states?

The prerequisites for quantum-mechanical multi-states are: (1) the states are mutually orthogonal (independent); (2) the set of states is complete (no missing information).
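In standard bra-ket notation (textbook quantum mechanics, not specific to this note), the two prerequisites read:

```latex
\langle i \mid j \rangle = \delta_{ij}
\quad \text{(orthogonality: states are independent)},
\qquad
\sum_i \lvert i \rangle\langle i \rvert = I
\quad \text{(completeness: no missing information)}.
```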

We can still have a quasi-multi-state optimization: (1) make the states as independent as possible (minimize inter-state dependency, measured by some KPI?); (2) make the information as complete as possible (or ignore some unimportant states).
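One hedged way to make the quasi-multi-state idea concrete (my interpretation, not the note's method): keep a probability distribution over K discrete states as the "mixed state" and run continuous gradient descent on its logits, estimating the gradient of the expected cost with a REINFORCE-style score-function estimator. All names and numbers below are illustrative:

```python
# Hypothetical sketch: optimize a "mixed state" (probability vector over
# discrete states) by gradient descent on its logits.
import numpy as np

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def optimize_mixed_state(values, steps=500, lr=0.5, samples=64, seed=0):
    """values[k] = cost of discrete state k (lower is better).
    Returns the final probability vector over states."""
    rng = rng = np.random.default_rng(seed)
    K = len(values)
    logits = np.zeros(K)                       # start from a uniform mixed state
    for _ in range(steps):
        p = softmax(logits)
        ks = rng.choice(K, size=samples, p=p)  # sample discrete states
        costs = values[ks]
        baseline = costs.mean()                # variance reduction
        grad = np.zeros(K)
        for k, c in zip(ks, costs):
            score = -p                         # d log p[k] / d logits = onehot(k) - p
            score[k] += 1.0
            grad += (c - baseline) * score
        grad /= samples
        logits -= lr * grad                    # descend the expected cost
    return softmax(logits)

if __name__ == "__main__":
    values = np.array([3.0, 1.0, 0.2, 2.5])    # hypothetical costs of 4 states
    print(optimize_mixed_state(values))        # mass should concentrate on state 2
```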

How? (1) At one extreme, very close to the continuous case: a perturbation (?) of back-propagation with gradient descent. (2) At the other extreme (fully discrete): maximum entropy? Or use a variational method (integration) to replace back-propagation (differentiation)?
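A hedged sketch of these two extremes (illustrative only; the maximum-entropy variant could be layered onto the mixed-state sketch above by adding an entropy bonus to the expected cost): (1) a perturbation-based, SPSA-style gradient estimate, and (2) a Gaussian-smoothed "variational" estimate, where the gradient of an expectation (an integral over perturbations) replaces differentiating f directly:

```python
# Hypothetical sketch of the two extremes on a non-smooth toy objective.
import numpy as np

def f(x):
    """Toy objective, non-differentiable at the optimum."""
    return np.sum(np.abs(x - 1.0))

def spsa_grad(x, c=1e-2, rng=None):
    """(1) Perturbation end: simultaneous-perturbation estimate from one +/- pair."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + c * delta) - f(x - c * delta)) / (2 * c) * delta

def smoothed_grad(x, sigma=0.1, samples=64, rng=None):
    """(2) Variational end: gradient of E_eps[f(x + sigma*eps)], eps ~ N(0, I),
    via the score-function identity grad = E[f(x + sigma*eps) * eps] / sigma."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((samples,) + x.shape)
    fx = np.array([f(x + sigma * e) for e in eps])
    fx = fx - fx.mean()                        # baseline for variance reduction
    return (fx[:, None] * eps).mean(axis=0) / sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for name, g in [("perturbation", spsa_grad), ("variational", smoothed_grad)]:
        x = np.array([4.0, -2.0])
        for _ in range(300):
            x -= 0.05 * g(x, rng=rng)
        print(name, "->", np.round(x, 2))      # both should end up near x = 1
```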