Starting this week, I will study reinforcement learning every Monday, using Sutton's Reinforcement Learning as the textbook. The plan is roughly one section per week, so it should take a little over two months to finish; anyone interested is welcome to follow along. As an opener, today I will share some basic concepts and historical background, and at the end try to use reinforcement learning to train an agent for the game of tic-tac-toe. Today's content corresponds to Chapter 1 of the book.
Reinforcement Learning
When we think about how humans actually learn, the first thing that comes to mind is probably learning by interacting with the environment. When an infant waves its arms or looks around, it makes a rich variety of sensory connections with its surroundings. From these connections, a person's memory accumulates a wealth of information about cause and effect, about the consequences of actions, and about how to achieve goals. Throughout our lives, such interaction is undoubtedly our main source of knowledge about our environment and ourselves. Whether we are learning to drive a car or to hold a conversation, we are acutely aware of how the environment responds to what we do, and we try to influence what happens through our behavior. Learning from interaction is a foundational idea underlying nearly all theories of learning and intelligence. The authors explore a computational approach to learning from interaction: rather than theorizing directly in a biomimetic fashion, they study idealized learning situations and evaluate the effectiveness of various learning methods. This approach is called reinforcement learning; compared with other machine learning methods, it focuses on goal-directed learning from interaction.
The reinforcement learning problem is about teaching an agent what to do: how to choose its next action, given the current state of the environment and of itself, so as to maximize the reward it receives from the environment.
- The decision the agent makes now changes the state of the environment and of the agent itself, and therefore affects what happens next.
- The agent is not told which actions to take; instead, it must discover by trial and error which actions yield the greatest reward.
- The current action can affect the agent's and the environment's state many time steps later, and thereby all subsequent rewards received from the environment.
These are the three most important characteristics of the reinforcement learning problem!
Any method well suited to solving this kind of problem, we consider a reinforcement learning method. Reinforcement learning differs from supervised learning, which learns from a body of labeled examples, each describing a situation together with the correct action the agent should take in it. The goal of such learning is for the agent to infer or generalize its responses so that it behaves correctly in situations outside the training set. This is an important kind of learning, but by itself it is not sufficient for learning from interaction. In interactive problems it is often impractical to obtain the optimal action for every state; we typically need many trials before we know how good the choices in each state are, which means that in interactive problems we must learn from our own experience. Reinforcement learning also differs from unsupervised learning, which is usually about finding structure hidden in collections of unlabeled data. The terms supervised and unsupervised learning may seem to cover all of machine learning, but they do not. One might be tempted to regard reinforcement learning as a kind of unsupervised learning because it does not rely on labeled training data, but to be clear, reinforcement learning tries to maximize reward rather than to find hidden structure.
One of the challenging problems that arises in reinforcement learning is the trade-off between explore and exploit. To obtain a larger reward, a reinforcement learning agent prefers actions it has tried in the past and found to be effective in producing reward. But to discover such actions, it has to try actions it has never selected before. The agent must exploit what it already knows in order to obtain reward, but it must also explore to some degree in order to make better choices in the future.
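A common way to implement this trade-off is an ε-greedy rule: explore with a small probability, exploit otherwise. A minimal sketch (the function name and the toy values below are illustrative, not from the book):

```python
import random

def epsilon_greedy(action_values, epsilon=0.1, rng=random):
    """Pick a random action with probability epsilon (explore),
    otherwise the action with the highest estimated value (exploit)."""
    actions = list(action_values)
    if rng.random() < epsilon:
        return rng.choice(actions)                   # explore
    return max(actions, key=action_values.get)       # exploit

# With epsilon = 0 the rule is purely greedy:
print(epsilon_greedy({'a': 0.2, 'b': 0.9}, epsilon=0.0))  # → b
```

With epsilon = 0 the agent never tries anything new; with epsilon = 1 it ignores everything it has learned. Training typically sits somewhere in between.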
One of the most exciting aspects of modern reinforcement learning is its substantive and fruitful interaction with other engineering and scientific disciplines. For example, some reinforcement learning methods use parameterized approximation to address the classic "curse of dimensionality" in operations research and control theory; reinforcement learning has also interacted strongly with psychology and neuroscience, with substantial benefits flowing both ways. Of all forms of machine learning, reinforcement learning is the closest to the kind of learning that humans and other animals do, and many of its core algorithms were originally inspired by biological learning.
Examples
Some real-life examples and applications help to make reinforcement learning concrete.
- A chess player decides each move by anticipating possible continuations and by making intuitive judgments about the desirability of particular positions and moves.
- An adaptive controller adjusts the operating parameters of a petroleum refinery in real time, optimizing the yield/cost/quality trade-off according to a given cost function rather than strictly sticking to the set points originally suggested by engineers.
- A gazelle calf struggles to its feet within minutes of being born; half an hour later it can run at 20 miles per hour.
- A cleaning robot decides whether to enter a new room to collect more trash or to head back because its battery is low. It makes this decision based on the current battery charge and its past experience finding the charger.
Consider Phil preparing his breakfast. Even this seemingly mundane activity reveals a complex web of conditional behavior and interlocking goals and subgoals: walking to the cupboard, opening it, selecting a cereal box, then reaching for, grasping, and retrieving the box. Other complex, tuned, interactive sequences of behavior are required to obtain a bowl, a spoon, and the milk jug. Each step involves a series of eye movements to obtain information and guide the movement of body parts, along with rapid judgments about how to carry the objects, or whether it is better to ferry some of them to the table before fetching others. Each step is goal-directed and in the service of other goals, such as eating with the spoon once the cereal is prepared and ultimately obtaining nourishment. Whether he is aware of it or not, Phil is accessing information about the state of his body that determines his nutritional needs, his level of hunger, and his food preferences; together with information about his surroundings, this determines the decisions he makes throughout this sequence of events.
A striking feature of these examples is that they all involve interaction between an actively decision-making agent and its environment, within which the agent seeks to achieve a goal despite uncertainty about the environment. The agent's actions affect the future state of the environment (e.g., the next chess position, the refinery's reservoir levels, the robot's next location and future battery charge), and thereby the options available to the agent later on. A correct choice requires taking into account the delayed consequences of actions, and may therefore require some long-term planning. At the same time, in all of these examples the effects of actions cannot be fully predicted, so the agent must monitor its environment in real time and react appropriately. For example, Phil must keep an eye on the milk he pours into his bowl to keep it from overflowing. All of these examples also involve explicit goals: the chess player knows whether or not he wins, the refinery controller knows how much petroleum is being produced, and so on.
Elements of Reinforcement Learning
Beyond the agent and the environment, reinforcement learning has four main elements: a policy, a reward signal, a value function, and, optionally, a model of the environment.
policy
A policy defines the agent's decision in every state: it is a mapping from perceived states of the environment to the actions to be taken in those states. In some cases the policy may be a simple function or lookup table, while in others it may involve extensive computation, such as a search process. The policy is the core of a reinforcement learning agent, in the sense that it alone is sufficient to determine the agent's behavior.
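In the simplest tabular case, a policy really is just a lookup table. A toy sketch (the states and actions below are invented purely for illustration):

```python
# A tabular policy: a plain mapping from (hashable) states to actions.
policy = {
    'hungry': 'eat',
    'tired': 'sleep',
}

def act(policy, state, default='wait'):
    """Deterministic tabular policy: look the state up, fall back to a default."""
    return policy.get(state, default)

print(act(policy, 'hungry'))  # → eat
```

More complex policies replace the dictionary with a function that computes the action, for example by searching over successor states, but the interface stays the same: state in, action out.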
reward
The reward signal defines the goal of a reinforcement learning problem. At each time step, the environment sends the agent a single number: the reward. The agent's objective is to maximize the total reward it receives over the long run. The reward signal thus defines what counts as a good or bad event. In a biological system, we might think of rewards as analogous to experiences of pleasure or pain. The reward sent to the agent depends on the current action and the current state of the environment, and the only way the agent can influence the reward signal is through its decisions. In the breakfast example above, the reinforcement learning agent guiding Phil's behavior might receive different reward signals as he eats, depending on his hunger level, mood, and so on. The reward signal is the primary basis for altering the policy: if an action selected by the policy yields a low reward, the policy may be changed to select some higher-yielding action instead.
value\ function
The value function influences the current decision by maximizing long-run return, which is its biggest difference from the reward mechanism. For example, taking action a_1 in the current state may bring a large immediate reward but lead to a poor state from which little further reward can be obtained. The value function exists precisely for such cases: it indicates which states are good states in the long run.
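A tiny numerical sketch of this point, with made-up numbers: if we score an action by its immediate reward plus the value of the state it leads to, a large reward can still lose to a modest one.

```python
# Hypothetical numbers, only to illustrate reward vs. value.
# a_1 gives reward 10 but leads to a state worth -8 in the long run;
# a_2 gives reward 2 but leads to a state worth 5.
def action_score(immediate_reward, next_state_value):
    """Immediate reward plus the long-run value of the successor state."""
    return immediate_reward + next_state_value

a1 = action_score(10, -8)  # 2
a2 = action_score(2, 5)    # 7
print(a1, a2)  # a_2 is the better choice despite the smaller reward
```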
The optional environment model determines how the environment changes when our agent acts. Taking chess as an example, different opponents effectively constitute different environments: playing against a random opponent and playing against a greedy one yield entirely different results.
In summary, once the environment model is fixed, the reward signal and the value function work together to optimize the policy.
Tic-Tac-Toe
Now let us try to train a reinforcement learning agent to play tic-tac-toe; note that I am training a first-player agent here. Tic-tac-toe is a solved game: perfect play by both sides ends in a draw, so the best a first player can guarantee is to never lose, while beating any imperfect opponent. Let us see whether our agent can discover such a strategy on its own.

There are several possible design perspectives: we could build everything from the Agent's point of view, or treat the Agent as part of the environment; here I take the latter approach.
Defining the environment
The environment here is simply the tic-tac-toe board; during a game the two players take turns making decisions.
# B, X, O and the MODE / MATCH / TRAIN / DEBUG constants, as well as the
# Player base class, are defined elsewhere in the full source.
class TikTacToe():
    def __init__(self, p1: Player, p2: Player) -> None:
        self.state = [B] * 9
        self.p1 = p1
        self.p2 = p2
        return

    def run(self):
        # The two players move alternately; p1 is the RL agent we are
        # training, and it moves first.
        while True:
            action_p1, new_state = self.p1.take_action(self.state.copy())
            self.state = new_state
            if self.is_termination():
                self.p1.informed_win(self.state)
                self.p2.informed_lose(self.state)
                break
            if self.no_winner():
                self.p1.informed_draw(self.state)
                self.p2.informed_draw(self.state)
                break
            action_p2, new_state = self.p2.take_action(self.state.copy())
            if not MODE == MATCH:
                # Let p1 also learn from the opponent's move.
                self.p1.update_value_function(self.state, new_state)
            self.state = new_state
            if self.is_termination():
                self.p2.informed_win(self.state)
                self.p1.informed_lose(self.state)
                break
        return

    def is_termination(self):
        # Whether the current state is terminal: three identical marks in a line.
        return

    def no_winner(self):
        # Whether the game is a draw: board full with no three-in-a-row.
        return

    def visualize_board(self, message):
        # Render the board.
        return
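The two predicate methods above are left as stubs. One possible implementation, assuming the board is a flat list of 9 cells with B marking an empty cell (B is defined below only so this sketch runs standalone; the real constant lives in the repo), might look like:

```python
# All 8 three-in-a-row index triples on a 3x3 board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

B = ' '  # assumed empty-cell marker, for this standalone sketch only

def is_termination(state):
    """True if some player has three identical marks in a line."""
    return any(state[i] != B and state[i] == state[j] == state[k]
               for i, j, k in LINES)

def no_winner(state):
    """True if the board is full and nobody has won (a draw)."""
    return B not in state and not is_termination(state)
```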
The optional environment model
Here we implement opponents with different policies, for example an opponent that plays randomly, or you, sitting at the keyboard.
import random as rd

class RandomPlayer(Player):
    def __init__(self, label=X) -> None:
        self.label = label
        return

    def take_action(self, state):
        '''Take a random legal action; return the action and the next state.'''
        pos = rd.randint(0, 8)
        while state[pos] in [X, O]:
            pos = rd.randint(0, 8)
        state[pos] = self.label
        return pos, state.copy()


class HumanPlayer(Player):
    def __init__(self, label) -> None:
        self.label = label
        return

    def take_action(self, state: List[int]):
        pos = int(input('Please input a position: '))
        while state[pos] in [X, O]:
            pos = int(input('Occupied! Input again: '))
        state[pos] = self.label
        return pos, state
The reinforcement learning agent
To implement the reinforcement learning agent, we need to make some choices for the reward and the value function. I specify that the Agent receives r = 1 for a win, r = 0.5 for a draw, and r = -1 for a loss. The value function is updated by V(s) = V(s) + \alpha[V(s') - V(s)], which means that if the successor state s' is better than the current state s, the value of the current state is increased. Since every state other than the terminal ones has a reward of 0, this behaves much like dynamic programming: the value function is updated backwards from the terminal states, one layer at a time.
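To see how this update propagates terminal values backwards, here is a toy trace over a hypothetical chain of states s0 → s1 → win, with the win state pinned at 1 and a step size of 0.2:

```python
# Toy illustration of V(s) <- V(s) + alpha * (V(s') - V(s)).
# The states s0, s1 and terminal 'win' are hypothetical.
V = {'s0': 0.0, 's1': 0.0, 'win': 1.0}  # terminal value pinned at 1

def update(V, s, s_next, alpha=0.2):
    """One backup step of the tabular value-function update."""
    V[s] += alpha * (V[s_next] - V[s])

update(V, 's1', 'win')   # s1 moves toward the terminal value: 0.2
update(V, 's0', 's1')    # s0 only moves a little: 0.04
# Over many repeated games the credit flows all the way back:
for _ in range(50):
    update(V, 's1', 'win')
    update(V, 's0', 's1')
print(round(V['s1'], 3), round(V['s0'], 3))  # both approach 1.0
```

After one game only the state just before the terminal one has moved much; after many games the early states catch up, which is exactly the layer-by-layer backward propagation described above.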
class ReinfrocementLearningPlayer(Player):
    def __init__(self, label=X, value_function_path='q_table.pkl') -> None:
        self.label = label
        self.value_function_path = value_function_path
        try:
            with open(value_function_path, 'rb') as f:
                self.value_function = pickle.load(f)
        except (FileNotFoundError, EOFError):
            self.value_function = {}
        return

    def set_env(self, game_env: TikTacToe):
        self.env = game_env
        self.value_function[self._hash(self.env.state)] = 0
        return

    def take_action(self, state):
        # get successors
        candidates = self._get_successors(state)
        candidates_value_function = [
            self.value_function[self._hash(a_s[1])] for a_s in candidates]
        if MODE == DEBUG:
            print(candidates)
            print(candidates_value_function)
        # policy: exploit with probability exploit_rate, explore otherwise;
        # never explore outside of training
        exploit_rate = 0.8 if MODE == TRAIN else 1
        if rd.random() < exploit_rate:
            action, next_state = candidates[candidates_value_function.index(
                max(candidates_value_function))]
        else:
            action, next_state = candidates[rd.randint(0, len(candidates) - 1)]
        # update value function
        if not MODE == MATCH:
            self.update_value_function(state, next_state)
        return action, next_state

    def update_value_function(self, state, next_state):
        self.value_function.setdefault(self._hash(state), 0)
        self.value_function.setdefault(self._hash(next_state), 0)
        alpha = 0.2  # the step size in V(s) <- V(s) + alpha * (V(s') - V(s))
        self.value_function[self._hash(state)] += alpha * (
            self.value_function[self._hash(next_state)]
            - self.value_function[self._hash(state)])
        return

    def informed_win(self, final_state):
        self.value_function[self._hash(final_state)] = 1
        self.ending()
        return

    def informed_lose(self, final_state):
        self.value_function[self._hash(final_state)] = -1
        self.ending()
        return

    def informed_draw(self, final_state):
        self.value_function[self._hash(final_state)] = 0.5
        self.ending()
        return

    def ending(self):
        # persist the value function after every game
        with open(self.value_function_path, 'wb') as f:
            pickle.dump(self.value_function, f)
        with open('q_table.json', 'w') as f:
            json.dump(self.value_function, f)
        return

    def _get_successors(self, state: List):
        self.value_function.setdefault(self._hash(state), 0)
        ret = []
        for i in range(9):
            if state[i] == B:
                tmp = state.copy()
                tmp[i] = self.label
                ret.append((i, tmp))
                self.value_function.setdefault(self._hash(tmp), 0)
        return ret

    def _hash(self, state: List):
        return ','.join(state)
The full implementation is available at git@github.com:21S003018/Tik-Tak-Toe.git.
Training against a random agent
Before training begins, let us see what the Agent's win rate is in its initial state, measured over 100 games:
{'win': 67, 'draw': 12, 'lose': 21}
The win rate is about 67%, with 12 draws and 21 losses, which shows that the first-mover advantage in tic-tac-toe is quite significant.
Next, I report the Agent's win rate after each additional batch of training games, to observe how the Agent changes:
1000: {'win': 75, 'draw': 21, 'lose': 4}
2000: {'win': 79, 'draw': 20, 'lose': 1}
4000: {'win': 87, 'draw': 12, 'lose': 1}
...
10000: {'win': 97, 'draw': 3, 'lose': 0}
One caveat: the Agent's training code has a parameter (exploit_rate in the code) controlling the ε-greedy policy. It must be set to 1 when testing, meaning the Agent only exploits and never explores during evaluation; otherwise the measured win rate will always hover around that exploitation probability.
As the numbers show, after 10000 games of training the Agent can essentially always beat the random agent. But that holds only against the random agent: against an opponent with a greedy policy, its win rate drops immediately. In fact, the Agent's strength is determined by its opponent: the stronger the opponent, the stronger the trained Agent.
Summary
Going forward, I will use tic-tac-toe as a running example to try out various reinforcement learning algorithms and stronger opponents, hoping that these simple examples convey the core idea behind each algorithm. Next Monday's topic is Multi-armed Bandits; stay tuned!