
Configuring and using the fair-scheduler in hadoop1 and hadoop2

Posted by tntzbzc on 2015-8-28 20:12:55
Last edited by tntzbzc on 2015-8-28 20:15



hadoop1
  • Add the following to mapred-site.xml


[mw_shl_code=xml,true]<property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
    <name>mapred.fairscheduler.allocation.file</name>
    <value>/etc/hadoop/conf/pools.xml</value>
</property>[/mw_shl_code]
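After restarting the JobTracker, the FairScheduler adds a status page to the JobTracker web UI, which is a quick way to confirm the scheduler actually loaded. The hostname below is a placeholder for your cluster; 50030 is the default JobTracker web port.

```shell
# The Hadoop 1 FairScheduler serves its pool status at the /scheduler
# servlet of the JobTracker web UI (default port 50030).
# Replace "jobtracker-host" with your JobTracker's hostname.
curl -s http://jobtracker-host:50030/scheduler | head
```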

Add the following to pools.xml

[mw_shl_code=xml,true]<?xml version="1.0"?>
<allocations>
  <queue name="default">
    <minResources>1024 mb, 1 vcores</minResources>
    <maxResources>61440 mb, 20 vcores</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>

  <queue name="hadoop">
    <minResources>1024 mb, 10 vcores</minResources>
    <maxResources>3072000 mb, 960 vcores</maxResources>
    <maxRunningApps>60</maxRunningApps>
    <weight>5.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <aclSubmitApps>hadoop,yarn,spark</aclSubmitApps>
  </queue>

  <queue name="spark">
    <minResources>1024 mb, 10 vcores</minResources>
    <maxResources>61440 mb, 20 vcores</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <weight>4.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <aclSubmitApps>yarn,spark</aclSubmitApps>
  </queue>

  <userMaxAppsDefault>20</userMaxAppsDefault>
</allocations>[/mw_shl_code]

To submit a job to a specific queue, pass:
[mw_shl_code=bash,true]-Dmapred.job.queue.name=hadoop[/mw_shl_code]
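For example, a complete MRv1 submission against the hadoop queue might look like the following; the jar name and input/output paths are placeholders for your own job.

```shell
# Submit a job to the "hadoop" queue under MRv1; generic -D options must
# come right after the class name, before the program's own arguments.
hadoop jar hadoop-examples.jar wordcount \
  -Dmapred.job.queue.name=hadoop \
  /user/hadoop/input /user/hadoop/output
```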

hadoop2
  • Add the following to yarn-site.xml


[mw_shl_code=xml,true]<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>

<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>/home/cluster/conf/hadoop/fair-scheduler.xml</value>
</property>

<property>
  <name>yarn.scheduler.fair.user-as-default-queue</name>
  <!-- Defaults to true, which uses each user's name as their default queue.
       If you do not want per-user queues, you must explicitly set this to false. -->
  <value>false</value>
</property>[/mw_shl_code]
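Once the ResourceManager is restarted, the active scheduler and its queues are visible on the RM web UI's scheduler page. The hostname below is a placeholder; 8088 is the default ResourceManager web port.

```shell
# The ResourceManager web UI exposes the scheduler view at
# /cluster/scheduler (default port 8088).
# Replace "rm-host" with your ResourceManager's hostname.
curl -s http://rm-host:8088/cluster/scheduler | head
```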

Add the following to fair-scheduler.xml

[mw_shl_code=xml,true]<?xml version="1.0"?>
<allocations>
  <queue name="default">
    <minResources>1024 mb, 1 vcores</minResources>
    <maxResources>61440 mb, 20 vcores</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>

  <queue name="hadoop">
    <minResources>1024 mb, 10 vcores</minResources>
    <maxResources>3072000 mb, 960 vcores</maxResources>
    <maxRunningApps>60</maxRunningApps>
    <weight>5.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <aclSubmitApps>hadoop,yarn,spark</aclSubmitApps>
  </queue>

  <queue name="spark">
    <minResources>1024 mb, 10 vcores</minResources>
    <maxResources>61440 mb, 20 vcores</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <weight>4.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <aclSubmitApps>yarn,spark</aclSubmitApps>
  </queue>

  <userMaxAppsDefault>20</userMaxAppsDefault>
</allocations>[/mw_shl_code]
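The FairScheduler reloads the allocation file periodically, so editing fair-scheduler.xml usually does not require a ResourceManager restart. A quick way to check that the queues were picked up is from the command line:

```shell
# List the queues the ResourceManager currently knows about; with the
# configuration above this should include root.default, root.hadoop
# and root.spark.
mapred queue -list
```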

To submit a job to a specific queue, pass:
[mw_shl_code=bash,true] -Dmapreduce.job.queuename=root.hadoop[/mw_shl_code]
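A complete MRv2 submission might look like the following; the jar name and paths are placeholders. Note that the property name changed between versions, from mapred.job.queue.name to mapreduce.job.queuename, and that YARN queue names are rooted at root.

```shell
# Submit a job to the root.hadoop queue under YARN/MRv2.
hadoop jar hadoop-mapreduce-examples.jar wordcount \
  -Dmapreduce.job.queuename=root.hadoop \
  /user/hadoop/input /user/hadoop/output
```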


spark
  • To submit a job to a specific queue, pass:

[mw_shl_code=bash,true]--queue=root.spark[/mw_shl_code]
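In context, a full spark-submit invocation against the spark queue could look like this; the application class and jar are placeholders.

```shell
# spark-submit takes the queue as its own flag when running on YARN;
# "--queue root.spark" (with a space) works as well.
spark-submit --master yarn \
  --queue=root.spark \
  --class org.apache.spark.examples.SparkPi \
  spark-examples.jar 100
```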





