Vipshop Saturn executor gets no sharding items; job execution reports the executor is empty
Vipshop Saturn's open-source documentation has quite a few errors. The executor getting no sharding items was another case of a misconfiguration caused by unclear documentation.
After starting the executor and the demo job according to the demo files, the job never executed. Tracing through the code showed that in public abstract class AbstractElasticJob, the method public final void execute(final Triggered triggered) returns early at the following code:
if (shardingContext.getShardingItems() == null || shardingContext.getShardingItems().isEmpty()) {
    LogUtils.debug(log, jobName, "{} 's items of the executor is empty, do nothing about business.",
            jobName);
    callbackWhenShardingItemIsEmpty(shardingContext);
    return;
}
The code above concludes that the executor's items are empty, so the job is never run. Further tracing led to public class ExecutionContextService:
public List<Integer> getShardingItems() {
    List<Integer> shardingItems = shardingService.getLocalHostShardingItems();
    boolean isEnabledReport = configService.isEnabledReport();
    if (configService.isFailover() && isEnabledReport) {
        List<Integer> failoverItems = failoverService.getLocalHostFailoverItems();
        if (!failoverItems.isEmpty()) {
            return failoverItems;
        } else {
            return shardingItems;
        }
    } else {
        return shardingItems;
    }
}
Its very first call, getLocalHostShardingItems, already returns an empty list. getLocalHostShardingItems actually reads the sharding information from ZooKeeper:
public static String getShardingNode(final String executorName) {
return String.format(SERVER_SHARDING, executorName);
}
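The executor's sharding items live under a ZooKeeper node whose path is produced by filling the executor name into a format template, as getShardingNode above shows. A minimal sketch of that mechanism follows; note that the template string below is a made-up placeholder, not Saturn's actual SERVER_SHARDING constant:

```java
public class ShardingNodePathDemo {
    // Placeholder template: the real SERVER_SHARDING constant is defined inside
    // Saturn; only its general String.format shape is assumed here.
    static final String SERVER_SHARDING = "servers/%s/sharding";

    // Mirrors getShardingNode(executorName) from the post: substitute the
    // executor name into the path template.
    static String getShardingNode(String executorName) {
        return String.format(SERVER_SHARDING, executorName);
    }

    public static void main(String[] args) {
        System.out.println(getShardingNode("executor-001"));
    }
}
```

If this node is missing or empty in ZooKeeper, getLocalHostShardingItems returns nothing and the early-return in AbstractElasticJob.execute fires.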
At the same time, whenever the problem occurred, the executor count on the console dashboard also stayed at 0, so I turned to the console's configuration, suspecting that CONSOLE_ZK_CLUSTER_MAPPING was wrong. Following the documentation, it was initially set to default:/192.168.157.130, where "192.168.157.130" is the zk cluster name. Reading the code led to the key class public class RegistryCenterServiceImpl; the code that decides whether this console manages a given ZooKeeper cluster is:
/**
 * Determine whether this cluster can be computed by this console
 */
private boolean isZKClusterCanBeComputed(String clusterKey) {
    if (CollectionUtils.isEmpty(restrictComputeZkClusterKeys)) {
        return false;
    }
    return restrictComputeZkClusterKeys.contains(clusterKey);
}
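To see why the dashboard showed zero executors, consider how this membership check behaves once the restricted key set is populated from the mapping. A minimal sketch, using a plain HashSet in place of the console's actual restrictComputeZkClusterKeys field:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ZkClusterCheckDemo {
    // Simplified stand-in for isZKClusterCanBeComputed in RegistryCenterServiceImpl
    static boolean isZKClusterCanBeComputed(Set<String> restrictKeys, String clusterKey) {
        if (restrictKeys.isEmpty()) {
            return false; // nothing configured for this console: compute nothing
        }
        return restrictKeys.contains(clusterKey);
    }

    public static void main(String[] args) {
        // The mapping "default:/192.168.157.130" stores the key WITH the slash...
        Set<String> keys = new HashSet<>(Collections.singleton("/192.168.157.130"));
        // ...but the real zk cluster key has no slash, so the lookup fails
        System.out.println(isZKClusterCanBeComputed(keys, "192.168.157.130"));  // false
        System.out.println(isZKClusterCanBeComputed(keys, "/192.168.157.130")); // true
    }
}
```

Either failure mode (empty set, or a key that never matches) leaves the console managing zero clusters, hence zero executors on the dashboard.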
The configuration is loaded by the following code:
private void refreshRestrictComputeZkClusters() throws SaturnJobConsoleException {
    // clear the current list of zkClusters this console may compute
    restrictComputeZkClusterKeys.clear();
    String allMappingStr = systemConfigService.getValueDirectly(SystemConfigProperties.CONSOLE_ZK_CLUSTER_MAPPING);
    if (StringUtils.isBlank(allMappingStr)) {
        log.info(
                "CONSOLE_ZK_CLUSTER_MAPPING is not configured in sys_config, so all zk clusters can be computed by this console");
        restrictComputeZkClusterKeys.addAll(getZkClusterKeys());
        return;
    }
    allMappingStr = StringUtils.deleteWhitespace(allMappingStr);
    String[] singleConsoleMappingArray = allMappingStr.split(";");
    for (String singleConsoleMappingStr : singleConsoleMappingArray) {
        String[] consoleAndClusterKeyArray = singleConsoleMappingStr.split(":");
        if (consoleAndClusterKeyArray.length != 2) {
            throw new SaturnJobConsoleException(
                    "the CONSOLE_ZK_CLUSTER_MAPPING(" + Arrays.toString(consoleAndClusterKeyArray)
                            + ") format is not correct, should be like console_cluster_id:zk_cluster_id");
        }
        String tempConsoleClusterId = consoleAndClusterKeyArray[0];
        String zkClusterKeyStr = consoleAndClusterKeyArray[1];
        if (consoleClusterId.equals(tempConsoleClusterId)) {
            String[] zkClusterKeyArray = zkClusterKeyStr.trim().split(",");
            restrictComputeZkClusterKeys.addAll(Arrays.asList(zkClusterKeyArray));
            log.info("the current console cluster:{} can do sharding and dashboard to zk clusters:{}",
                    consoleClusterId, restrictComputeZkClusterKeys);
            return;
        }
    }
}
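Exercising this parsing logic in isolation shows where things go wrong: with the documented value default:/192.168.157.130, the extracted zk cluster key keeps its leading slash, so it can never match a cluster registered without one. A simplified, standalone sketch of the parse (error handling and logging reduced, method name hypothetical):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class ZkClusterMappingParseDemo {
    // Simplified parse of CONSOLE_ZK_CLUSTER_MAPPING for one console cluster id
    static Set<String> parseZkClusterKeys(String mapping, String consoleClusterId) {
        Set<String> keys = new LinkedHashSet<>();
        String cleaned = mapping.replaceAll("\\s", "");
        for (String single : cleaned.split(";")) {
            String[] parts = single.split(":");
            if (parts.length != 2) {
                throw new IllegalArgumentException(
                        "should be like console_cluster_id:zk_cluster_id");
            }
            if (consoleClusterId.equals(parts[0])) {
                // NOTE: the slash is copied into the key verbatim, never stripped
                keys.addAll(Arrays.asList(parts[1].split(",")));
                return keys;
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        // Documented config: key comes out as "/192.168.157.130", slash included
        System.out.println(parseZkClusterKeys("default:/192.168.157.130", "default"));
        // Corrected config: key matches the cluster name as registered
        System.out.println(parseZkClusterKeys("default:192.168.157.130", "default"));
    }
}
```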
There is no code that handles a leading "/" (perhaps Vipshop's internal zk cluster names all start with "/"?). After changing the configuration to default:192.168.157.130 and tracing again, the key code path also executed successfully:
private void createNamespaceShardingManager() in public class RegistryCenterServiceImpl
The job then executed correctly.
A screenshot to celebrate: