
Technical Design of an Invest-to-Redeem Campaign

Scenario

During the campaign, a user earns one redemption point for every full XX yuan (for example, every full 1,000 yuan) invested in designated wealth-management plans. Redemption points can then be exchanged for gifts.

1. Earning redemption points by investing

Note: amounts below 1,000 yuan must be carried over. For example, if a user makes two separate investments of 1,500 yuan each, the first earns 1 point and the second earns 2 points (the cumulative total reaches 3,000 yuan), for 3 points in all.
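The carry-over rule amounts to keeping a cumulative invested total per user and awarding the increase in floor(total / unit) on each investment. A minimal in-memory sketch (class and method names are illustrative; in production this state lives in Redis, which is what the investment_point.lua script in the appendix does atomically):

```java
/** Illustrative sketch: award one point per full 1,000 yuan of cumulative investment. */
public class PointAccumulator {
    private final long pointUnit;     // amount per point, e.g. 1000 yuan
    private long totalInvested = 0;   // cumulative invested amount
    private long totalPoints = 0;     // points already awarded

    public PointAccumulator(long pointUnit) {
        this.pointUnit = pointUnit;
    }

    /** Records an investment and returns the number of newly earned points. */
    public long invest(long amount) {
        totalInvested += amount;
        long newTotal = totalInvested / pointUnit;  // floor division over the running total
        long earned = newTotal - totalPoints;       // only the delta is newly awarded
        totalPoints = newTotal;
        return earned;
    }
}
```

With pointUnit = 1000, two investments of 1,500 yuan yield 1 point and then 2 points, matching the example above.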

2. Exchanging points for gifts

Gifts of several types (physical goods, virtual cards, red packets) are all laid out flat on the page. Physical goods and virtual cards have limited stock.
Once a gift is sold out, its button reads "All gone!" and is disabled.

Design Approach

Calculating a user's redemption points

Exchanging points for gifts

The main gift-exchange logic is as follows:

/**
 * @author Ricky Fung
 */
@Service
public class GiftExchangeService {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Resource(name = "giftService")
    private GiftService giftService;

    @Resource(name = "userService")
    private UserService userService;

    @Resource(name = "userPointService")
    private UserPointService userPointService;

    //lock client (assumed bean; it must provide getLock(String))
    @Resource(name = "distributedLockClient")
    private DistributedLockClient distributedLockClient;

    /**
     * Submit a gift-exchange request
     * @param activityId activity id
     * @param userId user id
     * @param giftId activity gift id
     * @param quantity quantity to exchange
     * @return whether the exchange succeeded
     */
    public ApiResult<Boolean> submit(Integer activityId, Long userId, Long giftId, int quantity) {
        logger.info("活动-兑换礼品, 用户userId:{}, activityId:{}, giftId:{}, quantity:{} 请求处理开始",
                userId, activityId, giftId, quantity);
        //1. Validate parameters
        if (giftId == null || giftId.longValue() < 1) {
            return ApiResult.buildInvalidParameterResult("giftId必须大于0");
        }
        if (quantity < 1) {
            return ApiResult.buildInvalidParameterResult("quantity必须大于0");
        }

        //2. Look up the gift the user wants to exchange
        ActivityGift activityGift = giftService.getGiftById(activityId, giftId);
        if (activityGift == null) {
            return ApiResult.buildFailureResult(1001, "礼品id不存在");
        }

        //3. Look up the user
        User user = userService.getUserById(userId);
        if (user == null) {
            return ApiResult.buildFailureResult(1002, "用户不存在");
        }

        //4.1 Pre-check the stock
        int count = giftService.getGiftStock(giftId);
        if (count < 1) {
            return ApiResult.buildFailureResult(1003, "礼品库存不足");
        }

        //Take a per-user distributed lock to serialize this user's requests
        String lockKey = String.format("%s:%s", GiftConstant.GIFT_EXCHANGE_USER_LOCK_KEY, userId);
        DistributedLock lock = distributedLockClient.getLock(lockKey);
        try {
            boolean success = lock.tryLock(0, 30, TimeUnit.SECONDS);
            if (!success) {
                logger.info("活动-兑换礼品, 用户userId:{} activityId:{}, giftId:{} 用户加锁失败",
                        userId, activityId, giftId);
                return ApiResult.buildFailureResult(1004, "请勿重复操作");
            }
            //Lock acquired
            try {
                //4.2 Check available points
                UserPoint userPoint = userPointService.getUserPoint(userId, activityId);
                if (userPoint == null) {
                    return ApiResult.buildFailureResult(1005, "对不起,您的兑换卡数量不足");
                }
                int requiredPoint = activityGift.getPointValue() * quantity;
                if (userPoint.getAvailablePoint().intValue() < requiredPoint) {
                    logger.info("活动-兑换礼品, 用户userId:{} activityId:{} 当前可用积分:{} 不满足兑换要求:{}",
                            userId, activityId, userPoint.getAvailablePoint(), requiredPoint);
                    return ApiResult.buildFailureResult(1005, "对不起,您的可用积分不足");
                }

                //5.1 Decrease the stock
                boolean updateSuccess = giftService.decreaseGiftStock(giftId, quantity);
                if (!updateSuccess) {
                    logger.info("活动-兑换礼品, 用户userId:{} activityId:{}, giftId:{} 更新礼品库存失败",
                            userId, activityId, giftId);
                    return ApiResult.buildFailureResult(1003, "礼品库存不足");
                }
                //5.2 Create the user's gift record
                ActivityUserGift record = giftService.createOrder(activityId, userId, activityGift, quantity);
                //5.3 Deduct available points and save the exchange record
                ApiResult<Long> result = giftService.saveUserGift(userPoint, requiredPoint, record);
                if (result.isSuccess()) {
                    Long id = result.getData();
                    logger.info("活动-兑换礼品, 用户userId:{} activityId:{}, giftId:{}, quantity:{} 兑换成功id:{}",
                            userId, activityId, giftId, quantity, id);

                    //Deliver the prize
                    sendActivityGift(userId, activityId, id);
                    return ApiResult.buildSuccessResult(Boolean.TRUE);
                }
            } finally {
                //Release the lock
                lock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  //restore the interrupt flag
            logger.error(String.format("活动-兑换礼品, 用户userId:%s, activityId:%s, giftId:%s, quantity:%s 加锁异常",
                    userId, activityId, giftId, quantity), e);
        } catch (UserPointShortageException e) {
            logger.error("活动-兑换礼品, 用户userId:{} activityId:{}, giftId:{}, quantity:{} 兑换失败-更新用户可用积分失败",
                    userId, activityId, giftId, quantity);
            return ApiResult.buildFailureResult(1005, "对不起,您的兑换卡数量不足");
        } catch (Exception e) {
            logger.error(String.format("活动-兑换礼品, 用户userId:%s activityId:%s, giftId:%s, quantity:%s 兑换异常",
                    userId, activityId, giftId, quantity), e);
        }
        return ApiResult.buildSystemErrorResult();
    }

    private void sendActivityGift(Long userId, Integer activityId, Long id) {
        //Send an MQ message to deliver the activity prize asynchronously
    }
}

The stock-related code in GiftService:

/**
 * @author Ricky Fung
 */
@Service
public class GiftService {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    private final DefaultRedisScript<Long> stockScript;

    public GiftService() {
        this.stockScript = new DefaultRedisScript<>();
        stockScript.setScriptSource(new ResourceScriptSource(new ClassPathResource("scripts/stock_decr.lua")));
        stockScript.setResultType(Long.class);
    }

    /**
     * Get the remaining stock of an activity gift
     * @param giftId gift id
     * @return remaining stock, never negative
     */
    public int getGiftStock(Long giftId) {
        String key = getActivityAwardStockKey(giftId);
        String str = stringRedisTemplate.opsForValue().get(key);
        int count = 0;
        if (StringUtils.isNotEmpty(str)) {
            count = Integer.parseInt(str);
            if (count < 0) {
                count = 0;
            }
        }
        return count;
    }

    /**
     * Decrease the stock of an activity gift
     * @param giftId gift id
     * @param quantity quantity to deduct
     * @return true if the stock was sufficient and has been deducted
     */
    public boolean decreaseGiftStock(Long giftId, int quantity) {
        String key = getActivityAwardStockKey(giftId);
        //A Lua script makes the check-then-decrement atomic
        Long update = stringRedisTemplate.execute(stockScript, Collections.singletonList(key), String.valueOf(quantity));
        logger.info("活动-兑换礼品, giftId:{} 减库存quantity:{} 后的结果:{}",
                giftId, quantity, update);
        //the script returns the remaining stock on success, -1 on insufficient stock
        return update != null && update >= 0;
    }

    private String getActivityAwardStockKey(Long giftId) {
        return String.format("%s:%s", "stock", giftId);
    }

}

The GiftService code that deducts a user's available points:

    @Transactional(rollbackFor = Exception.class)
    public ApiResult<Long> saveUserGift(UserPoint userPoint, int requiredPoint, ActivityUserGift record) {
        Long id = userPoint.getId();
        //1. Deduct the user's available points
        int update = userPointMapper.updateUserAvailablePoint(id, requiredPoint);
        if (update < 1) {
            throw new UserPointShortageException("可用积分不足");
        }
        //2. Insert the exchange record
        activityUserGiftMapper.insert(record);
        return ApiResult.buildSuccessResult(record.getId().longValue());
    }

    public ActivityGift getGiftById(Integer activityId, Long giftId) {
        return activityGiftMapper.findGiftById(giftId);
    }

    public ActivityUserGift createOrder(Integer activityId, Long userId, ActivityGift activityGift, int quantity) {
        ActivityUserGift userGift = new ActivityUserGift();
        //populate fields from activityGift, userId, and quantity
        return userGift;
    }

The SQL that deducts available points:

  <update id="updateUserAvailablePoint">
    update activity_user_point
    set available_point = available_point - #{requiredPoint}
    where id = #{id} and available_point >= #{requiredPoint};
  </update>

The DDL for the activity_user_point table:

CREATE TABLE `activity_user_point` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key',
`activity_id` bigint(20) NOT NULL COMMENT 'activity id',
`user_id` bigint(20) unsigned NOT NULL COMMENT 'user id',
`name` varchar(45) NOT NULL COMMENT 'user real name',
`total_point` bigint(20) NOT NULL DEFAULT '0' COMMENT 'total points earned by the user',
`available_point` bigint(20) unsigned NOT NULL DEFAULT '0' COMMENT 'points available to the user',
`version` int(10) NOT NULL DEFAULT '1' COMMENT 'version number',
`create_time` datetime DEFAULT NULL COMMENT 'creation time',
`update_time` datetime DEFAULT NULL COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_user_id` (`user_id`,`activity_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='activity user point table';

Because available_point is declared bigint(20) unsigned, the database itself guarantees it can never go below zero; together with the available_point >= #{requiredPoint} guard in the UPDATE, the deduction fails cleanly instead of overdrawing.

Appendix: Lua Scripts

1. investment_point.lua

local investment_key = KEYS[1]
local point_key = KEYS[2]

local investmentAmount = tonumber(ARGV[1])
local pointUnit = tonumber(ARGV[2])

local totalInvestmentAmount = tonumber(redis.call("INCRBY", investment_key, investmentAmount))
local totalPoints = math.floor(totalInvestmentAmount / pointUnit);

redis.call("SET", point_key, totalPoints)

return {totalInvestmentAmount, totalPoints}

2. stock_decr.lua

local stocks_key = KEYS[1]
local acquiredNum = tonumber(ARGV[1])

local current = tonumber(redis.call("GET", stocks_key))
if current == nil then
  current = 0
end
local success = -1;
if current >= acquiredNum then
    success = tonumber(redis.call("DECRBY", stocks_key, acquiredNum))
end

return success
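The script's check-then-decrement can be modeled in plain Java with a CAS loop; this is only an in-process analogue of the atomicity the Lua script provides on the Redis side (class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

/** In-process analogue of stock_decr.lua: decrement only if enough stock remains. */
public class StockCounter {
    private final AtomicLong stock;

    public StockCounter(long initial) {
        this.stock = new AtomicLong(initial);
    }

    /** Returns the remaining stock on success, or -1 if stock is insufficient. */
    public long tryDecrease(long quantity) {
        while (true) {
            long current = stock.get();
            if (current < quantity) {
                return -1;  // mirrors the script's -1 on insufficient stock
            }
            if (stock.compareAndSet(current, current - quantity)) {
                return current - quantity;  // mirrors DECRBY's return value
            }
            // lost a race with a concurrent caller; retry
        }
    }
}
```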

Lottery Code Generation Algorithm

Business scenario

We often run online campaigns in which each user earns lottery codes by completing tasks, with a draw held at a scheduled time.

Approach

Requirements for a lottery code:

  • every code is globally unique, with no duplicates
  • a user may hold one or more codes
  • codes should be as short as possible

userId is already globally unique, so we can generate a globally unique sequence number (from an id generator or a database auto-increment id) and combine the two into a new globally unique 64-bit id.

For example, within the 64 bits the sequence occupies the low 32 bits (room for about 4.2 billion codes), userId occupies the next 28 bits (room for about 268 million users), and the high 4 bits are reserved (0 by default):

| -- 4bit-- | ---------- 28bit ---------- | ------------ 32bit ------------ |

Core algorithm

LuckyCodeService.java

package io.mindflow.architect.luckcode.service;

import org.springframework.stereotype.Service;

/**
 * Lottery code generation
 * @author Ricky Fung
 */
@Service
public class LuckyCodeService {
    private static final int DICT_BIT_LENGTH = 5;
    private static final int DICT_MASK = (1 << DICT_BIT_LENGTH) - 1;
    private static final int DICT_LENGTH = (1 << DICT_BIT_LENGTH);

    private static final int CODE_LENGTH = 13;

    //code table: 32 characters, one per 5-bit group
    private final char[] dict;

    public LuckyCodeService(char[] dict) {
        if (dict == null || dict.length != DICT_LENGTH) {
            throw new IllegalArgumentException("码表不合法");
        }
        this.dict = dict;
    }

    //-----------
    /**
     * Sequence number: low 32 bits
     */
    private static final int SEQUENCE_BIT_LENGTH = 32;
    private static final long SEQUENCE_MASK = (1L << SEQUENCE_BIT_LENGTH) - 1;

    /**
     * User id: next 28 bits
     */
    private static final int USER_ID_BIT_LENGTH = 28;
    private static final long USER_ID_MASK = (1L << USER_ID_BIT_LENGTH) - 1;
    private static final int USER_LEFT_SHIFT_BITS = SEQUENCE_BIT_LENGTH;

    /**
     * Padding: high 4 bits, reserved
     */
    private static final int PADDING_BIT_LENGTH = 4;
    private static final long PADDING_MASK = (1L << PADDING_BIT_LENGTH) - 1;
    private static final int PADDING_LEFT_SHIFT_BITS = USER_ID_BIT_LENGTH + SEQUENCE_BIT_LENGTH;

    private static final long PADDING_NUM = 0;

    /**
     * Combine userId and seq into a new unique 64-bit value
     * @param userId user id
     * @param seq globally unique sequence number
     * @return the packed 64-bit id
     */
    public long getUid(long userId, long seq) {
        return PADDING_NUM << PADDING_LEFT_SHIFT_BITS | ((userId & USER_ID_MASK) << USER_LEFT_SHIFT_BITS) | seq & SEQUENCE_MASK;
    }

    /**
     * Convert an id to a fixed-length code
     * @param id packed id
     * @return a 13-character code
     */
    public String getFixedLengthCode(long id) {
        StringBuilder sb = new StringBuilder(CODE_LENGTH);
        for (int i = 0; i < CODE_LENGTH; i++) {
            int mod = (int) (id & DICT_MASK);  //take the low 5 bits
            sb.append(dict[mod]);
            id = id >>> DICT_BIT_LENGTH;       //unsigned shift: safe even for a negative id
        }
        return sb.reverse().toString();
    }

    /**
     * Convert an id to a variable-length code (no leading padding characters)
     */
    public String getCode(long id) {
        StringBuilder sb = new StringBuilder(CODE_LENGTH);
        while (id != 0) {
            int mod = (int) (id & DICT_MASK);
            sb.append(dict[mod]);
            id = id >>> DICT_BIT_LENGTH;       //unsigned shift so the loop terminates for a negative id
        }
        return sb.reverse().toString();
    }
}

Test class:

import io.mindflow.architect.luckcode.service.LuckyCodeService;
import org.junit.Before;
import org.junit.Test;

/**
 * @author Ricky Fung
 */
public class AppTest {

    private LuckyCodeService codeService;

    @Before
    public void init() {
        char[] dict = {'A', 'B', 'C', 'D',
                'E', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T',
                'U', 'V', 'W', 'X', 'Y', 'Z', '2', '3', '4', '5', '6', '7', '8', '9'};

        this.codeService = new LuckyCodeService(dict);
    }

    @Test
    public void testBatch() {
        long userId = 24000000L;
        long seq = 364000000L;

        for (int i=0; i<20; i++) {
            long uid = codeService.getUid(userId, seq+i);
            System.out.println(uid +"\t"+codeService.getFixedLengthCode(uid));
        }
    }
}

The generated output:

103079215468000000	AC5TYAAL5EN2A
103079215468000001	AC5TYAAL5EN2B
103079215468000002	AC5TYAAL5EN2C
103079215468000003	AC5TYAAL5EN2D
103079215468000004	AC5TYAAL5EN2E
103079215468000005	AC5TYAAL5EN2F
103079215468000006	AC5TYAAL5EN2G
103079215468000007	AC5TYAAL5EN2H
103079215468000008	AC5TYAAL5EN2J
103079215468000009	AC5TYAAL5EN2K
103079215468000010	AC5TYAAL5EN2L
103079215468000011	AC5TYAAL5EN2M
103079215468000012	AC5TYAAL5EN2N
103079215468000013	AC5TYAAL5EN2P
103079215468000014	AC5TYAAL5EN2Q
103079215468000015	AC5TYAAL5EN2R
103079215468000016	AC5TYAAL5EN2S
103079215468000017	AC5TYAAL5EN2T
103079215468000018	AC5TYAAL5EN2U
103079215468000019	AC5TYAAL5EN2V

Summary

Note that all ids generated this way share the same high 32 bits for a given user. If the product requires the high 32 bits to differ even for the same user, we can replace userId with a timestamp: a timestamp in seconds fits in 28 bits for more than 8 years, which is plenty for campaign use.

The timestamp-based core logic (DateTime here is Joda-Time):

/**
 * Epoch: 2018-08-01 00:00:00
 */
private static final long epoch = new DateTime(2018, 8, 1, 0, 0, 0).getMillis();

long seconds = (System.currentTimeMillis() - epoch) / 1000;
long uid = PADDING_NUM << PADDING_LEFT_SHIFT_BITS | ((seconds & TIMESTAMP_MASK) << TIMESTAMP_LEFT_SHIFT_BITS) | seq & SEQUENCE_MASK;

SpringBoot 2.x in Practice: Using Spring Cache

Preface

Spring Cache is an abstraction over cache usage that lets existing code gain caching support without intruding on business logic. In practice, however, I found that Spring Cache cannot set an expiry per cache key (it supports only one global expiry), which tempted me to reinvent the wheel; hence this article.

Introduction to Spring Cache

Spring Cache provides these main annotations:

  • @Cacheable (reads): on invocation, look in the cache first; on a miss, invoke the method and add its result to the cache
  • @CachePut (writes, e.g. create/update): on invocation, always put the method's result into the cache
  • @CacheEvict (deletes): on invocation, remove the corresponding entry from the cache

The official Spring Cache documentation has a dedicated answer on why it provides no expiry setting:

How can I set the TTL/TTI/Eviction policy/XXX feature?
Directly through your cache provider. The cache abstraction is... well, an abstraction, not a cache implementation. The solution you are using might support various data policies and different topologies which other solutions do not (take for example the JDK ConcurrentHashMap) - exposing that in the cache abstraction would be useless simply because there would be no backing support. Such functionality should be controlled directly through the backing cache, when configuring it or through its native API.

Spring Cache is only an abstraction over caching, not a concrete cache implementation, and concrete implementations differ in their feature sets. The abstraction can only expose the properties and capabilities common to all of them; anything provider-specific is beyond its reach and should be configured on the concrete cache itself. If Spring Cache exposed such a feature, what would happen on providers that do not support it? So Spring Cache simply does not expose those settings.

That is why Spring Cache offers no cache-expiry configuration: not every cache implementation supports per-entry expiry.
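Following that advice, expiry belongs in the provider's configuration. For a Redis backend, Spring Data Redis 2.x lets each named cache carry its own TTL; a sketch (cache names and TTL values are illustrative, and the builder API should be checked against your version):

```java
import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
public class CacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Default TTL for caches not listed below
        RedisCacheConfiguration defaults = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(30));

        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(defaults)
                // Per-cache overrides: each named cache gets its own TTL
                .withCacheConfiguration("users", defaults.entryTtl(Duration.ofMinutes(5)))
                .withCacheConfiguration("gifts", defaults.entryTtl(Duration.ofHours(1)))
                .build();
    }
}
```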


Several Ways to Implement a Distributed Lock

Preface

As internet technology develops, data volumes keep growing and business logic keeps getting more complex. Against this backdrop, traditional centralized systems can no longer satisfy our needs, and distributed systems appear in ever more scenarios. In a distributed system, access to shared resources requires a mutual-exclusion mechanism to keep processes from interfering with each other and to preserve consistency; that mechanism is the distributed lock.

Implementations

The most commonly used approaches are:

  • database-based distributed locks;
  • cache-based (Redis, Memcached, Tair) distributed locks;
  • ZooKeeper-based distributed locks;

For details, see 分布式锁的几种实现方式.
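To make the cache-based flavor concrete, here is an in-memory model of the Redis "SET key value NX PX ttl" pattern. It only illustrates the algorithm (acquire if free or expired, release only if still the owner); it is not a real distributed lock, and the explicit now parameter exists purely to make the expiry logic easy to follow:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** In-memory model of the "SET key value NX PX ttl" locking pattern. */
public class MiniLockService {
    private static final class Entry {
        final String owner;
        final long expiresAt;
        Entry(String owner, long expiresAt) {
            this.owner = owner;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> locks = new ConcurrentHashMap<>();

    /** Acquire: succeeds if the key is free or its previous holder has expired. */
    public boolean tryLock(String key, String owner, long ttlMillis, long now) {
        Entry fresh = new Entry(owner, now + ttlMillis);
        // compute() runs atomically per key, like a single-threaded Redis command
        Entry winner = locks.compute(key, (k, cur) ->
                (cur == null || cur.expiresAt <= now) ? fresh : cur);
        return winner == fresh;
    }

    /** Release only if we still own the lock (mirrors the check-then-DEL Lua idiom). */
    public boolean unlock(String key, String owner) {
        Entry cur = locks.get(key);
        if (cur != null && cur.owner.equals(owner)) {
            return locks.remove(key, cur);
        }
        return false;
    }
}
```

The owner check on release matters in the real Redis version too: without it, a client whose lock expired could delete a lock now held by someone else.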


Redis Lua Programming and Debugging Tools

Preface

Redis has shipped with an embedded Lua interpreter since version 2.6.0. Lua is a lightweight scripting language, reputedly the fastest of its kind, and the combination of the two is remarkably powerful.

Introduction

A Redis Lua script can invoke native Redis commands and can also contain its own control flow, logical operations, arithmetic, and so on, wrapping complex business logic into a single atomic unit.
These properties let us freely extend Redis by packaging our own "custom commands".

Comparison with MULTI+EXEC

MULTI+EXEC and its related commands can also bundle multiple commands into a transaction, but with less flexibility than a Lua script.
In addition, MULTI+EXEC requires several round trips to the Redis server, each costing one RTT (Round Trip Time), so it performs worse than a Lua script.

Redis Lua Scripts Debugger (LDB)

Starting with version 3.2, Redis ships with a complete Lua debugger, whose existence makes writing complex Lua scripts much easier.

Tutorial

1. Write a Lua script

local key = KEYS[1]                                   -- rate-limit key (one per second)
local limit = tonumber(ARGV[1])                       -- maximum requests per window
local current = tonumber(redis.call('get', key)) or 0 -- a missing key counts as 0
if current + 1 > limit then                           -- over the limit
    return 0
else                                                  -- count the request, expire after 2 seconds
    redis.call("INCRBY", key, "1")
    redis.call("expire", key, "2")
    return 1
end

The "or 0" fallback matters: on the first request the key does not exist, so tonumber returns nil and the comparison would otherwise raise an error.
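The same fixed-window idea in plain Java, as an in-process sketch of what the script does atomically inside Redis (in the real system the per-second window comes from the Redis key plus its 2-second expiry; here the caller encodes the second into the window key):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** In-process sketch of the fixed-window rate limiter implemented by the Lua script. */
public class WindowLimiter {
    private final int limit;
    private final Map<String, AtomicInteger> windows = new ConcurrentHashMap<>();

    public WindowLimiter(int limit) {
        this.limit = limit;
    }

    /**
     * @param windowKey one counter per window, e.g. "rate:uid42:" + currentSecond
     * @return true if this request is within the limit for its window
     */
    public boolean allow(String windowKey) {
        AtomicInteger counter = windows.computeIfAbsent(windowKey, k -> new AtomicInteger());
        return counter.incrementAndGet() <= limit;  // count first, then compare
    }
}
```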

2. Start debugging

To open a new debugging session:

  • Create your script with an editor; suppose it lives at /tmp/script.lua.
  • Start the session with: redis-cli --ldb --eval /tmp/script.lua.

Note that with redis-cli's --eval option you can also pass the key names and arguments the script needs, separating keys from arguments with a comma, like this:

redis-cli --ldb --eval /tmp/script.lua mykey somekey , arg1 arg2

After running this command, redis-cli enters a special debugging mode: it no longer accepts ordinary Redis commands, prints a help screen, and forwards the debugging commands you type verbatim to the Redis server.

In debugging mode, redis-cli offers three commands:

  • quit: end the debugging session. The debugger removes all breakpoints, steps past remaining statements, and finally exits redis-cli.
  • restart: reload the script file and start a fresh debugging session. During debugging you typically edit the script and then run restart to debug the modified version; this cycle usually repeats several times.
  • help: print the available debugging commands.


Spring MVC/Boot Unified Authentication and Authorization in Practice

Approach

Use JWT together with a Spring MVC interceptor to implement user authentication and authorization.

Implementation

1. Custom annotations

RequiredAuth

import java.lang.annotation.*;

/**
 * @author Ricky Fung
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Inherited
public @interface RequiredAuth {

}

OptionalAuth

import java.lang.annotation.*;

/**
 * @author Ricky Fung
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Inherited
public @interface OptionalAuth {

}

2. Auth interceptor

AuthInterceptor.java

import org.apache.commons.lang3.StringUtils;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author Ricky Fung
 */
public class AuthInterceptor extends HandlerInterceptorAdapter {
    private static final String AUTHORIZATION_HEADER = "Authorization";

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        if (!(handler instanceof HandlerMethod)) {
            return true;
        }

        HandlerMethod handlerMethod = (HandlerMethod) handler;
        RequiredAuth requiredAuth = AnnotationUtils.findAnnotation(handlerMethod.getMethod(), RequiredAuth.class);
        if (requiredAuth != null) {
            return checkAuth(request, true);
        }
        OptionalAuth optionalAuth = AnnotationUtils.findAnnotation(handlerMethod.getMethod(), OptionalAuth.class);
        if (optionalAuth != null) {
            return checkAuth(request, false);
        }
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        //clear the per-thread context so pooled threads do not leak identity
        RequestContextHolder.clear();
    }

    private boolean checkAuth(HttpServletRequest request, boolean required) {
        String token = request.getHeader(AUTHORIZATION_HEADER);
        if (StringUtils.isEmpty(token)) {
            if (required) {
                throw new InvalidTokenException("token为空");
            }
            //optional auth with no token: treat the caller as anonymous
            RequestContextHolder.setContext(new DefaultRequestContext(null, null));
            return true;
        }
        RequestContext context;
        try {
            //parse and verify the JWT
            context = decodeJwt(token);
        } catch (Exception e) {
            if (required) {
                throw new InvalidTokenException("token非法", e);
            }
            context = new DefaultRequestContext(null, null);
        }
        RequestContextHolder.setContext(context);
        return true;
    }

    private RequestContext decodeJwt(String token) throws Exception {
        //TODO: verify the signature and extract claims with a JWT library
        RequestContext context = new DefaultRequestContext(0L, "");
        return context;
    }
}
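decodeJwt above is deliberately left as a stub; a real implementation would verify the token with a JWT library. As a self-contained illustration of the underlying signed-token idea (an HMAC over a payload, not a full JWT; the class and token format are made up for this sketch):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/** Illustrative signed token "base64url(payload).base64url(hmac)"; not a real JWT. */
public class SimpleTokenCodec {
    private final byte[] secret;

    public SimpleTokenCodec(String secret) {
        this.secret = secret.getBytes(StandardCharsets.UTF_8);
    }

    public String encode(String payload) throws Exception {
        String body = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return body + "." + sign(body);
    }

    /** Returns the payload, or throws IllegalArgumentException if the signature is bad. */
    public String decode(String token) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException("malformed token");
        }
        String body = token.substring(0, dot);
        String sig = token.substring(dot + 1);
        // constant-time comparison avoids timing side channels
        if (!MessageDigest.isEqual(sig.getBytes(StandardCharsets.UTF_8),
                sign(body).getBytes(StandardCharsets.UTF_8))) {
            throw new IllegalArgumentException("bad signature");
        }
        return new String(Base64.getUrlDecoder().decode(body), StandardCharsets.UTF_8);
    }

    private String sign(String body) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(body.getBytes(StandardCharsets.UTF_8)));
    }
}
```

A real JWT adds a header, standard claims (exp, sub, ...), and algorithm negotiation on top of exactly this verify-before-trust structure.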

3. Global exception handler

GlobalExceptionHandler.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;

/**
 * @author Ricky Fung
 */
@ControllerAdvice
public class GlobalExceptionHandler {
    private static final Logger LOGGER = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    @ExceptionHandler
    @ResponseBody
    public ApiResult exceptionHandler(InvalidTokenException ex) {
        return ApiResult.buildFailureResult(403, "请先登录");
    }

    @ExceptionHandler
    @ResponseBody
    public ApiResult exceptionHandler(IllegalArgumentException ex) {
        return ApiResult.buildInvalidParameterResult(ex.getMessage());
    }

    @ExceptionHandler
    @ResponseBody
    public ApiResult exceptionHandler(Exception ex) {
        LOGGER.error("系统异常", ex);
        return ApiResult.buildSystemErrorResult();
    }
}

4. Miscellaneous

RequestContext

/**
 * @author Ricky Fung
 */
public interface RequestContext {

    Long getUserId();

    String getNickname();

}

DefaultRequestContext

/**
 * @author Ricky Fung
 */
public class DefaultRequestContext implements RequestContext {
    private Long userId;
    private String nickname;

    public DefaultRequestContext(Long userId, String nickname) {
        this.userId = userId;
        this.nickname = nickname;
    }

    @Override
    public Long getUserId() {
        return this.userId;
    }

    @Override
    public String getNickname() {
        return this.nickname;
    }
}

RequestContextHolder

/**
 * @author Ricky Fung
 */
public class RequestContextHolder {
    private static final ThreadLocal<RequestContext> THREAD_LOCAL = new ThreadLocal<>();

    public static void setContext(RequestContext context) {
        THREAD_LOCAL.set(context);
    }

    public static RequestContext getContext() {
        return THREAD_LOCAL.get();
    }

    public static void clear() {
        THREAD_LOCAL.remove();
    }
}

InvalidTokenException

/**
 * @author Ricky Fung
 */
public class InvalidTokenException extends RuntimeException {

    public InvalidTokenException() {
    }

    public InvalidTokenException(String message) {
        super(message);
    }

    public InvalidTokenException(String message, Throwable cause) {
        super(message, cause);
    }
}
ApiResult

/**
 * @author Ricky Fung
 */
public class ApiResult<T> {
    public static final int SUCCESS_CODE = 1;
    public static final String SUCCESS_MESSAGE = "OK";

    public static final int INVALID_REQUEST_PARAMETER = 400;
    public static final int SYSTEM_ERROR_CODE = 500;

    private int code;
    private String message;
    private T data;

    public ApiResult() {
    }

    public ApiResult(int code, String message, T data) {
        this.code = code;
        this.message = message;
        this.data = data;
    }

    /**
     * Success, no payload
     */
    public static <T> ApiResult<T> buildSuccessResult() {
        return new ApiResult<>(SUCCESS_CODE, SUCCESS_MESSAGE, null);
    }

    /**
     * Success with payload
     */
    public static <T> ApiResult<T> buildSuccessResult(T data) {
        return new ApiResult<>(SUCCESS_CODE, SUCCESS_MESSAGE, data);
    }

    /**
     * Invalid parameter
     */
    public static <T> ApiResult<T> buildInvalidParameterResult(String message) {
        return new ApiResult<>(INVALID_REQUEST_PARAMETER, message, null);
    }

    /**
     * Business-specific failure (the code carries the business meaning)
     */
    public static <T> ApiResult<T> buildFailureResult(int code, String message) {
        return new ApiResult<>(code, message, null);
    }

    public static <T> ApiResult<T> buildFailureResult(int code, String message, T data) {
        return new ApiResult<>(code, message, data);
    }

    /**
     * System error
     */
    public static <T> ApiResult<T> buildSystemErrorResult() {
        return new ApiResult<>(SYSTEM_ERROR_CODE, "服务器开小差了,请稍后再试!", null);
    }

    public boolean isSuccess() {
        return code == SUCCESS_CODE;
    }

    public int getCode() {
        return code;
    }

    public String getMessage() {
        return message;
    }

    public T getData() {
        return data;
    }
}

5. Usage

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @author Ricky Fung
 */
@RestController
@RequestMapping("/api/demo")
public class DemoController {

    @RequiredAuth
    @GetMapping
    public ApiResult<String> hello() {
        //the interceptor has populated the context for authenticated requests
        Long userId = RequestContextHolder.getContext().getUserId();
        return ApiResult.buildSuccessResult("hello, user " + userId);
    }
}

Notes on Formatting Currency and Amounts in Java

Day-to-day development in finance-related domains constantly involves formatting numbers and monetary amounts: keeping a fixed number of decimal places, prefixing a currency symbol, and so on.

1. Database table

CREATE TABLE `investment_cmpt_detail` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key',
`activity_id` int(10) NOT NULL COMMENT 'activity id',
`user_id` bigint(20) unsigned NOT NULL COMMENT 'user id',
`name` varchar(45) NOT NULL COMMENT 'user real name',
`buy_amount` decimal(10,2) NOT NULL COMMENT 'purchase amount of the wealth-management plan',
`buy_record_detail_id` bigint(20) unsigned NOT NULL COMMENT 'purchase record detail id',
`buy_time` datetime DEFAULT NULL COMMENT 'purchase time',
`create_time` datetime DEFAULT NULL COMMENT 'creation time',
`update_time` datetime DEFAULT NULL COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_user_id` (`user_id`,`activity_id`,`buy_record_detail_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='user investment detail table for the scale contest';

In MySQL, use the decimal type for monetary values: it stores exact decimal numbers, whereas double is binary floating point and cannot represent most decimal fractions exactly.

Keeping N decimal places

Java provides the BigDecimal class to make this convenient.


The complete utility class:

import java.math.BigDecimal;
import java.math.RoundingMode;

/**
 * @author Ricky Fung
 */
public abstract class DecimalUtils {

    //------------
    public static BigDecimal valueOf(double num) {
        return new BigDecimal(Double.toString(num));
    }

    public static BigDecimal valueOf(float num) {
        return new BigDecimal(Float.toString(num));
    }

    public static BigDecimal valueOf(int num) {
        return new BigDecimal(Integer.toString(num));
    }

    //-------------
    public static String format(BigDecimal bd, int scale) {
        return bd.setScale(scale, RoundingMode.HALF_UP).toString();
    }

    public static BigDecimal setScale(BigDecimal bd, int scale) {
        return bd.setScale(scale, RoundingMode.HALF_UP);
    }

    public static BigDecimal setScale(BigDecimal bd, int scale, RoundingMode roundingMode) {
        return bd.setScale(scale, roundingMode);
    }

    //-----------
    public static BigDecimal add(double v1, double v2) {
        BigDecimal b1 = new BigDecimal(Double.toString(v1));
        BigDecimal b2 = new BigDecimal(Double.toString(v2));
        return b1.add(b2);
    }

    public static BigDecimal sub(double v1, double v2) {
        BigDecimal b1 = new BigDecimal(Double.toString(v1));
        BigDecimal b2 = new BigDecimal(Double.toString(v2));
        return b1.subtract(b2);
    }

    public static BigDecimal mul(double v1, double v2) {
        BigDecimal b1 = new BigDecimal(Double.toString(v1));
        BigDecimal b2 = new BigDecimal(Double.toString(v2));
        return b1.multiply(b2);
    }

    public static BigDecimal div(double v1, double v2) {
        BigDecimal b1 = new BigDecimal(Double.toString(v1));
        BigDecimal b2 = new BigDecimal(Double.toString(v2));
        return b1.divide(b2, 2, RoundingMode.HALF_UP); //round half-up, keep two decimal places
    }
}
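The reason the utility goes through Double.toString rather than new BigDecimal(double) is binary floating-point error; a quick demonstration:

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // plain double arithmetic picks up binary rounding error
        System.out.println(0.1 + 0.2);  // 0.30000000000000004

        // new BigDecimal(double) faithfully preserves that error
        System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511... (exact binary value of 0.1)

        // going through the decimal string gives the value you actually meant
        BigDecimal sum = new BigDecimal(Double.toString(0.1))
                .add(new BigDecimal(Double.toString(0.2)));
        System.out.println(sum);  // 0.3
    }
}
```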

Formatting

1. Leading currency symbol

    @Test
    public void testMoney() {
        NumberFormat nf = new DecimalFormat("¥##.####");
        Double d = 87654.4545454;
        String str = nf.format(d);
        System.out.println(str);
    }

Output:
¥87654.4545

2. Two decimal places with thousands grouping

    @Test
    public void testGrouping() {
        NumberFormat nf = new DecimalFormat("#,###.##");
        Double d = 87654321.456;
        String str = nf.format(d);
        System.out.println(str);
    }

Output:
87,654,321.46
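The two patterns combine naturally: "¥#,##0.00" pins the currency symbol, thousands grouping, and exactly two decimal places (the explicit Locale.US symbols keep the output stable regardless of the JVM's default locale):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class MoneyFormatDemo {
    public static void main(String[] args) {
        // '0' forces digits, so small amounts still render two decimals
        DecimalFormat nf = new DecimalFormat("¥#,##0.00",
                DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println(nf.format(87654321.456));  // ¥87,654,321.46
        System.out.println(nf.format(5));             // ¥5.00
    }
}
```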

Git Flow

What is Git Flow

Git Flow is a software project management model proposed by Vincent Driessen; its core idea is shown in Figure 1. There are two main branches, master and develop, used for production releases and feature development respectively, plus three kinds of supporting branches (hotfix branches, release branches, feature branches) used for fixing bugs in released versions, preparing new QA releases, and developing new features.

[Figure 1: Git Flow]

Main branches

The central repository holds two main branches with an infinite lifetime:

  • master
  • develop

Every Git user is familiar with the master branch at origin. Parallel to master runs another branch called develop.
We consider origin/master to be the main branch where the HEAD of the source code always reflects a production-ready state.
We consider origin/develop to be the main branch where the HEAD always reflects the latest delivered development changes for the next release. Some would call this the "integration branch"; it is where any automatic nightly builds are built from.

When the source code on develop reaches a stable point and is ready to be released, all changes are merged back into master and tagged with a release number. How this is done in detail is discussed later.

Therefore, each time changes are merged back into master, this is by definition a new production release. We tend to be very strict about this, so that in theory we could use a Git hook script to automatically build and roll out our software to the production servers every time there is a commit on master.

Supporting branches

Alongside the main branches master and develop, the model uses a variety of supporting branches to aid parallel development between team members, ease tracking of features and production releases, and quickly fix live production problems. Unlike the main branches, these branches have a limited lifetime and are typically removed once they have served their purpose.

The kinds of supporting branches we may use are:

  • feature branches
  • release branches
  • hotfix branches

Each of these branches has a specific purpose, with strict rules about which branches may be their originating branch and which branches must be their merge targets. We will walk through them shortly.

Technically, these branches are in no way special; we classify them only by how we use them. They are plain Git branches like any other.

Feature branches

Feature branches may branch off from develop, and must merge back into develop.

Branch naming convention: feature-*

Feature branches (or topic branches) are used to develop new features for an upcoming or a more distant future release. When development of a feature starts, the release it will be incorporated into may well be unknown. The essence of a feature branch is that it exists as long as the feature is in development, but it will eventually be merged back into develop (to definitely add the new feature to the upcoming release) or discarded (in case of a disappointing experiment).

Creating a feature branch

When starting work on a new feature, branch off from develop:

$ git checkout -b feature-xxx develop
Switched to a new branch "feature-xxx"

Release branches

Release branches may branch off from develop, and must merge back into develop and master.

Branch naming convention: release-*

Release branches support preparation of a new production release. They allow for last-minute polishing of details; in addition, they allow minor bug fixes and preparing the release metadata (version number, build date, etc.). By doing all this work on a release branch, the develop branch is cleared to receive features for the next big release.

The key moment to branch off a new release branch from develop is when develop (almost) reflects the desired state of the new release. At that point, at least all features targeted at the release-to-be-built must have been merged into develop. Features targeted at future releases must not be; they have to wait until after the release branch has been branched off.

It is exactly at the start of a release branch that the upcoming release gets assigned a version number, and not any earlier. Until that moment, develop reflects changes for "the next release", but whether that next release will eventually become 0.3 or 1.0 remains unclear until the release branch is started. That decision is made when the release branch starts and follows the project's rules on version-number bumping.

Creating a release branch

Release branches are created from develop. For example, suppose version 1.1.5 is the current production release and a big new release is coming up. develop is ready for the next release, and we have decided this will become version 1.2 (rather than 1.1.6 or 2.0). So we branch off and give the release branch a name reflecting the new version number:

$ git checkout -b release-1.2 develop
Switched to a new branch "release-1.2"

This new branch exists until the release is definitely rolled out. During that time, bug fixes may be applied to this branch (rather than to develop). Adding large new features here is strictly prohibited; such changes must be merged into develop instead and wait for the next big release.

完成并发布你的分支

当你真的准备好要发布分支的时候,还需要执行一些别的操作。首先,发行版必须合并进 master 分支中(一定确保每次提交到 master 分支的都是最新的版本)。接下来,请一定标记对 master 分支的更新记录,用于以后查看该版本时进行参考。最后, 发布新分支所做的更改需要重新合并为 develop 分支,以确保以后的版本也修复了这些错误。

前两步在 Git 中的操作如下:

$ git checkout master
Switched to branch 'master'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)
$ git tag -a 1.2

至此,这个版本已经完成修改,并且用作将来的参考版本。

为了保持发布分支所做的更改一致,我们需要将这些更改合并到 develop 分支中。在 Git 中执行:

$ git checkout develop
Switched to branch 'develop'
$ git merge --no-ff release-1.2
Merge made by recursive.
(Summary of changes)

这一步可能会导致合并冲突(可能是因为我们已经更改了版本号)。如果是这样,尝试修复它并再次提交。

现在我们已经完成了所有步骤,发布分支可以被删除了,因为我们不再需要它了:

$ git branch -d release-1.2
Deleted branch release-1.2 (was ff452fe).

热修复分支

分支可能来自于:master
必须合并到:develop 和 master
分支命名惯例:hotfix-*

热修复(hotfix)分支和发布(release)分支很像,它们同样服务于新的生产版本发布,只不过这个发布是计划之外的。当线上版本出现了必须立即处理的严重问题时,就需要热修复分支:它从 master 分支上标记着当前生产版本的那个 tag 处创建。

这样做的本质在于:当有人对生产版本进行紧急修复时,团队其他成员可以继续在 develop 分支上工作,互不干扰。

创建修复 bug 分支

修复 bug 分支创建于 master 分支。例如,当前生产环境运行的是 1.2 版本,并且出现了一个严重 bug,但 develop 分支上的改动还不够稳定、无法直接发布。这时我们可以从 master 创建一个热修复分支来解决这个问题:

$ git checkout -b hotfix-1.2.1 master
Switched to a new branch "hotfix-1.2.1"

相关链接

mac下使用ll等命令

添加方法:

apple$ vim ~/.bash_profile

输入以下内容:

alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

保存完成之后,重新load一下,命令如下:

apple$ source ~/.bash_profile

OK,现在就可以使用ll命令,如下:

appledeMacBook-Pro-181:activity apple$ ll
total 40
drwxr-xr-x  16 apple  staff    512 Nov 12 17:40 ./
drwxr-xr-x  22 apple  staff    704 Nov  6 18:56 ../
drwxr-xr-x  15 apple  staff    480 Nov 13 11:42 .git/
-rw-r--r--   1 apple  staff    440 Sep  5 11:30 .gitignore
drwxr-xr-x   9 apple  staff    288 Nov 13 12:03 .idea/
-rw-r--r--   1 apple  staff     20 Nov 12 17:40 README.md
drwxr-xr-x   5 apple  staff    160 Nov 13 11:42 activity-admin/
drwxr-xr-x   4 apple  staff    128 Nov 13 11:42 activity-common/
drwxr-xr-x   5 apple  staff    160 Nov 13 11:42 activity-core/
drwxr-xr-x   5 apple  staff    160 Nov 13 11:42 activity-jms/
drwxr-xr-x   9 apple  staff    288 Nov 13 11:42 activity-mobile/
drwxr-xr-x   4 apple  staff    128 Nov 13 11:42 activity-mysql/
drwxr-xr-x   4 apple  staff    128 Nov 13 11:42 activity-redis/
drwxr-xr-x   5 apple  staff    160 Nov 13 11:42 activity-schedule/
drwxr-xr-x   8 apple  staff    256 Nov 13 11:42 activity-web/
-rw-r--r--   1 apple  staff  10467 Sep 11 14:52 pom.xml

Mac 开发必备神器

文本编辑器

IDE

Java 开发IDE:

Web Debugging

Web开发调试、抓包工具:

Homebrew

Homebrew是一款Mac OS平台下的软件包管理工具,拥有安装、卸载、更新、查看、搜索等很多实用的功能。简单的一条指令,就可以实现包管理,而不用你关心各种依赖和文件路径的情况,十分方便快捷。

Terminal

Mac OS 终端利器

Git

Git 客户端工具:

Host管理

SwitchHosts:https://github.com/oldj/SwitchHosts

MySQL

MySQL 客户端管理工具:

Redis

Redis 客户端管理工具,个人比较推荐:

科学上网

科学上网工具,这个想必大家都知道了,这里不多做解释了,支持VPN,SSL等等。

年薪千万的华为副总裁离职工作感悟

转自:http://www.myzaker.com/article/58a927ac1bc8e05431000000/

原文

正非兄:

转眼工作十年了,在华为的十年,正是华为从名不出专业圈子到现在成为路人皆知的大公司,高速发展的十年,见证了公司多年的奋斗历程。也投身其中,在大潮中边学边游泳,走到今天。

现在我要离开公司了,准备去开始新的事业,接受全新的挑战。我将要去做的事情,风险很大,很有可能是九死一生,九死后还能不能有一生,也难说。在开始新的事业之前,想着对过去的十年做一个详细的总结。在一个像华为这样高速发展的大企业工作,有时是一种炼狱般的锻炼,如果我能够总结十年的经验和教训,从中学到关键的做事、做人的道理,我想对将来一定大有益处。

这些年来有些人离开公司,写一些东西或书,对公司指手画脚、评头论足、指点江山,对公司的高层领导逐个点评一番,我个人感觉除了带来一些娱乐价值,还有什么益处呢?公司照样在发展,发展的背后,6万人种种梦想、努力、贡献、牺牲、奋斗、抱怨、不满、沉淀、离去、希望、失落;发展的背后,种种机会、重大决策、危机、失误等等的内在逻辑又岂是局外人说得清楚?我不想多说公司,只是想对自己的工作经历好好反思反思,想想自己做了什么努力,做了什么贡献,做了什么自己最高兴、做了什么自己最受益、学到了什么?

总的说来,我在华为的十年是懵懵懂懂过来的,当初我好像没有什么远大的理想、没有详细的规划,只是想着把一件一件事情做好。通过自己的总结和反思,将来我希望自己能够更加有规划、更加清晰一点。

大概想了想,我觉得有以下几点,是这些年深有体会的经验和教训,值得今后再发扬。

一、“从小事做起,学会吃亏,与他人合作”

这是研究生毕业前最后一堂课,电子电路的老师最后送给我们几句话,虽然我忘了这位老师的名字,但这几句话却至今铭记。在华为的工作实践,越发感受到这简单的几条的道理深刻。

从小事做起不是一直满足于做小事,也不是夸夸其谈好高骛远。学会吃亏不是忍受吃亏,是不斤斤计较于一时一地的是非得失,是有勇气关键时候的放弃。

二、“心有多大,舞台就有多大”

我们很多的成功,来自于敢想,敢做。

就像我第一次接到问题单,根本不懂,但敢去试,敢去解决,还真的解决了;就像我们做 SPES,即使没人、没技术、没积累,还有 CISCO 等大公司也在做,我们也敢做,敢推行,不盲目崇拜或畏惧权威,也取得了成功。

当然,这不只是盲目的胆大,心大还意味着积极地关注广大的外部世界,开阔宽容的心胸接受种种新鲜事物。

三、“好好学习,天天向上”

这句话用来形容对IT人的要求,最贴切不过了。真正的成功者和专家都是“最不怕学习”的人,啥东西不懂,拿过来学呗。我们IT现在有个技术大牛谭博,其实他不是天生大牛,也是从外行通过学习成为超级专家的,他自己有一次跟我说,当年一开始做UNIX系统管理员时,看到#提示符大吃一惊,因为自己用过多年在UNIX下搞开发都是%提示符,从未有过管理员权限。

看看专家的当初就这水平!当年跟我做备份项目时,我让他研究一下 ORACLE 数据库时点回退的备份和恢复方法,他望文生义,以为数据库的回退是像人倒退走路一样的,这很有点幽默的味道了。

但他天天早上起来,上班前先看一小时书,多年积累下来,现在在系统、数据库、开发等多个领域已成为没人挑战的超级专家了。但是,学习绝对不是光从书本学习,其实更重要的是从实践工作中学习,向周边学习。

比如说我在华为觉得学到最重要的一个理念是“要善于利用逆境”,华为在冬天的时候没有天天强调困难,而是提出“利用冬天的机会扭转全球竞争格局”并真的取得成功,如果没有这个冬天,华为可能还要落后业界大腕更多年份;华为在被CISCO起诉时没有慌乱,而是积极应对,利用了这次起诉达到了花几亿美金可能达不到的提高知名度的效果。等等这些,把几乎是灭顶之灾的境遇反而转化为成功的有利条件,对我留下的印象十分深刻,也对公司高层十分佩服。

四、勇于实践,勇于犯错,善于反思

很多事情知易行难,关键是要有行动,特别是管理类的一些理论、方法、观念。空谈、空规划一点用处都没有,不如实际把它做出来,做出来后不断反思改进,实实在在最有说服力。没有实践中的反复演练和反思,即使是人人皆知的东西要做好都其实不容易。

举个小例子,比如做管理者要会倾听,我想华为99.9%的管理者都很懂这一点,但实际做的如何呢?华为有多少管理者做到了不打断别人讲话?不急于下结论给定义?不急于提供解决方案?有多少管理者能够做到自然地引导对方表达?问问对方感受?确认自己明白对方?

五、要有方法、有套路,对问题系统思考、对解决方案有战略性的设计

在前几年的工作中,由于取得了一点成功,技术上也有了一点研究,就开始夜郎自大起来了,后来公司化重金请来了大批顾问,一开始对有些顾问还真不怎么感冒。后来几年公司规模越来越大、IT的复杂性越来越增加的情况下,逐渐理解了很多。

西方公司职业化的专家,做任何事情都有方法论、有套路,甚至于如何开一个会都有很多套路,后来我对这些套路的研究有了兴趣,自己总结出了不少套路并给部门的骨干培训和讨论。

在一个复杂的环境下,很多问题已经不能就事论事来研究和解决,非常需要系统性的方法和战略性的眼光。

对于一个组织的运作来讲,制度和流程的设计尤其需要这一点。爱因斯坦说过:

We can't solve problems by using the same kind of thinking we used when we created them.

六、独立思考,不人云亦云

公司大了,人多了,混日子也容易了。人很容易陷入随波逐流、不深入业务的境地,而看不到问题和危险。专家有过一个研究,雪崩发生时,一般受害者都是一批一批的,很少有单个人的受害者,原因很简单,单个人在雪崩多发地会相当小心和警觉。

但一个群体,群体越大,每个个体就会有一种虚幻的安全感和人云亦云的判断,但现实是不管群体的力量有多大,雪崩都是不可抵抗的。因此我觉得在大的机构里,保持独立思考的能力尤为重要。

七、少抱怨、少空谈、积极主动,多干实事

我曾经是个抱怨很多的愤青,经常容易陷入抱怨之中。但多年的工作使得我有所转变,因为知道了抱怨是最无济于事的。世界上永远有不完美的事情,永远有麻烦,唯一的解决之道是面对它,解决它。

做实实在在的事情,改变我们不满的现状,改变我们不满的自己。实际上也有很多值得抱怨的事情都是我们自己一手搞出来的,比如社会上很常见的是高级干部退下来了,抱怨人心不古、感慨世态炎凉,如果好好去探究一下,原因很可能是他权位在手春风得意时不可一世、视他人如粪土造成的。

八、对职业负责、对目标负责,对自己负责,成功者往往自觉自律、信守承诺、心无旁骛

大企业肯定会有绩效考核、会有论功行赏、会有KPI、会有领导指示、甚至会有一点企业政治,但如果我们片面地追求考核成绩、片面追求KPI指标、片面追求权钱利益,片面地对上负责、对别人负责,而不对自己负责、不对自己的目标负责,失去工作的使命感、责任心、热情和好奇心,必将不能达到自己的最佳境界。

而一个企业如何能够成功营造一个环境,让每个个体尽量发挥到最佳境界,企业也会战无不胜。

九、多点人文修养和审美情趣,看起来与工作不怎么相关,其实太相关了

杰出成就的取得离不开对美的境界的追求,最伟大的科学发现,往往蕴涵着秩序、简洁和美。

缺乏一点审美的追求,什么 UGLY 的事情都敢做、不择手段、凡事凑合,一点都不"高雅",必将不能长久。

十、"大家好,才是真的好",关注人,帮助人,真诚待人,厚道做人

快速发展的现代社会,由于媒体的作用,过分渲染了人与人之间日益冷漠、诡诈的关系,但实际的社会、社区可能真的不是那么回事,起码我来华为之前,对一个大企业中工作的人事关系开始还有点未知的恐惧,但实际上在这个集体中的感觉几乎人人都能开放、真诚相待,关系融洽和谐。

所以关键是我们自己要能够真诚对待他人,在与他人互动中将心比心。当然,工作中的冲突是不可避免的,实际上也没有必要刻意去避免,甚至很多冲突对组织来讲是大有益处的,就像夫妻吵一架后感情往往更好。

只要我们掌握两大原则:

1)对事不对人;
2)与人为善。

就肯定能把适度的冲突引导到对自己、对组织都有利的方向。

十一、开放和分享的态度

在一个高科技公司工作,如果抱着保守和封闭的心态,成长肯定会受阻。

十二、做好时间管理

在华为工作十年,3650 天,工作日 3000 天左右,这些时间是不是花在最重要的事情上了,有效的、有产出的工作时间究竟有多少,实在值得怀疑。

时间管理是我在华为工作当中最大的教训之一,可能也是公司整体性的问题:工作缺乏计划,经常面临不断的被打断;或者不断去打断同事、下属;或者不断的会议、讨论占去绝大部分时间;或者被自己的兴趣所牵引,花大量时间搞一些不着边际的事情;或者花很多时间在细枝末节的事情上,把很难很重要的事情一直拖到非解决不可的地步,然后被迫仓促行事。

现在回想,如果真的能管理好这十年时间,我觉得成就应该大很多。

基于Redis的分布式锁实现

本文讲解如何基于redis实现分布式锁。

运行时依赖

  • spring-data-redis
  • JDK 1.8+
  • redis 3.x

代码实现

1.DistributedLock.java

import java.util.concurrent.TimeUnit;

/**
 * @author Ricky Fung
 */
public interface DistributedLock {

    void lock();

    boolean tryLock(long waitTime, long leaseTime, TimeUnit unit) throws InterruptedException;

    boolean isLocked();

    boolean isHeldByCurrentThread();

    void forceUnlock();

    void unlock();
}

2.RedisDistributedLock.java

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;


/**
 * @author Ricky Fung
 */
public class RedisDistributedLock implements DistributedLock {
    /** 唯一标识id **/
    private String requestId;
    /** 锁名称 **/
    private String lockKey;

    private RedisTemplate redisTemplate;
    private final List<String> keys = new ArrayList<>(1);

    // 添加操作
    private DefaultRedisScript<Long> lockScript;

    // 删除操作
    private DefaultRedisScript<Long> unlockScript;

    // 查询
    private DefaultRedisScript<Long> heldByScript;

    RedisDistributedLock(String requestId, String lockKey, RedisTemplate redisTemplate) {
        this.requestId = requestId;
        this.lockKey = lockKey;
        this.redisTemplate = redisTemplate;
        //
        this.keys.add(this.lockKey);
        //init
        init();
    }

    private void init() {
        // Lock script
        lockScript = new DefaultRedisScript<>();
        lockScript.setScriptText(RedisScriptProvider.getInstance().getLockScript());
        lockScript.setResultType(Long.class);

        // unlock script
        unlockScript = new DefaultRedisScript<>();
        unlockScript.setScriptText(RedisScriptProvider.getInstance().getUnlockScript());
        unlockScript.setResultType(Long.class);

        //held by current thread script
        heldByScript = new DefaultRedisScript<>();
        heldByScript.setScriptText(RedisScriptProvider.getInstance().getHoldByScript());
        heldByScript.setResultType(Long.class);
    }

    @Override
    public void lock() {
        while (true) {
            boolean success = tryAcquire(30L, TimeUnit.SECONDS);
            if (success) {
                return;
            }
            try {
                TimeUnit.MILLISECONDS.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    @Override
    public boolean tryLock(long waitTime, long leaseTime, TimeUnit unit) throws InterruptedException {
        long time = unit.toMillis(waitTime);
        long current = System.currentTimeMillis();
        boolean res = tryAcquire(leaseTime, unit);
        // lock acquired
        if (res) {
            return true;
        }
        time -= (System.currentTimeMillis() - current);
        if (time <= 0) {
            return false;
        }
        while (true) {
            long currentTime = System.currentTimeMillis();
            res = tryAcquire(leaseTime, unit);
            // lock acquired
            if (res) {
                return true;
            }
            time -= (System.currentTimeMillis() - currentTime);
            if (time <= 0) {
                return false;
            }
            // 避免忙等:稍作休眠后再重试
            TimeUnit.MILLISECONDS.sleep(50);
        }
    }

    @Override
    public boolean isLocked() {
        return redisTemplate.hasKey(lockKey);
    }

    @Override
    public boolean isHeldByCurrentThread() {
        Long update = (Long) redisTemplate.execute(heldByScript, keys, getLockValue());
        return update != null && update == 1;
    }

    @Override
    public void forceUnlock() {
        redisTemplate.delete(lockKey);
    }

    @Override
    public void unlock() {
        Long update = (Long) redisTemplate.execute(unlockScript, keys, getLockValue());
    }

    //-------------
    private boolean tryAcquire(long leaseTime, TimeUnit unit) {
        // SET key value PX ms NX,由 lua 脚本原子执行
        Long update = (Long) redisTemplate.execute(lockScript, keys, getLockValue(), "NX", "PX", String.valueOf(unit.toMillis(leaseTime)));
        return update != null && update == 1;
    }
    //--------------

    private String getLockValue() {
        return String.format("%s:%d", requestId, Thread.currentThread().getId());
    }
}

3.RedisScriptProvider.java

/**
 * @author Ricky Fung
 */
public class RedisScriptProvider {
    private final String lockScript;
    private final String unlockScript;
    private final String holdByScript;

    private RedisScriptProvider() {
        this.lockScript = "local key = KEYS[1]\n" +
                "local lockValue = ARGV[1]\n" +
                "local milliseconds = tonumber(ARGV[4])\n" +
                "\n" +
                "local allowed_num = 0\n" +
                "local result = redis.call(\"set\", key, lockValue, ARGV[3], milliseconds, ARGV[2])\n" +
                "if not result then\n" +
                "    if redis.call(\"get\", key) == lockValue then\n" +
                "        allowed_num = 1\n" +
                "    end\n" +
                "else\n" +
                "    allowed_num = 1\n" +
                "end\n" +
                "\n" +
                "return allowed_num";

        this.unlockScript = "local locks_key = KEYS[1]\n" +
                "if redis.call(\"get\", locks_key) == ARGV[1] then\n" +
                "    return redis.call('del', locks_key)\n" +
                "else\n" +
                "    return 0\n" +
                "end";

        this.holdByScript = "local locks_key = KEYS[1]\n" +
                "if redis.call(\"get\", locks_key) == ARGV[1] then\n" +
                "    return 1\n" +
                "else\n" +
                "    return 0\n" +
                "end";
    }

    public static RedisScriptProvider getInstance() {
        return Singleton.INSTANCE;
    }

    public String getLockScript() {
        return lockScript;
    }

    public String getUnlockScript() {
        return unlockScript;
    }

    public String getHoldByScript() {
        return holdByScript;
    }

    private static class Singleton {
        private static final RedisScriptProvider INSTANCE = new RedisScriptProvider();
    }

}

4.DistributedLockClient

/**
 * @author Ricky Fung
 */
public interface DistributedLockClient {

    DistributedLock getLock(String lockKey);
}

5.DefaultDistributedLockClient

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.util.Assert;

/**
 * @author Ricky Fung
 */
public class DefaultDistributedLockClient implements DistributedLockClient {

    private RedisTemplate redisTemplate;

    public DefaultDistributedLockClient(RedisTemplate redisTemplate) {
        Assert.notNull(redisTemplate, "redisTemplate is null");
        this.redisTemplate = redisTemplate;
    }

    @Override
    public DistributedLock getLock(String lockKey) {
        return new RedisDistributedLock(UUIDUtils.getCompactId(), lockKey, redisTemplate);
    }

}

使用

1.Spring Boot配置

DistributedLockConfiguration.java

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.StringRedisTemplate;

/**
 * @author Ricky Fung
 */
@Configuration
public class DistributedLockConfiguration {

    @Bean
    public DistributedLockClient distributedLockClient(StringRedisTemplate redisTemplate) {
        return new DefaultDistributedLockClient(redisTemplate);
    }
}

2.使用

import javax.annotation.Resource;
import java.util.concurrent.TimeUnit;

/**
 * @author Ricky Fung
 */
public class DistributedLockClientTest extends BaseSpringJUnitTest {

    @Resource(name = "distributedLockClient")
    private DistributedLockClient distributedLockClient;

    private String lockKey = String.format("%s:%s", RedisConstant.KEY_PREFIX, "lock");

    @Test
    @Ignore
    public void testLock() {
        DistributedLock lock = distributedLockClient.getLock(lockKey);
        try {
            boolean success = lock.tryLock(0, 30, TimeUnit.SECONDS);
            if (!success) {
                System.out.println("加锁失败");
                return;
            }

            try {
                //write your business code
                System.out.println("加锁成功");
            } finally {
                lock.unlock();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

附录 - lua脚本

1. lock.lua

local key = KEYS[1]
local lockValue = ARGV[1]
local milliseconds = tonumber(ARGV[4])

local allowed_num = 0
local result = redis.call("set", key, lockValue, ARGV[3], milliseconds, ARGV[2])
-- 注意:SET NX 失败时,redis.call 在 lua 中返回 false 而非 nil
if not result then
    if redis.call("get", key) == lockValue then
        allowed_num = 1
    end
else
    allowed_num = 1
end

return allowed_num

2.unlock.lua

local locks_key = KEYS[1]
if redis.call("get", locks_key) == ARGV[1] then
    return redis.call('del', locks_key)
else
    return 0
end

3.hold_by.lua

local locks_key = KEYS[1]
if redis.call("get", locks_key) == ARGV[1] then
    return 1
else
    return 0
end

Java 通过HttpRequest获取请求用户真实IP地址

在Java EE里,获取客户端的IP地址的方法是:request.getRemoteAddr(),这种方法在大部分情况下都是有效的。但是在通过了Apache,Squid,nginx等反向代理软件就不能获取到客户端的真实IP地址了。

如果使用了反向代理软件,将 http://192.168.1.110:2046/ 的 URL 反向代理为 http://www.javapeixun.com.cn/ 的 URL 时,用 request.getRemoteAddr() 方法获取的 IP 地址是:127.0.0.1 或 192.168.1.110,而并不是客户端的真实 IP。

经过代理以后,由于在客户端和服务器之间增加了中间层,因此服务器无法直接拿到客户端的 IP。但是在转发请求的 HTTP 头信息中,增加了 X-FORWARDED-FOR 信息,用以跟踪原有的客户端 IP 地址和原来客户端请求的服务器地址。当我们访问 http://www.javapeixun.com.cn/index.jsp 时,其实并不是我们的浏览器真正访问到了服务器上的 index.jsp 文件,而是先由代理服务器去访问 http://192.168.1.110:2046/index.jsp ,代理服务器再将访问到的结果返回给我们的浏览器。因为是代理服务器去访问 index.jsp 的,所以 index.jsp 中通过 request.getRemoteAddr() 方法获取的 IP 实际上是代理服务器的地址,并不是客户端的 IP 地址。


import org.apache.commons.lang3.StringUtils;
import javax.servlet.http.HttpServletRequest;

/**
 * @author Ricky Fung
 */
public abstract class IpUtils {
    private static final String UNKNOWN_HOST = "unknown";

    /**
     * 获取用户真实IP地址,不使用request.getRemoteAddr();的原因是有可能用户使用了代理软件方式避免真实IP地址,
     * 参考文章: http://developer.51cto.com/art/201111/305181.htm
     *
     * 可是,如果通过了多级反向代理的话,X-Forwarded-For的值并不止一个,而是一串IP值,究竟哪个才是真正的用户端的真实IP呢?
     * 答案是取X-Forwarded-For中第一个非unknown的有效IP字符串。
     *
     * 如:X-Forwarded-For:192.168.1.110, 192.168.1.120, 192.168.1.130,
     * 192.168.1.100
     *
     * 用户真实IP为: 192.168.1.110
     *
     * @param request
     * @return
     */
    public static String getIpAddress(HttpServletRequest request) {
        String ip = request.getHeader("x-forwarded-for");
        if (StringUtils.isBlank(ip) || UNKNOWN_HOST.equalsIgnoreCase(ip)) {
            ip = request.getHeader("Proxy-Client-IP");
        }
        if (StringUtils.isBlank(ip) || UNKNOWN_HOST.equalsIgnoreCase(ip)) {
            ip = request.getHeader("WL-Proxy-Client-IP");
        }
        if (StringUtils.isBlank(ip) || UNKNOWN_HOST.equalsIgnoreCase(ip)) {
            ip = request.getHeader("HTTP_CLIENT_IP");
        }
        if (StringUtils.isBlank(ip) || UNKNOWN_HOST.equalsIgnoreCase(ip)) {
            ip = request.getHeader("HTTP_X_FORWARDED_FOR");
        }
        if (StringUtils.isBlank(ip) || UNKNOWN_HOST.equalsIgnoreCase(ip)) {
            ip = request.getRemoteAddr();
        }
        // 多级反向代理时,X-Forwarded-For 的值是一串以逗号分隔的 IP,
        // 按上文所述规则,取第一个非 unknown 的有效 IP 作为客户端真实 IP
        if (StringUtils.isNotBlank(ip) && ip.contains(",")) {
            for (String candidate : ip.split(",")) {
                candidate = candidate.trim();
                if (StringUtils.isNotBlank(candidate) && !UNKNOWN_HOST.equalsIgnoreCase(candidate)) {
                    ip = candidate;
                    break;
                }
            }
        }
        return ip;
    }

}
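下面把『取第一个非 unknown IP』这一步单独抽出来演示(纯 Java 示意,不依赖 Servlet API;类名和方法名为自拟):

```java
public class XffDemo {
    private static final String UNKNOWN = "unknown";

    /** 从 X-Forwarded-For 头的值中取第一个非 unknown 的 IP,取不到返回 null */
    static String firstNonUnknownIp(String headerValue) {
        if (headerValue == null || headerValue.trim().isEmpty()) {
            return null;
        }
        for (String candidate : headerValue.split(",")) {
            candidate = candidate.trim();
            if (!candidate.isEmpty() && !UNKNOWN.equalsIgnoreCase(candidate)) {
                return candidate;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // 输出 192.168.1.110
        System.out.println(firstNonUnknownIp("unknown, 192.168.1.110, 192.168.1.120"));
    }
}
```

注意:X-Forwarded-For 头是客户端可伪造的,只有链路上最后一跳可信代理追加的地址才可靠,对外业务请勿直接信任该值。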

[Stripe] Scaling your API with rate limiters

原文链接:https://stripe.com/blog/rate-limiters

Scaling your API with rate limiters

The following are examples of the four types of rate limiters discussed in the accompanying blog post. In the examples below I've used pseudocode-like Ruby, so if you're unfamiliar with Ruby you should be able to easily translate this approach to other languages. Complete examples in Ruby are also provided later in this gist.

In most cases you'll want all these examples to be classes, but I've used simple functions here to keep the code samples brief.

Request rate limiter

This uses a basic token bucket algorithm and relies on the fact that Redis scripts execute atomically. No other operations can run between fetching the count and writing the new count.

The full script with a small test suite is available, but here is a sketch:

# How many requests per second do you want a user to be allowed to do?
REPLENISH_RATE = 100

# How much bursting do you want to allow?
CAPACITY = 5 * REPLENISH_RATE

SCRIPT = File.read('request_rate_limiter.lua')

def check_request_rate_limiter(user)
  # Make a unique key per user.
  prefix = 'request_rate_limiter.' + user

  # You need two Redis keys for Token Bucket.
  keys = [prefix + '.tokens', prefix + '.timestamp']

  # The arguments to the LUA script. time() returns unixtime in seconds.
  args = [REPLENISH_RATE, CAPACITY, Time.new.to_i, 1]

  begin
    allowed, tokens_left = redis.eval(SCRIPT, keys, args)
  rescue RedisError => e
    # Fail open. We don't want a hard dependency on Redis to allow traffic.
    # Make sure to set an alert so you know if this is happening too much.
    # Our observed failure rate is 0.01%.
    puts 'Redis failed: ' + e
    return
  end

  if !allowed
    raise RateLimitError.new(status_code = 429)
  end
end

Here is the corresponding request_rate_limiter.lua script:

local tokens_key = KEYS[1]
local timestamp_key = KEYS[2]

local rate = tonumber(ARGV[1])
local capacity = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local requested = tonumber(ARGV[4])

local fill_time = capacity/rate
local ttl = math.floor(fill_time*2)

local last_tokens = tonumber(redis.call("get", tokens_key))
if last_tokens == nil then
  last_tokens = capacity
end

local last_refreshed = tonumber(redis.call("get", timestamp_key))
if last_refreshed == nil then
  last_refreshed = 0
end

local delta = math.max(0, now-last_refreshed)
local filled_tokens = math.min(capacity, last_tokens+(delta*rate))
local allowed = filled_tokens >= requested
local new_tokens = filled_tokens
if allowed then
  new_tokens = filled_tokens - requested
end

redis.call("setex", tokens_key, ttl, new_tokens)
redis.call("setex", timestamp_key, ttl, now)

return { allowed, new_tokens }
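As a cross-check on the refill/consume logic above, here is a minimal in-process (non-distributed) token bucket sketch in Java. It mirrors the Lua script's arithmetic; the class and field names are my own, and the current time is passed in as a parameter so the behavior is deterministic and testable:

```java
/** In-process token bucket: refills replenishRate tokens/second up to capacity. */
public class TokenBucket {
    private final double replenishRate;
    private final double capacity;
    private double tokens;
    private long lastRefreshed; // last refill time, in seconds

    public TokenBucket(double replenishRate, double capacity, long nowSeconds) {
        this.replenishRate = replenishRate;
        this.capacity = capacity;
        this.tokens = capacity;          // start with a full bucket
        this.lastRefreshed = nowSeconds;
    }

    /** Try to take `requested` tokens at time nowSeconds; true if allowed. */
    public synchronized boolean tryAcquire(int requested, long nowSeconds) {
        long delta = Math.max(0, nowSeconds - lastRefreshed);
        tokens = Math.min(capacity, tokens + delta * replenishRate);
        lastRefreshed = nowSeconds;
        if (tokens >= requested) {
            tokens -= requested;
            return true;
        }
        return false;
    }
}
```

The Redis/Lua version exists precisely because this per-process state doesn't work across a fleet of API servers; the script moves the same arithmetic into a single atomic operation on shared state.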

Concurrent requests limiter

Because Redis is so fast, doing the naive thing works. Just add a random token to a set at the start of a request and remove it from the set when you're done. If the set is too large, reject the request.

Again the full code is available and a sketch follows:

# The maximum length a request can take
TTL = 60

# How many concurrent requests a user can have going at a time
CAPACITY = 100

SCRIPT = File.read('concurrent_requests_limiter.lua')

class ConcurrentRequestLimiter
  def check(user)
    @timestamp = Time.new.to_i

    # A string of some random characters. Make it long enough to make sure two machines don't have the same string in the same TTL.
    id = Random.new.bytes(4)
    key = 'concurrent_requests_limiter.' + user
    begin
      # Clear out old requests that probably got lost
      redis.zremrangebyscore(key, '-inf', @timestamp - TTL)
      keys = [key]
      args = [CAPACITY, @timestamp, id]
      allowed, count = redis.eval(SCRIPT, keys, args)
    rescue RedisError => e
      # Similarly to above, remember to fail open so Redis outages don't take down your site
      log.info('Redis failed: ' + e)
      return
    end

    if allowed
      # Save it for later so we can remove it when the request is done
      @id_in_redis = id
    else
      raise RateLimitError.new(status_code: 429)
    end
  end

  # Call this method after a request finishes
  def post_request_bookkeeping(user)
    if not @id_in_redis
      return
    end
    key = 'concurrent_requests_limiter.' + user
    removed = redis.zrem(key, @id_in_redis)
  end

  def do_request(user)
    check(user)

    # Do the actual work here

    post_request_bookkeeping(user)
  end
end

The content of concurrent_requests_limiter.lua is simple and is meant to guarantee the atomicity of the ZCARD and ZADD.

local key = KEYS[1]

local capacity = tonumber(ARGV[1])
local timestamp = tonumber(ARGV[2])
local id = ARGV[3]

local count = redis.call("zcard", key)
local allowed = count < capacity

if allowed then
  redis.call("zadd", key, timestamp, id)
end

return { allowed, count }

Fleet usage load shedder

We can now move from preventing abuse to adding stability to your site with load shedders. If you can categorize traffic into buckets where no fewer than X% of your workers should be available to process high-priority traffic, then you're in luck: this type of algorithm can help. We call it a load shedder instead of a rate limiter because it isn't trying to reduce the rate of a specific user's requests. Instead, it adds backpressure so internal systems can recover.

When this load shedder kicks in it will start dropping non-critical traffic. There should be alarm bells ringing and people should be working to get the traffic back, but at least your core traffic will work. For Stripe, high-priority traffic has to do with creating charges and moving money around, and low-priority traffic has to do with analytics and reporting.

The great thing about this load shedder is that its implementation is identical to the Concurrent Requests Limiter, except you don't use a user-specific key, you just use a global key.

limiter = ConcurrentRequestLimiter.new
def check_fleet_usage_load_shedder
  if is_high_priority_request
    return
  end

  begin
    return limiter.do_request('fleet_usage_load_shedder')
  rescue RateLimitError
    raise RateLimitError.new(status_code: 503)
  end
end

Worker utilization load shedder

This load shedder is the last resort, and only kicks in when a machine is under heavy pressure and needs to offload. The code for determining how many workers are in use is dependent on your infrastructure. The general outline is to figure out some measure of "Is our infrastructure currently failing?" If that function returns something non-zero, start throwing out your least important requests (after waiting a short period to allow imprecise measurements) with higher and higher probability. After a period of time doing that, move on to more requests until you are throwing out everything except for the most critical traffic.

The most important behavior for this load shedder is to slowly take action. Don't start throwing out traffic until your infrastructure has been sad for quite a while (30 seconds), and don't instantaneously add traffic back. Sharp changes in shedding amounts will cause wild swings and lead to failure modes that are hard to diagnose.

As before, the full script with a small test suite is available, and here is a sketch:

END_OF_GOOD_UTILIZATION = 0.7
START_OF_BAD_UTILIZATION = 0.8

# Assuming a sample rate of 8 seconds, so 28 == 2.5 * 8 == guaranteed 3 samples
NUMBER_OF_SECONDS_BEFORE_SHEDDING_STARTS = 28
NUMBER_OF_SECONDS_TO_SHED_ALL_TRAFFIC = 120

RESTING_SHED_AMOUNT = -NUMBER_OF_SECONDS_BEFORE_SHEDDING_STARTS / NUMBER_OF_SECONDS_TO_SHED_ALL_TRAFFIC

@shedding_amount_last_changed = 0
@shedding_amount = 0

def check_worker_utilization_load_shedder
  chance = drop_chance(current_worker_utilization)
  if chance == 0
    dropped = false
  else
    dropped = Random.rand() < chance
  end
  if dropped
    raise RateLimitError.new(status_code: 503)
  end
end

def drop_chance(utilization)
  update_shedding_amount_derivative(utilization)
  how_much_traffic_to_shed
end

def update_shedding_amount_derivative(utilization)
  # A number from -1 to 1
  amount = 0

  # Linearly reduce shedding
  if utilization < END_OF_GOOD_UTILIZATION
    amount = utilization / END_OF_GOOD_UTILIZATION - 1
  # A dead zone
  elsif utilization < START_OF_BAD_UTILIZATION
    amount = 0
  # Shed traffic
  else
    amount = (utilization - START_OF_BAD_UTILIZATION) / (1 - START_OF_BAD_UTILIZATION)
  end

  # scale the derivative so we take time to shed all the traffic
  @shedding_amount_derivative = clamp(amount, -1, 1) / NUMBER_OF_SECONDS_TO_SHED_ALL_TRAFFIC
end

def how_much_traffic_to_shed
  now = Time.now().to_f
  seconds_since_last_math = clamp(now - @shedding_amount_last_changed, 0, NUMBER_OF_SECONDS_BEFORE_SHEDDING_STARTS)
  @shedding_amount_last_changed = now
  @shedding_amount += seconds_since_last_math * @shedding_amount_derivative
  @shedding_amount = clamp(@shedding_amount, RESTING_SHED_AMOUNT, 1)
end

def current_worker_utilization
  # Returns a double from 0 to 1.
  # 1 means every process is busy, .5 means 1/2 the processes are working, and 0 means the machine is servicing 0 requests
  # This is infra dependent on how to read this value
end

def clamp(val, min, max)
  if val < min
    return min
  elsif val > max
    return max
  else
    return val
  end
end

Mybatis 3.x:批量操作

依赖

<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
    <version>3.4.6</version>
</dependency>

<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis-spring</artifactId>
    <version>1.3.2</version>
</dependency>

配置

批量操作主要使用 foreach 标签:

<foreach collection="list" item="item" open="(" close=")" separator="," index="index">
   #{item.xx}, #{item.xx}
</foreach>

说明:

collection="list"    其中list是固定的,如果是数组就是array
item="item"         循环中每一项的别名
open=""             开始标识,比如删除in (id1,id2), open="(" close=")"
close=""            结束标识
separator=","       分隔符号
index="index"       下标值

批量insert

<!-- 批量保存用户,并返回每个用户插入的ID -->
<insert id="batchSave" parameterType="java.util.List" useGeneratedKeys="true" keyProperty="id">
    INSERT INTO `tb_user`(`username`, age)
    VALUES
    <foreach collection="list" item="item" separator=",">
        (#{item.username}, #{item.age})
    </foreach>
</insert>

批量删除

<!-- 批量删除用户 -->
<delete id="batchDelete" parameterType="java.util.List">
    DELETE FROM `tb_user` WHERE id IN
    <foreach collection="list" item="item" open="(" close=")" separator=",">
        #{item}
    </foreach>
</delete>

UserMapper.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.demo.dao.UserMapper">
    <resultMap id="BaseResultMap" type="com.demo.model.UserDO">
        <id column="id" jdbcType="BIGINT" property="id" />
        <result column="user_id" jdbcType="BIGINT" property="userId" />
        <result column="age" jdbcType="INTEGER" property="age" />
        <result column="name" jdbcType="VARCHAR" property="name" />
        <result column="total_amount" jdbcType="DECIMAL" property="totalAmount" />
        <result column="create_time" jdbcType="TIMESTAMP" property="createTime" />
        <result column="update_time" jdbcType="TIMESTAMP" property="updateTime" />
    </resultMap>

    <sql id="Base_Column_List">
        id, age, user_id, name, total_amount, create_time, update_time
    </sql>

    <select id="selectByPrimaryKey" parameterType="java.lang.Long" resultMap="BaseResultMap">
        select
        <include refid="Base_Column_List"/>
        from t_user
        where id = #{id}
    </select>

    <insert id="insert" parameterType="com.demo.model.UserDO" useGeneratedKeys="true" keyProperty="id">
        insert into t_user (
        age,
        user_id,
        name,
        total_amount,
        create_time,
        update_time)
        values (
        #{age},
        #{userId},
        #{name},
        #{totalAmount},
        now(),
        now())
    </insert>

    <!-- 批量保存用户,并返回每个用户插入的ID -->
    <insert id="batchInsert" parameterType="java.util.List" useGeneratedKeys="true" keyProperty="id">
        INSERT INTO `t_user`(user_id, `name`, total_amount, create_time, update_time)
        VALUES
        <foreach collection="list" item="item" separator=",">
            (#{item.userId}, #{item.name}, #{item.totalAmount}, now(), now())
        </foreach>
    </insert>

    <delete id="batchDelete" parameterType="java.util.List">
        DELETE FROM `t_user` WHERE id IN
        <foreach collection="list" item="item" open="(" close=")" separator=",">
            #{item}
        </foreach>
    </delete>
</mapper>

UserMapper.java

/**
 * @author Ricky Fung
 */
public interface UserMapper {

    UserDO selectByPrimaryKey(Long id);

    int insert(UserDO record);

    int batchInsert(List<UserDO> list);

    int batchDelete(List<Long> idList);
}
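当待插入列表很大时,一条 foreach 拼出的 SQL 可能超出 MySQL 的 max_allowed_packet 限制,通常的做法是把列表按固定大小切分,分多次调用 batchInsert。下面是一个切分工具的简单示意(纯 Java;partition 为自拟的方法名,mapper 调用部分以注释标出):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchHelper {

    /** 把 list 按 size 大小切分成若干子列表(最后一批可能不足 size 个) */
    static <T> List<List<T>> partition(List<T> list, int size) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < list.size(); i += size) {
            result.add(new ArrayList<>(list.subList(i, Math.min(i + size, list.size()))));
        }
        return result;
    }

    // 使用方式(示意):
    // for (List<UserDO> chunk : BatchHelper.partition(users, 500)) {
    //     userMapper.batchInsert(chunk);
    // }
}
```

每批的大小需要根据单行数据量和 max_allowed_packet 实际测算;若各批需要整体成功或失败,记得放在同一个事务里提交。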

答题系统设计

表结构设计

题目表:

CREATE TABLE `questions` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `title` varchar(255) NOT NULL COMMENT '题目内容',
  `mode` smallint(10) NOT NULL COMMENT '模式,0:单选,1:多选',
  `option_count` smallint(10) NOT NULL COMMENT '候选项数量',
  `answer_items` varchar(20) NOT NULL COMMENT '正确答案序号列表',
  `create_time` datetime NOT NULL COMMENT '创建时间',
  `update_time` datetime NOT NULL COMMENT '更新时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='答题题库表';

答案候选项表:

CREATE TABLE `answer_options` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `question_id` bigint(20) unsigned NOT NULL COMMENT '关联题库id',
  `seq` int(10) NOT NULL COMMENT '序号',
  `content` varchar(40) NOT NULL COMMENT '内容',
  `create_time` datetime NOT NULL COMMENT '创建时间',
  `update_time` datetime NOT NULL COMMENT '更新时间',
  PRIMARY KEY (`id`),
  KEY `idx_question_id` (`question_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='答题候选项表';

数据库 schema 迁移数据最佳实践

如何进行大规模在线数据迁移

工程团队常面临一项共同挑战:重新设计数据模型以支持清晰准确的抽象和更复杂的功能。这意味着,在生产环境中,需要迁移数以百万计的活跃数据对象,并且重构上千行代码。

用户期望 Stripe API 保障可用性和一致性。所以在进行迁移时,需要格外谨慎,必须保证数据的数值正确无误,并且 Stripe 的服务始终保持可用。

本文将展示国外移动支付服务商 Stripe 如何安全地对数以亿计的 Subscriptions(订阅服务)对象进行大规模迁移。

引用

分享页链接防篡改方案

场景

最近业务场景中需要让用户分享某个页面给自己的微信好友,让好友加油助力。

傻瓜方案

举个栗子,我们现在要分享出去的页面链接是 http://abc.com/share.jsp,为了能标示发起分享的用户信息、统计渠道信息等,需要在链接后附上用户id,例如http://abc.com/share.jsp?uid=123&utm_source=wechat。
如果把这个链接就这样分享出去,存在被恶意攻击的风险。比如攻击者遍历用户id逐一请求,Java代码如下:

for (int i=0; i<1000000; i++) {
    // 构造出任意 uid 的分享链接,逐一请求即可冒充对应用户
    String url = "http://abc.com/share.jsp?uid="+ i +"&utm_source=wechat";
}

防篡改

如何保证链接不会被篡改呢?答案是对链接后面的参数计算签名:打开分享链接时,服务端先校验签名是否正确,正确则给用户展示信息,否则提示非法链接。

我们在要分享的链接 http://abc.com/share.jsp?uid=123&utm_source=wechat 后面再增加一个sign,如:http://abc.com/share.jsp?uid=123&utm_source=wechat&sign=sdfwerwerwerwe。

如何计算签名

生成分享链接:

String sign = SignUtils.genSign(MapUtils.buildParamMap(String.valueOf(userId), utmSource), secret);

secret是密钥,不对外暴露。

验证签名:

Map<String, String> param = MapUtils.buildParamMap(userId, utmSource);
boolean match =  SignUtils.checkSign(param, secret, sign);

SignUtils.java

import org.apache.commons.codec.digest.DigestUtils;

import java.util.*;

/**
 * @author Ricky Fung
 */
public abstract class SignUtils {

    /**
     * 计算签名
     * @param param
     * @param secret
     * @return
     */
    public static String genSign(Map<String, String> param, String secret) {
        // TreeMap 默认按 key 升序排序,保证参与签名的参数顺序稳定
        Map<String, String> map = new TreeMap<String, String>();
        //全部加入
        map.putAll(param);

        Set<String> keySet = map.keySet();
        Iterator<String> it = keySet.iterator();
        StringBuilder sb = new StringBuilder(256);
        while (it.hasNext()) {
            String key = it.next();
            sb.append(key).append("=").append(map.get(key));
        }
        //添加secret
        sb.append(secret);
        String str = sb.toString();
        return DigestUtils.md5Hex(str);
    }

    /**
     * 校验签名
     * @param param
     * @param secret
     * @param sign
     * @return
     */
    public static boolean checkSign(Map<String, String> param, String secret, String sign) {
        String origin = genSign(param, secret);
        return origin.equals(sign);
    }
}
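
上面代码中的 DigestUtils 来自 Apache commons-codec。下面给出一个仅依赖 JDK 的可独立运行版本,用于演示完整的签名/验签流程(逻辑与 SignUtils 一致,md5Hex 用 MessageDigest 实现,secret 与参数均为演示值):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// 可独立运行的签名流程示意:与上文 SignUtils 逻辑一致,
// 仅把 commons-codec 的 DigestUtils.md5Hex 换成 JDK 自带的 MessageDigest
public class SignDemo {

    public static String genSign(Map<String, String> param, String secret) {
        // TreeMap 按 key 升序排序,保证签名串的参数顺序稳定
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<>(param).entrySet()) {
            sb.append(e.getKey()).append("=").append(e.getValue());
        }
        sb.append(secret);  // 末尾拼上密钥
        return md5Hex(sb.toString());
    }

    public static boolean checkSign(Map<String, String> param, String secret, String sign) {
        return genSign(param, secret).equals(sign);
    }

    private static String md5Hex(String str) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(str.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

注:MD5 签名强度有限,实际项目中可考虑换用 HMAC-SHA256 等带密钥的摘要算法。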

BTrace使用简介

背景

生产环境中可能出现各种问题,但这些问题并非程序 error 导致,而可能是逻辑性错误。这时需要获取程序运行时的数据信息,如方法参数、返回值来定位问题;传统的增加日志再发布的方式非常繁琐,而且需要重启服务,代价很大。BTrace应运而生:它可以动态地跟踪正在运行的Java程序,将跟踪字节码注入到运行类中,对运行代码侵入较小,性能上的影响也可以忽略不计。

BTrace简介

BTrace是Java的安全可靠的动态跟踪工具。 它的工作原理是通过 instrument + asm 来对正在运行的java程序中的class类进行动态增强。也就是说btrace可以在Java程序运行时,动态地向目标应用程序的字节码注入追踪代码。

说它是安全可靠的,是因为它对正在运行的程序是只读的。也就是说,它可以插入跟踪语句来检测和分析运行中的程序,但不允许对其进行修改。因此它存在一些限制:

  • 不能创建对象
  • 不能创建数组
  • 不能抛出和捕获异常
  • 不能调用任何对象方法和静态方法
  • 不能给目标程序中的类静态属性和对象的属性进行赋值
  • 不能有外部、内部和嵌套类
  • 不能有同步块和同步方法
  • 不能有循环(for, while, do..while)
  • 不能继承任何的类
  • 不能实现接口
  • 不能包含assert断言语句

这些限制其实可以使用 unsafe 模式绕过:在 BTrace 脚本中声明 @BTrace(unsafe = true) 注解,并使用 -u 选项以 unsafe 模式运行 btrace 即可。

参考资料

邀请码生成算法

业务场景

某些App有新用户邀请机制:新用户注册时填写邀请码,注册成功后后台会发放相应优惠。

分析

此场景下要求生成的邀请码是全局唯一的,即每个用户对应的邀请码都是唯一。

我们可以转换一下思路:每个用户都有唯一的uid,可以将uid通过某种算法生成邀请码。

实现代码

/**
 * @author Ricky Fung
 */
@Service
public class InvitationCodeService {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private final char[] dict = new char[]{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd',
            'e', 'f', 'g', 'h', 'j', 'k', 'm', 'n', 'p', 'q', 'r', 's', 't',
            'u', 'v', 'w', 'x', 'y', 'z'};

    private static final int DICT_BIT_LENGTH = 5;

    private static final int DICT_MASK = (1 << DICT_BIT_LENGTH) - 1;

    /**
     * 纪元, 2018-08-01 00:00:00
     */
    private static final long epoch = new DateTime(2018, 8, 1, 0, 0, 0).getMillis();

    /**
     * 时间戳位数
     */
    private static final int TIMESTAMP_BIT_LENGTH = 34;

    private static final long TIMESTAMP_MASK = (1L << TIMESTAMP_BIT_LENGTH) - 1;

    /**
     * 随机数
     */
    private static final int RANDOM_BIT_LENGTH = 3;
    private static final long RANDOM_MASK = (1L << RANDOM_BIT_LENGTH) - 1;

    /**
     * 随机数左移位数
     */
    private static final int RANDOM_LEFT_SHIFT_BITS = TIMESTAMP_BIT_LENGTH;
    /**
     * userId 左移位数
     */
    private static final int USER_LEFT_SHIFT_BITS = TIMESTAMP_BIT_LENGTH + RANDOM_BIT_LENGTH;

    /**
     * 所能支持的最大user_id
     */
    private static final long MAX_USER_ID = (1L << (63 - USER_LEFT_SHIFT_BITS)) - 1;

    private static final int CODE_LENGTH = 13;

    /**
     * 不足则填充 零
     */
    private static final char PADDING_CHAR = '0';

    /**
     * 生成邀请码
     * @param userId
     * @return
     */
    public String genCode(long userId) {
        String code = getUniqueCode(userId);
        return padding(code, PADDING_CHAR, CODE_LENGTH);
    }

    public String getUniqueCode(long userId) {
        //1.得到一个long值
        long num = getUniqueNum(userId).getUid();
        //2.转换为字符串
        StringBuilder sb = new StringBuilder(CODE_LENGTH);
        while (num != 0) {
            int i = (int) (num & DICT_MASK);
            sb.append(dict[i]);
            num = num >> DICT_BIT_LENGTH;
        }
        StringBuilder rs = sb.reverse();
        return rs.toString();
    }

    /**
     * 产生一个64bit的正整数
     * @param userId
     * @return
     */
    public FortuneUidVO getUniqueNum(long userId) {
        //bugfix: 修复超出范围后死循环导致OOM
        if (userId > MAX_USER_ID) {
            throw new IllegalArgumentException(String.format("userId:%s 超出最大id: %s范围了", userId, MAX_USER_ID));
        }
        //
        long millis = System.currentTimeMillis();
        long thread = Thread.currentThread().getId();
        //
        long uid = (userId << USER_LEFT_SHIFT_BITS | ((thread & RANDOM_MASK) << RANDOM_LEFT_SHIFT_BITS) | ((millis - epoch) & TIMESTAMP_MASK));
        return new FortuneUidVO(userId, thread, epoch, millis, uid);
    }

    private String padding(String str, char padding, int length) {
        if (str.length() >= length) {
            return str;
        }
        StringBuilder sb = new StringBuilder(length);
        if (str.length() < length) {    //填充
            int dist = length - str.length();
            for (int i=0; i<dist; i++) {
                sb.append(padding);
            }
        }
        sb.append(str);
        return sb.toString();
    }

}
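
上面生成的 64bit 数按「userId | 3bit随机 | 34bit时间戳」拼接,因此从该数还原 userId 只需反向移位。下面用一个独立的小例子演示该位布局的可逆性(仅示意,不含字典编码部分,类名 UidCodec 为演示用):

```java
// 简化示意:演示上文邀请码的位布局 userId | random(3bit) | timestamp(34bit) 可逆
public class UidCodec {
    static final int TS_BITS = 34;
    static final int RND_BITS = 3;

    /** 按上文的位布局把三个字段拼成一个 64bit 正整数 */
    public static long encode(long userId, long random, long tsOffset) {
        return (userId << (TS_BITS + RND_BITS))
                | ((random & ((1L << RND_BITS) - 1)) << TS_BITS)
                | (tsOffset & ((1L << TS_BITS) - 1));
    }

    /** 从 64bit 数中还原 userId:丢弃低 37 位即可 */
    public static long decodeUserId(long uid) {
        return uid >>> (TS_BITS + RND_BITS);
    }
}
```

可逆性意味着服务端拿到邀请码后无需查表即可定位邀请人。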

【源码剖析】Spring AOP实现原理

1、解析配置

2、生成代理

上篇文章说了,org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator这个类是Spring提供给开发者的AOP的核心类,就是AspectJAwareAdvisorAutoProxyCreator完成了【类/接口-->代理】的转换过程,首先我们看一下AspectJAwareAdvisorAutoProxyCreator的层次结构:

(图:AspectJAwareAdvisorAutoProxyCreator 类层次结构图)

这里最值得注意的一点是最左下角的那个方框,我用几句话总结一下:

  • AspectJAwareAdvisorAutoProxyCreator是BeanPostProcessor接口的实现类
  • postProcessBeforeInitialization方法与postProcessAfterInitialization方法实现在父类AbstractAutoProxyCreator中
  • postProcessBeforeInitialization方法是一个空实现
  • 逻辑代码在postProcessAfterInitialization方法中

基于以上的分析,将Bean生成代理的时机已经一目了然了:在每个Bean初始化之后,如果需要,调用AspectJAwareAdvisorAutoProxyCreator中的postProcessAfterInitialization为Bean生成代理

判断是否需要生成代理

上文分析了Bean生成代理的时机是在每个Bean初始化之后,下面把代码定位到Bean初始化之后,先是AbstractAutowireCapableBeanFactory的initializeBean方法进行初始化:

protected Object initializeBean(final String beanName, final Object bean, RootBeanDefinition mbd) {
    if (System.getSecurityManager() != null) {
        AccessController.doPrivileged(new PrivilegedAction<Object>() {
            public Object run() {
                invokeAwareMethods(beanName, bean);
                return null;
            }
        }, getAccessControlContext());
    }
    else {
        invokeAwareMethods(beanName, bean);
    }

    Object wrappedBean = bean;
    if (mbd == null || !mbd.isSynthetic()) {
        wrappedBean = applyBeanPostProcessorsBeforeInitialization(wrappedBean, beanName);
    }

    try {
    invokeInitMethods(beanName, wrappedBean, mbd);
    }
    catch (Throwable ex) {
        throw new BeanCreationException(
                (mbd != null ? mbd.getResourceDescription() : null),
                beanName, "Invocation of init method failed", ex);
    }

    if (mbd == null || !mbd.isSynthetic()) {
        wrappedBean = applyBeanPostProcessorsAfterInitialization(wrappedBean, beanName);
    }
    return wrappedBean;
}

重点关注初始化(invokeInitMethods)之后执行的 applyBeanPostProcessorsAfterInitialization 方法:

public Object applyBeanPostProcessorsAfterInitialization(Object existingBean, String beanName)
        throws BeansException {

    Object result = existingBean;
    for (BeanPostProcessor beanProcessor : getBeanPostProcessors()) {
        result = beanProcessor.postProcessAfterInitialization(result, beanName);
        if (result == null) {
            return result;
        }
    }
    return result;
}

这里依次调用每个BeanPostProcessor的postProcessAfterInitialization方法。按照之前的分析,看一下AbstractAutoProxyCreator的postProcessAfterInitialization方法实现:

public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
    if (bean != null) {
        Object cacheKey = getCacheKey(bean.getClass(), beanName);
        if (!this.earlyProxyReferences.contains(cacheKey)) {
            return wrapIfNecessary(bean, beanName, cacheKey);
        }
    }
    return bean;
}

这里的核心就是 wrapIfNecessary 方法:

	protected Object wrapIfNecessary(Object bean, String beanName, Object cacheKey) {
		if (StringUtils.hasLength(beanName) && this.targetSourcedBeans.contains(beanName)) {
			return bean;
		}
		if (Boolean.FALSE.equals(this.advisedBeans.get(cacheKey))) {
			return bean;
		}
		if (isInfrastructureClass(bean.getClass()) || shouldSkip(bean.getClass(), beanName)) {
			this.advisedBeans.put(cacheKey, Boolean.FALSE);
			return bean;
		}

		// Create proxy if we have advice.
		Object[] specificInterceptors = getAdvicesAndAdvisorsForBean(bean.getClass(), beanName, null);
		if (specificInterceptors != DO_NOT_PROXY) {
			this.advisedBeans.put(cacheKey, Boolean.TRUE);
			Object proxy = createProxy(
					bean.getClass(), beanName, specificInterceptors, new SingletonTargetSource(bean));
			this.proxyTypes.put(cacheKey, proxy.getClass());
			return proxy;
		}

		this.advisedBeans.put(cacheKey, Boolean.FALSE);
		return bean;
	}

首先要思考的问题是:哪些目标对象需要生成代理?配置文件里有很多Bean,不可能对每个Bean都生成代理,因此需要一套规则判断Bean是否需要生成代理,这套规则就是上面 wrapIfNecessary 中调用的 getAdvicesAndAdvisorsForBean 方法:

	@Override
	@Nullable
	protected Object[] getAdvicesAndAdvisorsForBean(
			Class<?> beanClass, String beanName, @Nullable TargetSource targetSource) {

		List<Advisor> advisors = findEligibleAdvisors(beanClass, beanName);
		if (advisors.isEmpty()) {
			return DO_NOT_PROXY;
		}
		return advisors.toArray();
	}
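
createProxy 最终会基于 JDK 动态代理(针对接口)或 CGLIB(针对类)生成代理对象。其核心思想可以用 JDK Proxy 的一个最小示例说明(与 Spring 源码无关,仅为示意):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// 最小示意:JDK 动态代理在方法调用前后织入逻辑,这也是 Spring AOP 代理接口的基础
public class ProxyDemo {
    interface Greeter { String greet(String name); }

    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    public static Greeter wrap(final Greeter target, final StringBuilder trace) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        trace.append("before;");   // 前置增强
                        Object result = method.invoke(target, args);
                        trace.append("after;");    // 后置增强
                        return result;
                    }
                });
    }
}
```

Spring 中的 Advisor 链本质上就是在 invoke 的前后位置串接多个增强逻辑。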

参考资料

开发必备-常用SQL

1.建库

1.语法

CREATE {DATABASE | SCHEMA} [IF NOT EXISTS] db_name
[create_specification [, create_specification] ...]
create_specification:
[DEFAULT] CHARACTER SET charset_name
| [DEFAULT] COLLATE collation_name

2.示例

CREATE DATABASE `test` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

utf8mb4:

CREATE DATABASE `test` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

2.建表

1.语法

CREATE TABLE [IF NOT EXISTS] table_name (
    column_name column_definition [, column_name column_definition ...]
    [, index_definition ...]
) [ENGINE=engine_name] [DEFAULT CHARSET=charset_name] [COMMENT='comment'];

2.示例

CREATE TABLE `operation_user_investment_detail` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
`activity_id` bigint(20) NOT NULL COMMENT '活动id',
`user_id` bigint(20) unsigned NOT NULL COMMENT '用户id',
`name` varchar(40) NOT NULL COMMENT '用户真实姓名',
`buy_amount` decimal(12,2) NOT NULL COMMENT '购买理财计划金额',
`buy_record_detail_id` bigint(20) unsigned NOT NULL COMMENT '购买理财计划详情id',
`buy_time` datetime DEFAULT NULL COMMENT '购买时间',
`create_time` datetime DEFAULT NULL COMMENT '创建时间',
`update_time` datetime DEFAULT NULL COMMENT '更新时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_buy_record_id` (`buy_record_detail_id`, `activity_id`),
KEY `idx_activity_id_user_id` (`activity_id`, `user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='活动后台用户投资记录表';

utf8mb4:

CREATE TABLE `operation_user_investment_detail` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
`activity_id` bigint(20) NOT NULL COMMENT '活动id',
`user_id` bigint(20) unsigned NOT NULL COMMENT '用户id',
`name` varchar(40) NOT NULL COMMENT '用户真实姓名',
`buy_amount` decimal(12,2) NOT NULL COMMENT '购买理财计划金额',
`buy_record_detail_id` bigint(20) unsigned NOT NULL COMMENT '购买理财计划详情id',
`buy_time` datetime DEFAULT NULL COMMENT '购买时间',
`create_time` datetime DEFAULT NULL COMMENT '创建时间',
`update_time` datetime DEFAULT NULL COMMENT '更新时间',
PRIMARY KEY (`id`),
UNIQUE KEY `uniq_buy_record_id` (`buy_record_detail_id`, `activity_id`),
KEY `idx_activity_id_user_id` (`activity_id`, `user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='活动后台用户投资记录表';

3.添加字段

1. 语法

ALTER TABLE table_name ADD COLUMN new_column_name varchar(20) not null;

ALTER TABLE table_name ADD COLUMN new_column_name varchar(20) not null AFTER column_name;

2. 示例

增加一列:

ALTER TABLE `t_user` ADD COLUMN age varchar(20) not null;

增加指定的一列:

ALTER TABLE `t_user` ADD COLUMN age varchar(20) not null AFTER gender;

4.添加索引

1.语法

-- 普通索引
ALTER TABLE `table_name` ADD INDEX index_name(`column1`,`column2`,`column3`);
-- 唯一索引
ALTER TABLE `table_name` ADD UNIQUE INDEX index_name(`column1`,`column2`,`column3`);

2.示例

普通索引:

ALTER TABLE `operation_user_coin` ADD INDEX idx_activity_id (`activity_id`) ;

唯一索引:

ALTER TABLE `operation_user_coin` ADD UNIQUE INDEX uniq_activity_user(`coin_id`,`user_id`);

5.删除索引

1.语法

DROP INDEX index_name ON table_name;

ALTER TABLE table_name DROP index index_name;

2.示例

ALTER TABLE `wb_blog` DROP INDEX idx_user; 

mac git xcrun error active developer path 错误

上周五升级了一下MBP系统到最新的Mojave,结果今天上班后发现git用不了,异常如下:

appledeMacBook-Pro-181:~ apple$ git status
xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

我本机之前安装过XCode,于是执行:

sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer

如果未安装过,可执行如下命令:

xcode-select --install

回车后,系统弹出下载xcode,点击确认,下载完成后即可(注:需要有VPN)。

签到系统设计

业务场景

签到活动对于提高APP用户粘性非常有帮助。相信大家手机里都装有京东、淘宝等APP,其中大部分都有签到功能:用户通过签到得到相应的奖励,从而提高APP用户活跃度。

1.表结构设计

1.用户表

CREATE TABLE `user` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `nickname` varchar(45) NOT NULL COMMENT '用户昵称',
  `name` varchar(45) NOT NULL DEFAULT '' COMMENT '用户真实姓名',
  `gender` tinyint(2) unsigned NOT NULL DEFAULT '0' COMMENT '性别, 0:男, 1:女, 2:未知',
  `mobile` varchar(20) NOT NULL DEFAULT '' COMMENT '手机号',
  `version` int(10) unsigned NOT NULL DEFAULT 1 COMMENT '版本号',
  `create_time` datetime DEFAULT NULL COMMENT '创建时间',
  `update_time` datetime DEFAULT NULL COMMENT '更新时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='用户表';

2.用户签到表:

CREATE TABLE `user_sign_in` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `user_id` bigint(20) unsigned NOT NULL COMMENT '用户id',
  `nickname` varchar(45) NOT NULL COMMENT '用户昵称',
  `last_sign_in_date` int(10) unsigned NOT NULL COMMENT '最近一次签到日期,格式: yyyyMMdd',
  `continuous_sign_in_days` int(10) unsigned NOT NULL COMMENT '连续签到天数',
  `version` int(10) unsigned NOT NULL DEFAULT 1 COMMENT '版本号',
  `create_time` datetime NOT NULL COMMENT '创建时间',
  `update_time` datetime NOT NULL COMMENT '更新时间',
  PRIMARY KEY (`id`),
  KEY `idx_user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='用户签到表';

3.用户签到log表:

CREATE TABLE `user_sign_in_log` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
  `user_id` bigint(20) unsigned NOT NULL COMMENT '用户id',
  `sign_in_date` int(10) unsigned NOT NULL COMMENT '签到日期,格式: yyyyMMdd',
  `create_time` datetime NOT NULL COMMENT '创建时间',
  `update_time` datetime NOT NULL COMMENT '更新时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_user_id_sign_date` (`user_id`, `sign_in_date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='用户签到log表';
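
表中 last_sign_in_date 与 continuous_sign_in_days 配合使用:签到时判断本次日期与上次是否相差一天,连续则天数加一,否则重置为一。这个日期判断可以用 JDK 时间 API 写成一个可独立验证的小函数(仅示意,与下文 SignInService 中使用的 DateUtils 无关):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

// 简化示意:判断本次签到(today)相对上次签到(lastSignInDate,格式 yyyyMMdd)是否连续
public class SignInDateUtil {
    // BASIC_ISO_DATE 即 yyyyMMdd 格式
    private static final DateTimeFormatter FMT = DateTimeFormatter.BASIC_ISO_DATE;

    public static boolean isContinuous(int lastSignInDate, int today) {
        LocalDate last = LocalDate.parse(String.valueOf(lastSignInDate), FMT);
        LocalDate cur = LocalDate.parse(String.valueOf(today), FMT);
        // 恰好相差一天才算连续签到
        return ChronoUnit.DAYS.between(last, cur) == 1;
    }
}
```

用 LocalDate 做差可以正确处理跨月、跨年的情况(如 20180831 → 20180901)。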

核心逻辑

SignInService.java

package io.mindflow.architect.service;

import io.mindflow.architect.mapper.demo.UserMapper;
import io.mindflow.architect.model.demo.UserDO;
import io.mindflow.architect.model.demo.UserSignInDO;
import io.mindflow.architect.model.demo.UserSignInLogDO;
import io.mindflow.architect.util.DateUtils;
import io.mindflow.architect.util.Ints;
import io.mindflow.architect.web.vo.ApiResult;
import io.mindflow.architect.web.vo.UserSignInResultVO;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import javax.annotation.Resource;
import java.util.Date;

/**
 * @author Ricky Fung
 */
@Service
public class SignInService {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private UserMapper userMapper;

    @Resource(name = "signInTxService")
    private SignInTxService signInTxService;

    public ApiResult<UserSignInResultVO> doSignIn(Long userId, Date now) {
        UserDO userDO = userMapper.selectUserById(userId);
        if (userDO==null) {
            return ApiResult.buildFailureResult(1000, "用户不存在");
        }

        Integer signInDate = Integer.parseInt(DateUtils.format(now, DateUtils.DATE_COMPACT_FORMAT));
        try {
            UserSignInDO userSignInDO = signInTxService.queryUserSignInRecord(userId);
            Integer continuousSignInDays;
            if (userSignInDO==null) {
                //第一次签到
                UserSignInDO signInRecord = signInTxService.createUserSignInRecord(userDO, signInDate, now);
                UserSignInLogDO signInLogRecord = signInTxService.createUserSignInLogRecord(userId, signInDate, now);

                //保存
                signInTxService.saveSignIn(userId, signInRecord, signInLogRecord);
                continuousSignInDays = Ints.ONE;
                logger.info("用户签到服务-签到, userId:{}, signIdDate:{} 第一次签到成功", userId, signInDate);
            } else {
                if (userSignInDO.getLastSignInDate().intValue() == signInDate) {
                    return ApiResult.buildFailureResult(1001, "今日已签到");
                }

                boolean continuousSignIn = true;
                //计算时间差
                Date lastSignDate = DateUtils.parseDate(userSignInDO.getLastSignInDate().toString(), DateUtils.DATE_COMPACT_FORMAT);
                if (Math.abs(DateUtils.daysBetween(lastSignDate, now)) > Ints.ONE) {
                    continuousSignIn = false;
                }
                UserSignInDO signInRecord = signInTxService.createUserSignInRecordV1(userSignInDO, continuousSignIn, signInDate, now);
                UserSignInLogDO signInLogRecord = signInTxService.createUserSignInLogRecord(userId, signInDate, now);
                //保存
                signInTxService.saveSignIn(userId, signInRecord, signInLogRecord);
                continuousSignInDays = signInRecord.getContinuousSignInDays();
                logger.info("用户签到服务-签到, userId:{}, signIdDate:{} continuousSignIn:{}, continuousSignInDays:{} 签到成功",
                        userId, signInDate, continuousSignIn, continuousSignInDays);
            }
            UserSignInResultVO resultVO = new UserSignInResultVO();
            resultVO.setContinuousSignInDays(continuousSignInDays);
            return ApiResult.buildSuccessResult(resultVO);
        } catch (Exception e) {
            logger.error("用户签到服务-签到接口异常, userId:{}, signInDate:{}", userId, signInDate, e);
        }
        return ApiResult.buildSystemErrorResult();
    }
}

SignInTxService.java

package io.mindflow.architect.service;

import io.mindflow.architect.mapper.demo.UserSignInLogMapper;
import io.mindflow.architect.mapper.demo.UserSignInMapper;
import io.mindflow.architect.model.demo.UserDO;
import io.mindflow.architect.model.demo.UserSignInDO;
import io.mindflow.architect.model.demo.UserSignInLogDO;
import io.mindflow.architect.util.Ints;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import java.util.Date;

/**
 * @author Ricky Fung
 */
@Component
public class SignInTxService {
    @Autowired
    private UserSignInMapper signInMapper;

    @Autowired
    private UserSignInLogMapper signInLogMapper;

    @Transactional
    public int saveSignIn(Long userId, UserSignInDO signInRecord, UserSignInLogDO signInLogRecord) {
        if (signInRecord.getId()==null) {
            signInMapper.insert(signInRecord);
        } else {
            signInMapper.updateByPrimaryKeySelective(signInRecord);
        }

        return signInLogMapper.insert(signInLogRecord);
    }

    //-------
    public UserSignInDO createUserSignInRecordV1(UserSignInDO userSignInDO, boolean continuousSignIn, Integer signInDate, Date now) {
        UserSignInDO signInDO = new UserSignInDO();
        //主键id
        signInDO.setId(userSignInDO.getId());

        signInDO.setLastSignInDate(signInDate);
        if (continuousSignIn) {
            //连续签到
            signInDO.setContinuousSignInDays(userSignInDO.getContinuousSignInDays() + Ints.ONE);
        } else {
            signInDO.setContinuousSignInDays(Ints.ONE);
        }

        signInDO.setVersion(userSignInDO.getVersion());
        signInDO.setUpdateTime(now);
        return signInDO;
    }

    public UserSignInDO createUserSignInRecord(UserDO userDO, Integer signIdDate, Date now) {
        UserSignInDO signInDO = new UserSignInDO();
        signInDO.setUserId(userDO.getId());
        signInDO.setNickname(userDO.getNickname());
        signInDO.setLastSignInDate(signIdDate);
        signInDO.setContinuousSignInDays(Ints.ONE);
        signInDO.setVersion(Ints.ONE);
        signInDO.setCreateTime(now);
        signInDO.setUpdateTime(now);
        return signInDO;
    }

    public UserSignInLogDO createUserSignInLogRecord(Long userId, Integer signIdDate, Date now) {
        UserSignInLogDO signInLogDO = new UserSignInLogDO();
        signInLogDO.setUserId(userId);
        signInLogDO.setSignInDate(signIdDate);
        signInLogDO.setCreateTime(now);
        signInLogDO.setUpdateTime(now);
        return signInLogDO;
    }

    //-------
    public UserSignInDO queryUserSignInRecord(Long userId) {
        return signInMapper.selectByUserId(userId);
    }

    public UserSignInLogDO queryUserSignInRecord(Long userId, Integer signIdDate) {
        return signInLogMapper.selectByUserIdAndDate(userId, signIdDate);
    }

}

UserSignInMapper.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="io.mindflow.architect.mapper.demo.UserSignInMapper">
    <resultMap id="BaseResultMap" type="io.mindflow.architect.model.demo.UserSignInDO">
        <id column="id" jdbcType="BIGINT" property="id" />
        <id column="user_id" jdbcType="BIGINT" property="userId" />
        <result column="nickname" jdbcType="VARCHAR" property="nickname" />
        <result column="last_sign_in_date" jdbcType="INTEGER" property="lastSignInDate" />
        <result column="continuous_sign_in_days" jdbcType="INTEGER" property="continuousSignInDays" />
        <result column="version" jdbcType="INTEGER" property="version" />
        <result column="create_time" jdbcType="TIMESTAMP" property="createTime" />
        <result column="update_time" jdbcType="TIMESTAMP" property="updateTime" />
    </resultMap>

    <sql id="Base_Column_List">
        id, user_id, nickname, last_sign_in_date, continuous_sign_in_days, version, create_time, update_time
    </sql>

    <select id="selectByUserId" resultMap="BaseResultMap">
        SELECT
        <include refid="Base_Column_List"></include>
        FROM `user_sign_in`
        WHERE  user_id = #{userId}
    </select>

    <insert id="insert" parameterType="io.mindflow.architect.model.demo.UserSignInDO" useGeneratedKeys="true" keyProperty="id">
        insert into `user_sign_in` (
        user_id,
        nickname,
        last_sign_in_date,
        continuous_sign_in_days,
        version,
        create_time,
        update_time)
        values (
        #{userId},
        #{nickname},
        #{lastSignInDate},
        #{continuousSignInDays},
        #{version},
        #{createTime},
        #{updateTime})
    </insert>

    <update id="updateByPrimaryKeySelective" parameterType="io.mindflow.architect.model.demo.UserSignInDO">
        update `user_sign_in`
        <set>
            <if test="userId != null">
                user_id = #{userId},
            </if>
            <if test="nickname != null">
                nickname = #{nickname},
            </if>
            <if test="lastSignInDate != null">
                last_sign_in_date = #{lastSignInDate},
            </if>
            <if test="continuousSignInDays != null">
                continuous_sign_in_days = #{continuousSignInDays},
            </if>
            version = version + 1,
            update_time = now()
        </set>
        where id = #{id}
        AND version = #{version}
    </update>

</mapper>

排行榜系统设计

场景

在互金领域中,公司为了拉动用户投资,会推出投资擂台赛活动(活动期间比拼用户的总投资金额);在游戏领域中会有玩家等级排行榜;计步软件如微信运动/支付宝-运动中有行走步数排行榜。

设计思路

说到排行榜,就不得不提Redis提供的有序集合(Sorted Set)数据结构。

Redis 有序集合和集合一样也是string类型元素的集合,且不允许重复的成员;不同的是每个元素都会关联一个double类型的分数,Redis通过分数为集合中的成员进行从小到大的排序。

简而言之,一共三步:

  1. 通过 Redis ZADD key score member 命令添加用户步数到SortedSet;
  2. 通过 Redis ZRANK key member 命令获取某个用户在排行榜中的名次;
  3. 通过 Redis ZRANGE key start stop [WITHSCORES] 命令获取Top N用户列表。

代码实现

本篇以微信运动中的步数排行榜为例进行讲解。

maven依赖

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.6.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>
    </dependencies>

1.更新用户步数

    /**
     * 更新用户步数
     * @param user
     * @param date
     * @param step
     */
    public void updateUserStep(User user, Date date, int step) {
        String key = getRankingKey(date);
        //计算score,为了让步数大的排在前面
        double score = MAX_STEP - step;
        stringRedisTemplate.opsForZSet().add(key, serializeUser(user), score);
    }

2.获取用户在排行榜中的排名

    /**
     * 获取用户在排行榜中的排名
     * @param user
     * @param date
     */
    public long getUserRanking(User user, Date date) {
        String key = getRankingKey(date);
        Long rank = stringRedisTemplate.opsForZSet().rank(key, serializeUser(user));
        //用户不在榜单中时返回 -1
        return rank == null ? -1 : rank + 1;
    }

3.获取排行榜列表

    /**
     * 获取排行榜列表
     * @param date
     * @param num
     * @return
     */
    public List<RankingItem> getTopN(Date date, int num) {
        String key = getRankingKey(date);
        //ZRANGE 区间为闭区间,取前 num 名需传 0 ~ num-1
        Set<ZSetOperations.TypedTuple<String>> tuples = stringRedisTemplate.opsForZSet().rangeWithScores(key, 0, num - 1);
        if (CollectionUtils.isEmpty(tuples)) {
            return Collections.emptyList();
        }
        List<RankingItem> rankingList = new ArrayList<>(tuples.size());
        for (ZSetOperations.TypedTuple<String> tuple: tuples) {

            RankingItem item = new RankingItem();
            User user = deserializeUser(tuple.getValue());
            item.setUserId(user.getId());
            item.setNickname(user.getNickname());
            //计算用户步数
            Double score = tuple.getScore();
            item.setStep(MAX_STEP - score.intValue());

            rankingList.add(item);
        }
        return rankingList;
    }

4. 单元测试

/**
 * @author Ricky Fung
 */
public class WechatStepRankingServiceTest extends BaseSpringJUnitTest {

    @Resource(name = "wechatStepRankingService")
    private WechatStepRankingService wechatStepRankingService;

    @Test
    public void testUpdateRanking() {
        Date now = new Date();
        int step = 2000;
        for (int i=0; i<100; i++) {
            User user = new User();
            user.setId(Long.valueOf(i));
            user.setNickname("ws"+i);
            wechatStepRankingService.updateUserStep(user, now, step+i);
        }
    }

    @Test
    public void testRankingList() {
        Date now = new Date();
        List<RankingItem> list = wechatStepRankingService.getTopN(now, 20);
        System.out.println(JsonUtils.toJson(list));
    }

    @Test
    public void testUserRanking() {
        Date now = new Date();
        User user = new User();
        Long userId = 98L;
        user.setId(userId);
        user.setNickname("ws"+userId);
        long rank = wechatStepRankingService.getUserRanking(user, now);
        System.out.println(rank);
    }
}

5.完整代码

WechatStepRankingService.java

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.ZSetOperations;
import org.springframework.stereotype.Service;
import org.springframework.util.CollectionUtils;

import java.util.*;

/**
 * 微信运动-步数排行榜
 * @author Ricky Fung
 */
@Service
public class WechatStepRankingService {

    /**
     * 用户每天步数上限:100万步
     */
    private static final int MAX_STEP = 1000000;

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    /**
     * 更新用户步数
     * @param user
     * @param date
     * @param step
     */
    public void updateUserStep(User user, Date date, int step) {
        String key = getRankingKey(date);
        //计算score,为了让步数大的排在前面
        double score = MAX_STEP - step;
        stringRedisTemplate.opsForZSet().add(key, serializeUser(user), score);
    }

    /**
     * 获取排行榜列表
     * @param date
     * @param num
     * @return
     */
    public List<RankingItem> getTopN(Date date, int num) {
        String key = getRankingKey(date);
        //ZRANGE 区间为闭区间,取前 num 名需传 0 ~ num-1
        Set<ZSetOperations.TypedTuple<String>> tuples = stringRedisTemplate.opsForZSet().rangeWithScores(key, 0, num - 1);
        if (CollectionUtils.isEmpty(tuples)) {
            return Collections.emptyList();
        }
        List<RankingItem> rankingList = new ArrayList<>(tuples.size());
        for (ZSetOperations.TypedTuple<String> tuple: tuples) {

            RankingItem item = new RankingItem();
            User user = deserializeUser(tuple.getValue());
            item.setUserId(user.getId());
            item.setNickname(user.getNickname());
            //计算用户步数
            Double score = tuple.getScore();
            item.setStep(MAX_STEP - score.intValue());

            rankingList.add(item);
        }
        return rankingList;
    }

    /**
     * 获取用户排行榜排名
     * @param user
     * @param date
     */
    public long getUserRanking(User user, Date date) {
        String key = getRankingKey(date);
        Long rank = stringRedisTemplate.opsForZSet().rank(key, serializeUser(user));
        //用户不在榜单中时返回 -1
        return rank == null ? -1 : rank + 1;
    }

    private String getRankingKey(Date date) {
        return String.format("%s:%s", "wechat:rank", DateUtils.formatDate(date));
    }

    //----------
    private String serializeUser(User user) {
        return String.format("%s#%s", user.getId(), user.getNickname());
    }

    private User deserializeUser(String str) {
        String[] arr = str.split("#");
        User user = new User();
        user.setId(Long.parseLong(arr[0]));
        user.setNickname(arr[1]);
        return user;
    }
}

User.java

public class User {
    private Long id;
    private String nickname;

    //省略 getter/setter
    
}

RankingItem.java

public class RankingItem {
    private Long userId;
    private String nickname;
    private int step;

    //省略 getter/setter
}

高并发处理 - 服务接口限流

在开发高并发系统时有三大利器用来保护系统:缓存、降级 和 限流。

  • 缓存 缓存的目的是提升系统访问速度和增大系统处理容量
  • 降级 降级是当服务出现问题或者影响到核心流程时,需要暂时屏蔽掉,待高峰或者问题解决后再打开
  • 限流 限流的目的是通过对并发访问/请求进行限速,或者对一个时间窗口内的请求进行限速来保护系统,一旦达到限制速率则可以拒绝服务、排队或等待、降级等处理

本篇重点介绍服务API接口限流方案。

分布式限流

原文链接:Scaling your API with rate limiters

lua脚本

1.concurrent_requests_limiter.lua

local key = KEYS[1]

local capacity = tonumber(ARGV[1])
local timestamp = tonumber(ARGV[2])
local id = ARGV[3]

local count = redis.call("zcard", key)
local allowed = count < capacity

if allowed then
  redis.call("zadd", key, timestamp, id)
end

return { allowed, count }
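这段脚本用 ZSet 记录在途请求:zcard 统计当前并发数,小于 capacity 才允许进入并 zadd 记录本次请求。在单机 JVM 内,同样的"并发数限流"语义可以用 Semaphore 实现(下面是笔者的示意实现,非原文内容):

```java
import java.util.concurrent.Semaphore;

/** 示意:单机版并发请求限流器,语义上对应上面的 concurrent_requests_limiter.lua */
public class ConcurrentRequestsLimiter {
    private final Semaphore permits;

    public ConcurrentRequestsLimiter(int capacity) {
        this.permits = new Semaphore(capacity);
    }

    /** 请求进入:相当于 zcard < capacity 时执行 zadd */
    public boolean tryEnter() {
        return permits.tryAcquire();
    }

    /** 请求结束:相当于把本次请求的 id 从 ZSet 中移除 */
    public void exit() {
        permits.release();
    }
}
```

区别在于:Redis 版本还需要定期按 score(时间戳)清理超时未释放的成员,而 Semaphore 版本必须保证 exit 在 finally 中调用,否则许可会泄漏。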

  2. request_rate_limiter.lua

local tokens_key = KEYS[1]
local timestamp_key = KEYS[2]

local rate = tonumber(ARGV[1])
local capacity = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local requested = tonumber(ARGV[4])

local fill_time = capacity/rate
local ttl = math.floor(fill_time*2)

local last_tokens = tonumber(redis.call("get", tokens_key))
if last_tokens == nil then
  last_tokens = capacity
end

local last_refreshed = tonumber(redis.call("get", timestamp_key))
if last_refreshed == nil then
  last_refreshed = 0
end

local delta = math.max(0, now-last_refreshed)
local filled_tokens = math.min(capacity, last_tokens+(delta*rate))
local allowed = filled_tokens >= requested
local new_tokens = filled_tokens
if allowed then
  new_tokens = filled_tokens - requested
end

redis.call("setex", tokens_key, ttl, new_tokens)
redis.call("setex", timestamp_key, ttl, now)

return { allowed, new_tokens }
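request_rate_limiter.lua 是标准的令牌桶算法:按 rate 匀速补充令牌,桶容量上限为 capacity,令牌足够则放行并扣减。下面用纯 Java 复刻同样的补充/扣减逻辑(示意代码,时间由调用方以秒为单位传入,便于演示和测试):

```java
/** 示意:与上面 Lua 脚本同构的单机令牌桶(时间由调用方传入) */
public class TokenBucket {
    private final double rate;      // 每秒补充的令牌数,对应 ARGV[1]
    private final double capacity;  // 桶容量,对应 ARGV[2]
    private double tokens;
    private long lastRefreshed;     // 上次补充时间(秒)

    public TokenBucket(double rate, double capacity) {
        this.rate = rate;
        this.capacity = capacity;
        this.tokens = capacity;     // 对应脚本中 last_tokens 缺省为 capacity
    }

    public boolean allow(long nowSeconds, double requested) {
        long delta = Math.max(0, nowSeconds - lastRefreshed);
        tokens = Math.min(capacity, tokens + delta * rate); // 按经过的时间补充令牌
        lastRefreshed = nowSeconds;
        if (tokens >= requested) {
            tokens -= requested;
            return true;
        }
        return false;
    }
}
```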

Spring @Scheduled执行原理解析

项目使用很多 @Scheduled(cron=**) 注解来实现定时任务,既然要用就必须弄清楚它的实现原理,于是乎翻了一下相关的源码。

Spring 3.0 之后增加了调度器功能,提供了 @Scheduled 注解,那么它内部是如何实现的呢?

本文以Spring 4.3.10.RELEASE 源码进行分析,相关源码在 org.springframework.scheduling 包下(spring-context)。

核心类摘要:

  1. ScheduledAnnotationBeanPostProcessor
  2. ScheduledTaskRegistrar
  3. TaskScheduler
  4. ReschedulingRunnable

源码分析

首先,看一下 @Scheduled 注解定义:

package org.springframework.scheduling.annotation;

@Target({ElementType.METHOD, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Repeatable(Schedules.class)
public @interface Scheduled {

	String cron() default "";

	String zone() default "";

	long fixedDelay() default -1;

	String fixedDelayString() default "";

	long fixedRate() default -1;

	String fixedRateString() default "";

	long initialDelay() default -1;

	String initialDelayString() default "";
}

支持cron表达式("cron" expression)、固定频率(fixedRate)、固定延时(fixedDelay) 3种调度方式。
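fixedRate 与 fixedDelay 的区别在于排期基准:前者按"上一次开始时间 + 周期"计算下一次触发,后者按"上一次结束时间 + 延时"计算。下面用简单的算术示意这两种触发时间的计算(笔者的示意代码,与 Spring 内部实现无关):

```java
/** 示意:fixedRate 与 fixedDelay 的下一次触发时间计算 */
public class TriggerDemo {

    /** fixedRate:相对上一次「开始」时间排期 */
    public static long nextFixedRate(long lastStartMillis, long periodMillis) {
        return lastStartMillis + periodMillis;
    }

    /** fixedDelay:相对上一次「结束」时间排期 */
    public static long nextFixedDelay(long lastCompletionMillis, long delayMillis) {
        return lastCompletionMillis + delayMillis;
    }
}
```

假设任务在 0ms 开始、执行耗时 300ms、周期/延时均为 1000ms:fixedRate 的下一次触发在 1000ms,而 fixedDelay 的下一次触发在 1300ms。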

ScheduledAnnotationBeanPostProcessor

ScheduledAnnotationBeanPostProcessor 是 @Scheduled 注解的处理类,其 postProcessAfterInitialization 方法扫描 bean 上的所有 @Scheduled 注解,并区分出 cronTasks、fixedDelayTasks、fixedRateTasks 三类任务。

ScheduledAnnotationBeanPostProcessor 类定义如下:

package org.springframework.scheduling.annotation;

public class ScheduledAnnotationBeanPostProcessor
		implements MergedBeanDefinitionPostProcessor, DestructionAwareBeanPostProcessor,
		Ordered, EmbeddedValueResolverAware, BeanNameAware, BeanFactoryAware, ApplicationContextAware,
		SmartInitializingSingleton, ApplicationListener<ContextRefreshedEvent>, DisposableBean {

}

ScheduledAnnotationBeanPostProcessor 实现BeanPostProcessor接口(postProcessAfterInitialization方法实现注解扫描和类实例创建)、ApplicationContextAware接口(setApplicationContext方法设置当前ApplicationContext)、org.springframework.context.ApplicationListener(观察者模式,onApplicationEvent方法会被回调)、DisposableBean接口(destroy方法中进行资源销毁操作)。

postProcessAfterInitialization扫描所有@scheduled注解:

	@Override
	public Object postProcessAfterInitialization(final Object bean, String beanName) {
		Class<?> targetClass = AopUtils.getTargetClass(bean);
		if (!this.nonAnnotatedClasses.contains(targetClass)) {
			Map<Method, Set<Scheduled>> annotatedMethods = MethodIntrospector.selectMethods(targetClass,
					new MethodIntrospector.MetadataLookup<Set<Scheduled>>() {
						@Override
						public Set<Scheduled> inspect(Method method) {
							Set<Scheduled> scheduledMethods = AnnotatedElementUtils.getMergedRepeatableAnnotations(
									method, Scheduled.class, Schedules.class);
							return (!scheduledMethods.isEmpty() ? scheduledMethods : null);
						}
					});
			if (annotatedMethods.isEmpty()) {
				this.nonAnnotatedClasses.add(targetClass);
				if (logger.isTraceEnabled()) {
					logger.trace("No @Scheduled annotations found on bean class: " + bean.getClass());
				}
			}
			else {
				// Non-empty set of methods
				for (Map.Entry<Method, Set<Scheduled>> entry : annotatedMethods.entrySet()) {
					Method method = entry.getKey();
					for (Scheduled scheduled : entry.getValue()) {
						processScheduled(scheduled, method, bean);
					}
				}
				if (logger.isDebugEnabled()) {
					logger.debug(annotatedMethods.size() + " @Scheduled methods processed on bean '" + beanName +
							"': " + annotatedMethods);
				}
			}
		}
		return bean;
	}

processScheduled方法如下:


	protected void processScheduled(Scheduled scheduled, Method method, Object bean) {
		try {
			Assert.isTrue(method.getParameterTypes().length == 0,
					"Only no-arg methods may be annotated with @Scheduled");

			Method invocableMethod = AopUtils.selectInvocableMethod(method, bean.getClass());
			Runnable runnable = new ScheduledMethodRunnable(bean, invocableMethod);
			boolean processedSchedule = false;
			String errorMessage =
					"Exactly one of the 'cron', 'fixedDelay(String)', or 'fixedRate(String)' attributes is required";

			Set<ScheduledTask> tasks = new LinkedHashSet<ScheduledTask>(4);

			// Determine initial delay
			long initialDelay = scheduled.initialDelay();
			String initialDelayString = scheduled.initialDelayString();
			if (StringUtils.hasText(initialDelayString)) {
				Assert.isTrue(initialDelay < 0, "Specify 'initialDelay' or 'initialDelayString', not both");
				if (this.embeddedValueResolver != null) {
					initialDelayString = this.embeddedValueResolver.resolveStringValue(initialDelayString);
				}
				try {
					initialDelay = Long.parseLong(initialDelayString);
				}
				catch (NumberFormatException ex) {
					throw new IllegalArgumentException(
							"Invalid initialDelayString value \"" + initialDelayString + "\" - cannot parse into integer");
				}
			}

			// Check cron expression
			String cron = scheduled.cron();
			if (StringUtils.hasText(cron)) {
				Assert.isTrue(initialDelay == -1, "'initialDelay' not supported for cron triggers");
				processedSchedule = true;
				String zone = scheduled.zone();
				if (this.embeddedValueResolver != null) {
					cron = this.embeddedValueResolver.resolveStringValue(cron);
					zone = this.embeddedValueResolver.resolveStringValue(zone);
				}
				TimeZone timeZone;
				if (StringUtils.hasText(zone)) {
					timeZone = StringUtils.parseTimeZoneString(zone);
				}
				else {
					timeZone = TimeZone.getDefault();
				}
				tasks.add(this.registrar.scheduleCronTask(new CronTask(runnable, new CronTrigger(cron, timeZone))));
			}

			// At this point we don't need to differentiate between initial delay set or not anymore
			if (initialDelay < 0) {
				initialDelay = 0;
			}

			// Check fixed delay
			long fixedDelay = scheduled.fixedDelay();
			if (fixedDelay >= 0) {
				Assert.isTrue(!processedSchedule, errorMessage);
				processedSchedule = true;
				tasks.add(this.registrar.scheduleFixedDelayTask(new IntervalTask(runnable, fixedDelay, initialDelay)));
			}
			String fixedDelayString = scheduled.fixedDelayString();
			if (StringUtils.hasText(fixedDelayString)) {
				Assert.isTrue(!processedSchedule, errorMessage);
				processedSchedule = true;
				if (this.embeddedValueResolver != null) {
					fixedDelayString = this.embeddedValueResolver.resolveStringValue(fixedDelayString);
				}
				try {
					fixedDelay = Long.parseLong(fixedDelayString);
				}
				catch (NumberFormatException ex) {
					throw new IllegalArgumentException(
							"Invalid fixedDelayString value \"" + fixedDelayString + "\" - cannot parse into integer");
				}
				tasks.add(this.registrar.scheduleFixedDelayTask(new IntervalTask(runnable, fixedDelay, initialDelay)));
			}

			// Check fixed rate
			long fixedRate = scheduled.fixedRate();
			if (fixedRate >= 0) {
				Assert.isTrue(!processedSchedule, errorMessage);
				processedSchedule = true;
				tasks.add(this.registrar.scheduleFixedRateTask(new IntervalTask(runnable, fixedRate, initialDelay)));
			}
			String fixedRateString = scheduled.fixedRateString();
			if (StringUtils.hasText(fixedRateString)) {
				Assert.isTrue(!processedSchedule, errorMessage);
				processedSchedule = true;
				if (this.embeddedValueResolver != null) {
					fixedRateString = this.embeddedValueResolver.resolveStringValue(fixedRateString);
				}
				try {
					fixedRate = Long.parseLong(fixedRateString);
				}
				catch (NumberFormatException ex) {
					throw new IllegalArgumentException(
							"Invalid fixedRateString value \"" + fixedRateString + "\" - cannot parse into integer");
				}
				tasks.add(this.registrar.scheduleFixedRateTask(new IntervalTask(runnable, fixedRate, initialDelay)));
			}

			// Check whether we had any attribute set
			Assert.isTrue(processedSchedule, errorMessage);

			// Finally register the scheduled tasks
			synchronized (this.scheduledTasks) {
				Set<ScheduledTask> registeredTasks = this.scheduledTasks.get(bean);
				if (registeredTasks == null) {
					registeredTasks = new LinkedHashSet<ScheduledTask>(4);
					this.scheduledTasks.put(bean, registeredTasks);
				}
				registeredTasks.addAll(tasks);
			}
		}
		catch (IllegalArgumentException ex) {
			throw new IllegalStateException(
					"Encountered invalid @Scheduled method '" + method.getName() + "': " + ex.getMessage());
		}
	}

容器刷新完成时(AbstractApplicationContext#finishRefresh)会发布 ContextRefreshedEvent,触发 onApplicationEvent 回调,进而调用 finishRegistration 完成任务注册,相关方法如下:

	@Override
	public void afterSingletonsInstantiated() {
		// Remove resolved singleton classes from cache
		this.nonAnnotatedClasses.clear();

		if (this.applicationContext == null) {
			// Not running in an ApplicationContext -> register tasks early...
			finishRegistration();
		}
	}

	@Override
	public void onApplicationEvent(ContextRefreshedEvent event) {
		if (event.getApplicationContext() == this.applicationContext) {
			// Running in an ApplicationContext -> register tasks this late...
			// giving other ContextRefreshedEvent listeners a chance to perform
			// their work at the same time (e.g. Spring Batch's job registration).
			finishRegistration();
		}
	}

	private void finishRegistration() {
		if (this.scheduler != null) {
			this.registrar.setScheduler(this.scheduler);
		}

		if (this.beanFactory instanceof ListableBeanFactory) {
			Map<String, SchedulingConfigurer> configurers =
					((ListableBeanFactory) this.beanFactory).getBeansOfType(SchedulingConfigurer.class);
			for (SchedulingConfigurer configurer : configurers.values()) {
				configurer.configureTasks(this.registrar);
			}
		}

		if (this.registrar.hasTasks() && this.registrar.getScheduler() == null) {
			Assert.state(this.beanFactory != null, "BeanFactory must be set to find scheduler by type");
			try {
				// Search for TaskScheduler bean...
				this.registrar.setTaskScheduler(resolveSchedulerBean(TaskScheduler.class, false));
			}
			catch (NoUniqueBeanDefinitionException ex) {
				logger.debug("Could not find unique TaskScheduler bean", ex);
				try {
					this.registrar.setTaskScheduler(resolveSchedulerBean(TaskScheduler.class, true));
				}
				catch (NoSuchBeanDefinitionException ex2) {
					if (logger.isInfoEnabled()) {
						logger.info("More than one TaskScheduler bean exists within the context, and " +
								"none is named 'taskScheduler'. Mark one of them as primary or name it 'taskScheduler' " +
								"(possibly as an alias); or implement the SchedulingConfigurer interface and call " +
								"ScheduledTaskRegistrar#setScheduler explicitly within the configureTasks() callback: " +
								ex.getBeanNamesFound());
					}
				}
			}
			catch (NoSuchBeanDefinitionException ex) {
				logger.debug("Could not find default TaskScheduler bean", ex);
				// Search for ScheduledExecutorService bean next...
				try {
					this.registrar.setScheduler(resolveSchedulerBean(ScheduledExecutorService.class, false));
				}
				catch (NoUniqueBeanDefinitionException ex2) {
					logger.debug("Could not find unique ScheduledExecutorService bean", ex2);
					try {
						this.registrar.setScheduler(resolveSchedulerBean(ScheduledExecutorService.class, true));
					}
					catch (NoSuchBeanDefinitionException ex3) {
						if (logger.isInfoEnabled()) {
							logger.info("More than one ScheduledExecutorService bean exists within the context, and " +
									"none is named 'taskScheduler'. Mark one of them as primary or name it 'taskScheduler' " +
									"(possibly as an alias); or implement the SchedulingConfigurer interface and call " +
									"ScheduledTaskRegistrar#setScheduler explicitly within the configureTasks() callback: " +
									ex2.getBeanNamesFound());
						}
					}
				}
				catch (NoSuchBeanDefinitionException ex2) {
					logger.debug("Could not find default ScheduledExecutorService bean", ex2);
					// Giving up -> falling back to default scheduler within the registrar...
					logger.info("No TaskScheduler/ScheduledExecutorService bean found for scheduled processing");
				}
			}
		}

		this.registrar.afterPropertiesSet();
	}

destroy方法暂停所有任务,如下:

	@Override
	public void destroy() {
		synchronized (this.scheduledTasks) {
			Collection<Set<ScheduledTask>> allTasks = this.scheduledTasks.values();
			for (Set<ScheduledTask> tasks : allTasks) {
				for (ScheduledTask task : tasks) {
					task.cancel();
				}
			}
			this.scheduledTasks.clear();
		}
		this.registrar.destroy();
	}

@EnableScheduling注解

接下来 ScheduledAnnotationBeanPostProcessor 是在哪儿生效的呢?我们来看一下@EnableScheduling 注解定义,如下:

package org.springframework.scheduling.annotation;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import(SchedulingConfiguration.class)
@Documented
public @interface EnableScheduling {

}

Spring Boot项目中 @EnableScheduling的魔法就在于 @Import(SchedulingConfiguration.class),看一下 SchedulingConfiguration源码:

@Configuration
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
public class SchedulingConfiguration {

	@Bean(name = TaskManagementConfigUtils.SCHEDULED_ANNOTATION_PROCESSOR_BEAN_NAME)
	@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
	public ScheduledAnnotationBeanPostProcessor scheduledAnnotationProcessor() {
		return new ScheduledAnnotationBeanPostProcessor();
	}

}

瞬间明白了吧,各位看官!

Java 日期、时间小结

概念

时区

整个地球分为二十四时区,每个时区都有自己的本地时间。在国际无线电通信场合,为了统一起见,使用一个统一的时间,称为协调世界时(UTC, Coordinated Universal Time)。UTC与格林尼治平均时(GMT, Greenwich Mean Time)一样,都与英国伦敦的本地时相同。在本文中,UTC与GMT含义完全相同。

北京时区是东八区,领先UTC八个小时,在电子邮件信头的Date域记为+0800。如果在电子邮件的信头中有这么一行:

Date: Fri, 08 Nov 2002 09:42:22 +0800

说明信件的发送地的地方时间是2002年11月8号,星期五,早上九点四十二分(二十二秒),这个地方的本地时领先UTC八个小时(+0800, 就是东八区时间)。电子邮件信头的Date域使用二十四小时的时钟,而不使用AM和PM来标记上下午。
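信头 Date 域的这种格式(RFC 822/1123)可以用 SimpleDateFormat 解析:英文星期/月份缩写需要指定 Locale.ENGLISH,时区偏移由模式中的 Z 处理。下面的示意代码把上面那行信头换算成 UTC 时间(类名 MailDateDemo 为笔者假设):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

/** 示意:解析邮件信头 Date 域(RFC 822 格式),并换算为 UTC 时间 */
public class MailDateDemo {

    public static String toUtc(String header) {
        try {
            // 英文星期/月份缩写必须指定 Locale.ENGLISH;Z 解析 +0800 这类偏移
            SimpleDateFormat in = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z", Locale.ENGLISH);
            Date date = in.parse(header);
            SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            out.setTimeZone(TimeZone.getTimeZone("UTC"));
            return out.format(date);
        } catch (ParseException e) {
            throw new IllegalArgumentException("invalid date header: " + header, e);
        }
    }
}
```

+0800 时区的 09:42:22,换算成 UTC 就是 01:42:22。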

GMT

格林尼治标准时间(Greenwich Mean Time,GMT)是指位于伦敦郊区的皇家格林尼治天文台的标准时间,因为本初子午线被定义在通过那里的经线。 理论上来说,格林尼治标准时间的正午是指当太阳横穿格林尼治子午线时的时间。由于地球在它的椭圆轨道里的运动速度不均匀,这个时刻可能和实际的太阳时相差16分钟。 地球每天的自转是有些不规则的,而且正在缓慢减速。所以,格林尼治时间已经不再被作为标准时间使用。现在的标准时间——协调世界时(UTC)——由原子钟提供。 自1924年2月5日开始,格林尼治天文台每隔一小时会向全世界发放调时信息。而UTC是基于标准的GMT提供的准确时间。

UTC

Coordinated Universal Time - 世界协调时间(又称世界标准时间、世界统一时间),是经过平均太阳时(以格林威治时间GMT为准)、地轴运动修正后的新时标以及以「秒」为单位的国际原子时所综合精算而成的时间,计算过程相当严谨精密,因此若以「世界标准时间」的角度来说,UTC比GMT来得更加精准。其误差值必须保持在0.9秒以内,若大于0.9秒则由位于巴黎的国际地球自转事务中央局发布闰秒,使UTC与地球自转周期一致。

编程实践

1.使用JDK API

1.格式化时间

Date now = new Date();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
System.out.println(sdf.format(now));

2.解析时间

SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
System.out.println(sdf.parse("2018-10-17T10:49:53.757+0800"));
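需要注意,SimpleDateFormat 不是线程安全的,不能作为共享的 static 常量在多线程间复用;常见做法是用 ThreadLocal 为每个线程各持有一份实例(笔者的示意代码,为便于演示固定使用 UTC 时区):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

/** 示意:用 ThreadLocal 规避 SimpleDateFormat 的线程安全问题 */
public class SafeDateFormat {

    private static final ThreadLocal<SimpleDateFormat> SDF = ThreadLocal.withInitial(() -> {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        sdf.setTimeZone(TimeZone.getTimeZone("UTC")); // 仅为演示方便,固定为 UTC
        return sdf;
    });

    public static String format(Date date) {
        return SDF.get().format(date); // 每个线程拿到的是自己的实例
    }
}
```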

2. 使用joda-time

Joda Time, 一个面向 Java™ 平台的易于使用的开源时间/日期库。

1.格式化日期、时间

    @Test
    public void testJodaTime() {
        Date date = new Date();
        System.out.println(format(date, "yyyy-MM-dd HH:mm:ss.SSS"));
    }

    public static String format(Date date, String format) {
        return new DateTime(date).toString(format);
    }

2.解析时间

public void testJodaTime() {
    Date date = parseDate("2018-10-17T02:48:01.000Z", "yyyy-MM-dd'T'HH:mm:ss.SSSZ");
}

public static Date parseDate(String date, String pattern) {
    DateTimeFormatter dateTimeFormatter = DateTimeFormat.forPattern(pattern);
    return DateTime.parse(date, dateTimeFormatter).toDate();
}

Java有用代码片段

json序列化

1.google gson

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.5</version>
</dependency>

2.Gson对象序列化

private static final Gson GSON = new GsonBuilder().setDateFormat("yyyy-MM-dd HH:mm:ss").create();

String jsonString = GSON.toJson(obj);

3.Gson对象反序列化

//1.parse object
Person person = GSON.fromJson(jsonString, Person.class);
//2.parse array
List<Person> list = GSON.fromJson(jsonString, new TypeToken<List<Person>>(){}.getType());

4.Gson JsonSerializer 与 JsonDeserializer

例如 针对Date 序列化与反序列化,如下:

import com.google.gson.*;
import org.apache.commons.lang3.StringUtils;

import java.lang.reflect.Type;
import java.util.Date;

/**
 * @author Ricky Fung
 */
public class JsonUtils {
    private static final Gson GSON = new GsonBuilder()
            .registerTypeAdapter(Date.class, new JsonDateSerializer())
            .create();

    public static String toJson(Object obj) {
        return GSON.toJson(obj);
    }

    public static <T> T parseObject(String json, Class<T> classOfT) {
        return GSON.fromJson(json, classOfT);
    }
    public static <T> T parseObject(String json, Type typeOfT) {
        return GSON.fromJson(json, typeOfT);
    }
}

/**
 * gson Date序列化/反序列化
 * @author Ricky Fung
 */
class JsonDateSerializer implements
        JsonSerializer<Date>,JsonDeserializer<Date> {

    @Override
    public Date deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context) throws JsonParseException {
        if (json==null || json.isJsonNull()) {
            return null;
        }
        String dateStr = json.getAsString();
        if (StringUtils.isEmpty(dateStr)) {
            return null;
        }
        return DateUtils.parseDate(dateStr);   //DateUtils 为项目内的日期工具类
    }

    @Override
    public JsonElement serialize(Date src, Type typeOfSrc, JsonSerializationContext context) {
        if (src==null) {
            return null;
        }
        return new JsonPrimitive(DateUtils.format(src));
    }
}

Cache

1.google guava

Guava: Google Core Libraries for Java.

1.guava

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>23.0</version>
</dependency>

2.quick start

LoadingCache<K, V> caches = CacheBuilder.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<K, V>() {
                    @Override
                    public V load(K key) throws Exception {
                        return generateValueByKey(key);
                    }
                });
try {
    caches.get(key);
} catch (ExecutionException e) {
    e.printStackTrace();
}

2. Caffeine

Caffeine is a high performance, near optimal caching library based on Java 8.

1.caffeine

<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>2.6.2</version>
</dependency>

2.quick start

LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(5, TimeUnit.MINUTES)
    .refreshAfterWrite(1, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));
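如果不方便引入 Guava/Caffeine 这类依赖,JDK 自带的 LinkedHashMap 也能实现一个最简单的 LRU 缓存(只支持按容量淘汰,不支持过期时间;笔者的示意代码):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** 示意:基于 LinkedHashMap 的最简 LRU 缓存(accessOrder=true 表示按访问顺序排列) */
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, true);   // 第三个参数 accessOrder=true
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;  // 超过容量时淘汰最久未访问的条目
    }
}
```

注意这个实现不是线程安全的,多线程环境下需要用 Collections.synchronizedMap 包装或另行加锁。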

获取默认类加载器

摘自Spring框架中的 org.springframework.util.ClassUtils 中更合理的实现方式:

	public static ClassLoader getDefaultClassLoader() {
		ClassLoader cl = null;
		try {
			cl = Thread.currentThread().getContextClassLoader();
		}
		catch (Throwable ex) {
			// Cannot access thread context ClassLoader - falling back...
		}
		if (cl == null) {
			// No thread context class loader -> use class loader of this class.
			cl = ClassUtils.class.getClassLoader();
			if (cl == null) {
				// getClassLoader() returning null indicates the bootstrap ClassLoader
				try {
					cl = ClassLoader.getSystemClassLoader();
				}
				catch (Throwable ex) {
					// Cannot access system ClassLoader - oh well, maybe the caller can live with null...
				}
			}
		}
		return cl;
	}
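这种「先取线程上下文 ClassLoader,再退回当前类的 ClassLoader,最后退回系统 ClassLoader」的顺序,常用于框架中加载 classpath 资源。下面把这段逻辑内联成一个可独立运行的工具类(示意代码,类名 ClassLoaderDemo 为笔者假设):

```java
/** 示意:按 Spring ClassUtils 的思路获取默认 ClassLoader */
public class ClassLoaderDemo {

    public static ClassLoader getDefaultClassLoader() {
        ClassLoader cl = null;
        try {
            cl = Thread.currentThread().getContextClassLoader();
        } catch (Throwable ex) {
            // 无法访问线程上下文 ClassLoader,继续退回
        }
        if (cl == null) {
            // 没有线程上下文 ClassLoader,使用加载当前类的 ClassLoader
            cl = ClassLoaderDemo.class.getClassLoader();
            if (cl == null) {
                // 返回 null 说明由引导类加载器加载,最后尝试系统 ClassLoader
                try {
                    cl = ClassLoader.getSystemClassLoader();
                } catch (Throwable ex) {
                    // 仍取不到,只能返回 null,由调用方兜底
                }
            }
        }
        return cl;
    }
}
```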
