tfdream / backend-snippet
common code snippets for backend development needs.
License: Apache License 2.0
Mapper interface:
public interface StatsDetailMapper {
// batch insert
int insertBatch(@Param("list") List<StatsDetail> list);
}
Mapper XML file:
<insert id="insertBatch">
insert into crm_call_center_zc_stats_detail (`date`, start_time, end_time,
agent_id, agent_name, call_way,
call_id, callee, caller,
call_type, call_direction, call_flag,
call_result, duration, recording_duration,
consult_flag, transfer_flag, quality_status,
handle_status, hidden_flag, call_start_time,
call_end_time, satisfy_level, satisfy_value,
voice_aliyun_url, create_time, update_time
)
values
<foreach collection="list" item="item" index="index" separator=",">
(#{item.date,jdbcType=INTEGER}, #{item.startTime,jdbcType=INTEGER}, #{item.endTime,jdbcType=INTEGER},
#{item.agentId,jdbcType=VARCHAR}, #{item.agentName,jdbcType=VARCHAR}, #{item.callWay,jdbcType=INTEGER},
#{item.callId,jdbcType=VARCHAR}, #{item.callee,jdbcType=VARCHAR}, #{item.caller,jdbcType=VARCHAR},
#{item.callType,jdbcType=INTEGER}, #{item.callDirection,jdbcType=INTEGER}, #{item.callFlag,jdbcType=INTEGER},
#{item.callResult,jdbcType=INTEGER}, #{item.duration,jdbcType=INTEGER}, #{item.recordingDuration,jdbcType=INTEGER},
#{item.consultFlag,jdbcType=INTEGER}, #{item.transferFlag,jdbcType=INTEGER}, #{item.qualityStatus,jdbcType=INTEGER},
#{item.handleStatus,jdbcType=INTEGER}, #{item.hiddenFlag,jdbcType=INTEGER}, #{item.callStartTime,jdbcType=TIMESTAMP},
#{item.callEndTime,jdbcType=TIMESTAMP}, #{item.satisfyLevel,jdbcType=INTEGER}, #{item.satisfyValue,jdbcType=VARCHAR},
#{item.voiceAliyunUrl,jdbcType=VARCHAR}, #{item.createTime,jdbcType=TIMESTAMP}, #{item.updateTime,jdbcType=TIMESTAMP}
)
</foreach>
</insert>
Mapper interface definition:
public interface UserVisitingMapper {
List<UserVisiting> selectByIds(List<Long> ids);
}
XML file:
<select id="selectByIds" resultMap="BaseResultMap" parameterType="java.util.List" >
select
<include refid="Base_Column_List" />
from sys_v2_user_visiting
where id IN
<foreach collection="list" index="index" item="item" open="(" separator="," close=")">
#{item}
</foreach>
</select>
# Spring Boot 1.x
spring.http.multipart.max-file-size=1MB # Max file size. Values can use the suffixes "MB" or "KB" to indicate megabytes or kilobytes, respectively.
spring.http.multipart.max-request-size=10MB # Max request size.
# Spring Boot 2.x (the spring.http.multipart.* keys were renamed to spring.servlet.multipart.*)
spring.servlet.multipart.max-file-size=1MB # Max file size.
spring.servlet.multipart.max-request-size=10MB # Max request size.
The contest runs in 3 rounds: 4.10-4.13, 4.15-4.18, and 4.20-4.22.
In each round, users are ranked by net credits spent; on equal net credits, whoever reached that value first ranks higher. The top 20 users of each round receive a prize.
Net credits spent = total credits spent - total credits refunded for returns.
Note: credits are integers.
Redis's SortedSet is a natural fit for ranking users: use net credits spent as the SortedSet score. To rank earlier achievers first on ties, we also derive a bonus in [0, 1) from the time the value was reached.
The credit system publishes an MQ message whenever a user's credits change, so we simply subscribe to that notification. Message format:
{
"id": 1141603,
"shopId": 5,
"memberId": 503928,
"changeValue": 239,
"changeType": 3,
"operatorType": 2,
"comment": "积分兑换",
"createTime": "2021-04-01 10:49:04",
"beforeChange": 8602,
"afterChange": 8363,
"orderId": 425831,
"orderGoodsIds": "638408"
}
/**
* @author Ricky Fung
*/
@Component
public class RankingRedisManager {
private final Logger LOG = LoggerFactory.getLogger(this.getClass());
@Resource
private StringRedisTemplate stringRedisTemplate;
private DefaultRedisScript<List> rankScript;
public RankingRedisManager() {
this.rankScript = new DefaultRedisScript<>();
this.rankScript.setScriptSource(new ResourceScriptSource(new ClassPathResource("scripts/lua/rank_user_calc.lua")));
this.rankScript.setResultType(List.class);
}
public void updateRankingList(RankingRound round, MemberInfoDTO memberInfoDTO,
RankingCreditOpType opType, int changeCredit, DateTime now) {
Long activityId = round.getActivityId();
Integer seq = round.getSeq();
Integer memberId = memberInfoDTO.getMemberId();
int delta = changeCredit; // credit delta
if (opType == RankingCreditOpType.REFUND) { // refund: subtract instead of add
delta = -changeCredit;
}
String userKey = RankingConstant.getRoundUserKey(activityId, seq, Long.valueOf(memberId));
String rankKey = RankingConstant.getRoundRankKey(activityId, seq);
List<String> keys = Arrays.asList(userKey, rankKey);
//serialize the member info
String member = serialize(memberInfoDTO);
//compute the time weight
double timeWeight = calcTimeWeight(now);
//execute the Lua script
List<Long> result = stringRedisTemplate.execute(this.rankScript, keys,
String.valueOf(delta), String.valueOf(timeWeight),
member, String.valueOf(maxRankNum));
LOG.info("Ranking contest - leaderboard updated, memberId={}, activityId={}, seq={}, opType={}, changeCredit={}, redis result={}",
memberId, activityId, seq, opType, changeCredit, JsonUtils.toJson(result));
}
First, let's look at serializing and deserializing the member info:
@Value("${ranking.serialize.delimiter:#}")
private String delimiter;
//======== serialization / deserialization
private RankingListItem deserialize(String data) {
RankingListItem item = new RankingListItem();
int index = data.indexOf(delimiter);
if (index > 0) {
item.setUserId(Long.valueOf(data.substring(0, index)));
item.setNickname(data.substring(index + delimiter.length()));
}
return item;
}
private String serialize(MemberInfoDTO memberInfoDTO) {
return serialize(memberInfoDTO.getMemberId(), memberInfoDTO.getMobile());
}
private String serialize(Integer memberId, String nickname) {
StringBuilder sb = new StringBuilder(40);
sb.append(memberId).append(delimiter)
.append(nickname);
return sb.toString();
}
Next, the time-bonus algorithm:
@Value("${ranking.epoch.left:2021-01-01}")
private String epochLeft;
@Value("${ranking.epoch.right:2031-01-01}")
private String epochRight;
private double calcTimeWeight(DateTime now) {
//compute the time weight
DateTime left = DateUtils.parseDateTime(epochLeft, DateUtils.DATE_STANDARD_FORMAT);
DateTime right = DateUtils.parseDateTime(epochRight, DateUtils.DATE_STANDARD_FORMAT);
double timeWeight = (now.getMillis() - left.getMillis()) * 1.0d / (right.getMillis() - left.getMillis());
return timeWeight;
}
Note: there is a small trick here. Because the time bonus lies in [0, 1), flooring a SortedSet element's score yields exactly the net credits spent.
Some readers may ask: what if the business value is a decimal, e.g. a user's total investment amount (2 decimal places)? The fix is simple: multiply the amount by 100 to turn it into an integer; when reading the leaderboard, floor the score and divide by 100 to recover the real amount.
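To make the decimal case concrete, here is a minimal sketch of that encode/decode round trip (the ScoreCodec class is hypothetical, not part of the original code):

```java
public class ScoreCodec {
    // Encode an amount with 2 decimal places plus a tie-breaking time weight in [0, 1).
    // Multiplying by 100 turns the amount into an integer, so the fractional
    // time weight never disturbs the amount part of the score.
    public static double encode(double amount, double timeWeight) {
        long cents = Math.round(amount * 100); // 1234.56 -> 123456
        return cents + timeWeight;
    }

    // Decode: floor strips the time weight, dividing by 100 restores the amount.
    public static double decode(double score) {
        return Math.floor(score) / 100.0;
    }
}
```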
Finally, let's look at the Lua script, rank_user_calc.lua:
local user_total_key = KEYS[1]
local ranking_key = KEYS[2]
local delta = tonumber(ARGV[1])
local time_weight = tonumber(ARGV[2])
local member = ARGV[3]
local max_ranking_size = tonumber(ARGV[4])
local total_credit = tonumber(redis.call("INCRBY", user_total_key, delta))
local score = total_credit + time_weight;
if total_credit < 0 then
score = total_credit - time_weight;
end
redis.call("ZADD", ranking_key, score, member)
local rank_size = tonumber(redis.call("ZCARD", ranking_key))
if rank_size == nil then
rank_size = 0
end
local stop = rank_size - max_ranking_size - 1
local delete_cnt = 0
if stop > 0 then
delete_cnt = tonumber(redis.call("ZREMRANGEBYRANK", ranking_key, 0, stop))
end
return { total_credit, rank_size, delete_cnt}
Because a Redis SortedSet is ordered by score ascending by default, the leaderboard must be read with ZREVRANGE key start stop [WITHSCORES] rather than ZRANGE key start stop [WITHSCORES], like so:
public List<RankingListItem> getRankingList(Long activityId, Integer seq) {
return getRankingList(activityId, seq, true);
}
/**
* Get the ranking list
* @param activityId
* @param seq
* @param mask whether to mask the user's nickname
* @return
*/
public List<RankingListItem> getRankingList(Long activityId, Integer seq, boolean mask) {
String key = RankingConstant.getRoundRankKey(activityId, seq);
Set<ZSetOperations.TypedTuple<String>> tupleSet = stringRedisTemplate.opsForZSet().reverseRangeWithScores(key, 0, topRankNum - 1);
if (CollectionUtils.isEmpty(tupleSet)) {
return Collections.emptyList();
}
List<RankingListItem> list = new ArrayList<>(tupleSet.size());
int rank = 1;
for (ZSetOperations.TypedTuple<String> tuple : tupleSet) {
RankingListItem item = deserialize(tuple.getValue());
item.setRank(rank);
item.setTotalCredit(scoreToInt(tuple.getScore()));
if (mask) {
item.setNickname(maskNickname(item.getNickname(), 3, 7));
}
rank++;
list.add(item);
}
return list;
}
private RankingListItem deserialize(String data) {
RankingListItem item = new RankingListItem();
int index = data.indexOf(delimiter);
if (index > 0) {
item.setUserId(Long.valueOf(data.substring(0, index)));
item.setNickname(data.substring(index + delimiter.length()));
}
return item;
}
private Integer scoreToInt(Double score) {
if (score == null) {
return IntegerConstant.ZERO;
}
return score.intValue();
}
List<String> list = new ArrayList<>();
// sort by each element's position in a reference list `address`
list.stream().sorted(
Comparator.comparing(a -> address.indexOf(a))
).forEach(System.out::println);
The comparator can also be written as a plain lambda instead of Comparator.comparing:
list.stream().sorted(
(a, b) -> a.getPriority() - b.getPriority()
).forEach(System.out::println); // ascending by priority
List<Person> peoples = Arrays.asList(
new Person("java", 22),
new Person("js", 35),
new Person("css", 31)
);
Person result1 = peoples.stream()
.filter(p -> "java".equals(p.getName()))
.findAny()
.orElse(null);
System.out.println(result1);
List<String> agentIds = list.stream().map(StatsDetail::getAgentId).collect(Collectors.toList());
list.stream().mapToDouble(User::getHeight).sum();     // sum
list.stream().mapToDouble(User::getHeight).max();     // max (OptionalDouble)
list.stream().mapToDouble(User::getHeight).min();     // min (OptionalDouble)
list.stream().mapToDouble(User::getHeight).average(); // average (OptionalDouble)
List<Integer> idList = orderIdList.stream().distinct().collect(Collectors.toList());
// de-duplicate by name
List<Person> unique = persons.stream().collect(
Collectors.collectingAndThen(
Collectors.toCollection(() -> new TreeSet<>(Comparator.comparing(Person::getName))), ArrayList::new)
);
From a List of Person objects, build a Map from id to name:
Map<String, String> collect = list.stream().collect(Collectors.toMap(p -> p.getId(), p -> p.getName()));
@Configuration
@ConditionalOnClass({OkHttpClient.class, RestTemplate.class})
@EnableConfigurationProperties(LegoRestTemplateProperties.class)
public class RestTemplateAutoConfiguration {
private final Logger LOG = LoggerFactory.getLogger(this.getClass());
@Autowired
private LegoRestTemplateProperties legoRestTemplateProperties;
@Bean
@ConditionalOnMissingBean(OkHttpClient.class)
public OkHttpClient okHttpClient() {
Integer readTimeoutMillis = legoRestTemplateProperties.getReadTimeout();
Integer writeTimeoutMillis = legoRestTemplateProperties.getWriteTimeout();
Integer connectTimeoutMillis = legoRestTemplateProperties.getConnectTimeout();
LOG.info("[Lego framework] initializing OkHttp module, readTimeoutMillis:{}, writeTimeoutMillis:{}, connectTimeoutMillis:{}",
readTimeoutMillis, writeTimeoutMillis, connectTimeoutMillis);
OkHttpClient client = new OkHttpClient.Builder()
.readTimeout(readTimeoutMillis, TimeUnit.MILLISECONDS)
.writeTimeout(writeTimeoutMillis, TimeUnit.MILLISECONDS)
.connectTimeout(connectTimeoutMillis, TimeUnit.MILLISECONDS)
.build();
return client;
}
@Bean
public OkHttp3ClientHttpRequestFactory okHttp3ClientHttpRequestFactory(OkHttpClient okHttpClient) {
OkHttp3ClientHttpRequestFactory factory = new OkHttp3ClientHttpRequestFactory(okHttpClient);
return factory;
}
@DistributedTracing
@Bean
@ConditionalOnMissingBean(RestTemplate.class)
public RestTemplate restTemplate(OkHttp3ClientHttpRequestFactory okHttp3ClientHttpRequestFactory) {
LOG.info("[Lego framework] initializing RestTemplate module");
RestTemplate restTemplate = new RestTemplate(okHttp3ClientHttpRequestFactory);
List<HttpMessageConverter<?>> messageConverters = restTemplate.getMessageConverters();
//1. avoid garbled non-ASCII responses: use UTF-8 for the String converter
messageConverters.set(1, new StringHttpMessageConverter(StandardCharsets.UTF_8));
return restTemplate;
}
}
String reqUrl = sb.toString();
try {
LOG.info("Call center - fetching open-platform token, request URL:{}", reqUrl);
String json = restTemplate.getForObject(reqUrl, String.class);
LOG.info("Call center - fetching open-platform token, request URL:{} response:{}", reqUrl, json);
if (StringUtils.isNotEmpty(json)) {
ZcOpenResult result = JsonUtils.parseObject(json, new TypeToken<ZcOpenResult<ZcOpenTokenInfo>>(){}.getType());
if (result.isSuccess()) {
return ResponseDTO.ok(result.getItem());
}
}
} catch (Exception e) {
LOG.error(String.format("Call center - fetching open-platform token failed, request URL:%s", reqUrl), e);
}
String reqUrl = sb.toString();
LOG.info("Call center - querying agent list, request URL:{}", reqUrl);
try {
//1. get a token
String token = getAccessToken();
//build the request headers
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
headers.add("token", token);
HttpEntity<String> httpEntity = new HttpEntity<>("", headers);
ResponseEntity<String> responseEntity = restTemplate.exchange(reqUrl, HttpMethod.GET, httpEntity, String.class);
if (responseEntity.getStatusCode() != HttpStatus.OK) {
LOG.info("Call center - query agent list, request URL:{} response status:{}", reqUrl, responseEntity.getStatusCode());
return ResponseDTO.systemError();
}
String json = responseEntity.getBody();
LOG.info("Call center - query agent list, request URL:{} response:{}", reqUrl, json);
if (StringUtils.isNotEmpty(json)) {
ZcOpenListResult result = JsonUtils.parseObject(json, new TypeToken<ZcOpenListResult<ZcAgentInfo>>(){}.getType());
if (result.isSuccess()) {
return ResponseDTO.ok(result.getItems());
} else if (result.getCode().equals("900002")) { // token expired or invalid
LOG.info("Call center - query agent list, request URL:{} token:{} expired or missing", reqUrl, token);
deleteCacheToken();
}
}
} catch (Exception e) {
LOG.error(String.format("Call center - query agent list failed, request URL:%s", reqUrl), e);
}
@GetMapping("/check/token")
public LoginInfoDTO checkToken(String checkToken, HttpServletRequest request) {
String url = "http://**.com/api/cas/authenticate";
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.MULTIPART_FORM_DATA);
MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
map.add("checkToken", checkToken);
HttpEntity<MultiValueMap<String, String>> requestBody = new HttpEntity<>(map, headers);
ResponseEntity<LoginInfoDTO> responseEntity = restTemplate.postForEntity(url, requestBody, LoginInfoDTO.class);
return responseEntity.getBody();
}
String reqUrl = sb.toString();
ZcAgentCallHoldReq req = new ZcAgentCallHoldReq();
req.setCompanyId(companyId);
req.setAppId(appId);
req.setCallId(unholdReq.getCallId());
req.setAgentId(agentId);
req.setDirection(unholdReq.getDirection().toString());
req.setCaller(unholdReq.getCallerNumber());
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<ZcAgentCallHoldReq> httpEntity = new HttpEntity<>(req, headers);
LOG.info("Call center - cancel call hold, userId:{}, agentId:{} request URL:{} request body:{}", userId, agentId, reqUrl, JsonUtils.toJson(req));
String json = restTemplate.postForObject(reqUrl, httpEntity, String.class);
LOG.info("Call center - cancel call hold, userId:{}, agentId:{} response:{}", userId, agentId, json);
if (StringUtils.isNotEmpty(json)) {
JsonObject jo = new JsonParser().parse(json).getAsJsonObject();
if (BpConstants.ZC_SUCCESS_CODE.equals(jo.get("retCode").getAsString())) {
return ResponseDTO.ok();
} else {
alertManager.alertAsync(String.format("Call to Zhichi [cancel call hold] API failed, reqUrl:%s", reqUrl), json);
}
return ResponseDTO.invalidParam(jo.get("retMsg").getAsString());
}
/**
* @author Ricky Fung
*/
public abstract class NamingUtils {
/**
* Convert underscore naming to camelCase
* @param field
* @return
*/
public static String mapUnderscoreToCamelCase(String field){
char[] chs = field.toCharArray();
StringBuilder sb = new StringBuilder(chs.length);
boolean prevUnderscore = false;
for(int i=0; i<chs.length; i++) {
if (chs[i] == '_') {
prevUnderscore = true;
continue;
}
sb.append(prevUnderscore ? Character.toUpperCase(chs[i]) : chs[i]);
prevUnderscore = false;
}
return sb.toString();
}
/**
* Convert camelCase naming to underscore naming
* @param field
* @return
*/
public static String mapCamelCaseToUnderscore(String field){
char[] chs = field.toCharArray();
StringBuilder sb = new StringBuilder(chs.length+6);
for(int i=0; i<chs.length; i++){
if(Character.isUpperCase(chs[i])){
sb.append("_");
sb.append(Character.toLowerCase(chs[i]));
} else {
sb.append(chs[i]);
}
}
return sb.toString();
}
}
Implementing JWT-style token authentication and authorization.
JWT (JSON Web Token) is currently the most popular cross-domain authentication solution. A JWT consists of 3 parts: header, payload, and signature.
The signature covers the first two parts: it hashes them with the specified algorithm, ensuring the data cannot be tampered with.
First, a secret is required. The secret is kept only on the server and must never be exposed to users. The signature is then computed with the algorithm named in the header (HMAC SHA256 by default):
HMACSHA256(base64UrlEncode(header) + "." + base64UrlEncode(payload), secret)
Once the signature hash is computed, the three parts (header, payload, signature) are joined with "." into a single string: the complete JWT.
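The formula above can be reproduced with the JDK's built-in javax.crypto; this is an illustrative sketch (the JwtSigner class and the sample header/payload are made up for illustration), not the code used later in this section:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class JwtSigner {
    // Builds base64url(header) + "." + base64url(payload) + "." + base64url(HMAC-SHA256(signingInput, secret))
    public static String sign(String headerJson, String payloadJson, String secret) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));
            return signingInput + "." + enc.encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("HmacSHA256 unavailable", e);
        }
    }
}
```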
It is used like this:
/**
* @author Ricky Fung
*/
@RestController
@RequestMapping("/admin/user")
public class HomeController {
private final Logger LOG = LoggerFactory.getLogger(this.getClass());
@Resource
private HomeService homeService;
@PostMapping("/login")
public ServiceResult doLogin(@RequestParam("mobile") String mobile,
@RequestParam("password") String password) {
try {
return homeService.doLogin(mobile, password, DateTime.now());
} catch (Exception e) {
// do not log the plaintext password
LOG.error("Visitor registration - admin login failed. mobile:{}", mobile, e);
return ServiceResult.systemError("login failed");
}
}
@RequiredAuth
@PostMapping("/logout")
public ServiceResult logout() {
Long userId = SecurityContextHolder.getContext().getUserId();
return homeService.logout(userId);
}
@RequiredAuth
@GetMapping("/user-info")
public ServiceResult userInfo() {
Long userId = SecurityContextHolder.getContext().getUserId();
return homeService.userInfo(userId);
}
}
AuthInterceptor.java
import io.dreamstudio.vr.admin.manager.TokenManager;
import io.dreamstudio.vr.admin.web.vo.AuthTokenInfo;
import io.dreamstudio.vr.commons.auth.SecurityContext;
import io.dreamstudio.vr.commons.auth.SecurityContextHolder;
import io.dreamstudio.vr.commons.auth.annotation.OptionalAuth;
import io.dreamstudio.vr.commons.auth.annotation.RequiredAuth;
import io.dreamstudio.vr.commons.constant.VisitingConstant;
import io.dreamstudio.vr.commons.exceptions.InvalidAuthException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;
import javax.annotation.Resource;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Date;
/**
* @author Ricky Fung
*/
public class AuthInterceptor extends HandlerInterceptorAdapter {
private final Logger LOG = LoggerFactory.getLogger(this.getClass());
@Resource
private TokenManager tokenManager;
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
if (!(handler instanceof HandlerMethod)) {
return true;
}
HandlerMethod handlerMethod = (HandlerMethod) handler;
RequiredAuth requiredAuth = AnnotationUtils.findAnnotation(handlerMethod.getMethod(), RequiredAuth.class);
if (requiredAuth != null) {
return checkAuth(request, handlerMethod.getMethod().getName(), true);
}
OptionalAuth optionalAuth = AnnotationUtils.findAnnotation(handlerMethod.getMethod(), OptionalAuth.class);
if (optionalAuth != null) {
return checkAuth(request, handlerMethod.getMethod().getName(), false);
}
return true;
}
private boolean checkAuth(HttpServletRequest request,
String methodName, boolean required) throws InvalidAuthException {
//set the default (anonymous) context
SecurityContextHolder.setContext(DefaultSecurityContext.NONE);
Date now = new Date();
String tokenVal = request.getHeader("Authorization");
//1. validate the token
AuthTokenInfo tokenInfo = tokenManager.validateToken(tokenVal);
if (tokenInfo==null) {
if (required) {
throw new InvalidAuthException("auth check - token decoded to NULL, token:"+tokenVal);
}
return true;
}
Long userId = tokenInfo.getUserId();
//2. check the token's expiry time
Date tokenExpireTime = tokenInfo.getExpiryTime();
if (tokenExpireTime.before(now)) {
LOG.warn("Visitor registration - admin auth check, token expired, userId:{}, token:{}", userId, tokenVal);
if (required) {
throw new InvalidAuthException("auth check - token expired, token:"+tokenVal);
}
return true;
}
//3. compare with the token stored in Redis (guards against revoked tokens)
String redisToken = tokenManager.getRedisToken(userId);
if (!tokenVal.equals(redisToken)) {
if (required) {
LOG.warn("Visitor registration - admin auth check, token mismatch, userId:{}, token:{}, redisToken:{}", userId, tokenVal, redisToken);
throw new InvalidAuthException("auth check - token mismatch, userId:"+userId);
}
return true;
}
tokenManager.refreshSession(userId, now);
SecurityContextHolder.setContext(new SecurityContext(userId, tokenVal));
return true;
}
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
SecurityContextHolder.clear();
}
}
TokenManager.java
import io.dreamstudio.vr.admin.util.TokenUtils;
import io.dreamstudio.vr.admin.web.vo.AuthTokenInfo;
import io.dreamstudio.vr.commons.constant.RedisConstant;
import io.dreamstudio.vr.commons.exceptions.InvalidAuthException;
import io.dreamstudio.vr.commons.util.DateUtils;
import org.joda.time.DateTime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;
import javax.annotation.Resource;
import java.util.Date;
import java.util.concurrent.TimeUnit;
/**
* @author Ricky Fung
*/
@Component
public class TokenManager {
private final Logger LOG = LoggerFactory.getLogger(this.getClass());
private static final String SECRET = "VIP_2020_FUCKING_XG";
/**
* token expiry time in seconds
*/
private long sessionExpirySeconds = 3600;
/**
* session refresh window in seconds
*/
private int sessionRefreshSeconds = 1800;
@Resource
private StringRedisTemplate stringRedisTemplate;
public String genAndSaveToken(Long userId, DateTime now) {
//compute the token expiry time
Date expiryDate = now.plusDays(2).toDate();
//generate the token
String token = TokenUtils.genToken(userId, expiryDate, SECRET);
String adminLoginKey = RedisConstant.getAdminLoginKey(userId);
//store in Redis with a TTL
stringRedisTemplate.opsForValue().set(adminLoginKey, token, sessionExpirySeconds, TimeUnit.SECONDS);
LOG.info("Visitor registration - admin login, userId:{}, expires at:{}", userId, DateUtils.format(expiryDate));
return token;
}
public void deleteLoginCache(Long accountId) {
String adminLoginKey = RedisConstant.getAdminLoginKey(accountId);
stringRedisTemplate.delete(adminLoginKey);
}
//---------- used by the interceptor
public AuthTokenInfo validateToken(String token) throws InvalidAuthException {
return TokenUtils.validate(token, SECRET);
}
//extend the session
public void refreshSession(Long accountId, Date now) {
String adminLoginKey = RedisConstant.getAdminLoginKey(accountId);
//extend the session lifetime
Date expiryTime = new DateTime(now).plusSeconds(sessionRefreshSeconds).toDate();
stringRedisTemplate.expireAt(adminLoginKey, expiryTime);
}
public String getRedisToken(Long userId) {
String adminLoginKey = RedisConstant.getAdminLoginKey(userId);
return stringRedisTemplate.opsForValue().get(adminLoginKey);
}
}
TokenUtils.java
import io.dreamstudio.vr.admin.web.vo.AuthTokenInfo;
import io.dreamstudio.vr.commons.exceptions.InvalidAuthException;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.StringUtils;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Date;
/**
* Token utilities
* @author Ricky Fung
*/
public abstract class TokenUtils {
/**
* Generate a token
* @param userId
* @param expiryTime
* @param secret
* @return
*/
public static String genToken(Long userId, Date expiryTime, String secret) {
long ttl = expiryTime.getTime();
String sig = genSig(userId, ttl, secret);
String token = String.format("%s#%s#%s", userId, ttl, sig);
return base64Encode(token);
}
/**
* Decode and validate a token
* @param str
* @param secret
* @return
* @throws InvalidAuthException
*/
public static AuthTokenInfo validate(String str, String secret) throws InvalidAuthException {
if (StringUtils.isEmpty(str)) {
return null;
}
String token = base64Decode(str);
String[] arr = token.split("#");
if (arr.length!=3) {
throw new InvalidAuthException("invalid token format");
}
Long userId = Long.valueOf(arr[0]);
long ttl = Long.parseLong(arr[1]);
String sig = genSig(userId, ttl, secret);
if (!sig.equals(arr[2])) {
throw new InvalidAuthException("token signature verification failed");
}
AuthTokenInfo tokenInfo = new AuthTokenInfo();
tokenInfo.setUserId(userId);
tokenInfo.setExpiryTime(new Date(ttl));
return tokenInfo;
}
/**
* MD5 digest
* @param data
* @return
*/
public static String md5Hex(String data) {
return DigestUtils.md5Hex(stringToBytes(data));
}
private static byte[] stringToBytes(String data) {
return data.getBytes(StandardCharsets.UTF_8);
}
private static String bytesToString(byte[] buf) {
return new String(buf, StandardCharsets.UTF_8);
}
//----------
private static String genSig(Long userId, long ttl, String secret) {
String sig = String.format("%s#%s#%s", userId, ttl, secret);
return DigestUtils.sha1Hex(stringToBytes(sig));
}
private static String base64Encode(String data) {
return bytesToString(Base64.getEncoder().encode(stringToBytes(data)));
}
private static String base64Decode(String data) {
return bytesToString(Base64.getDecoder().decode(stringToBytes(data)));
}
}
SecurityContextHolder.java
/**
* @author Ricky Fung
*/
public class SecurityContextHolder {
private static final ThreadLocal<SecurityContext> contextHolder = new ThreadLocal<SecurityContext>();
public static void setContext(SecurityContext context) {
if (context==null) {
throw new NullPointerException("context is NULL");
}
contextHolder.set(context);
}
public static SecurityContext getContext() {
return contextHolder.get();
}
public static void clear() {
contextHolder.remove();
}
}
SecurityContext.java
public interface SecurityContext {
Long getUserId();
String getToken();
long getExpiryDate();
}
DefaultSecurityContext.java
/**
* @author Ricky Fung
*/
public class DefaultSecurityContext implements SecurityContext {
public static final SecurityContext NONE = new DefaultSecurityContext(null, null);
private Long userId;
private String token;
private long expiryDate;
public DefaultSecurityContext(Long userId, String token) {
this.userId = userId;
this.token = token;
}
public DefaultSecurityContext(Long userId, String token, long expiryDate) {
this.userId = userId;
this.token = token;
this.expiryDate = expiryDate;
}
@Override
public Long getUserId() {
return userId;
}
@Override
public String getToken() {
return token;
}
@Override
public long getExpiryDate() {
return expiryDate;
}
}
AuthTokenInfo.java
import java.util.Date;
/**
* @author Ricky Fung
*/
public class AuthTokenInfo {
private Long userId;
private Date expiryTime;
public Long getUserId() {
return userId;
}
public void setUserId(Long userId) {
this.userId = userId;
}
public Date getExpiryTime() {
return expiryTime;
}
public void setExpiryTime(Date expiryTime) {
this.expiryTime = expiryTime;
}
}
RequiredAuth.java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
/**
* Login required
* @author Ricky Fung
*/
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface RequiredAuth {
}
OptionalAuth.java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
/**
* Login optional
* @author Ricky Fung
*/
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface OptionalAuth {
}
In day-to-day development, irregular database table and column naming often slows progress; and when older tables are reused, poor readability and inconsistent column-naming rules make querying and using the data inefficient. It is therefore worth establishing a naming convention for database tables and columns to address these problems.
This document covers naming conventions for databases, tables, and columns, plus SQL coding style; it collects and corrects problems and common mistakes that arise during development, as preparation for future database-related work.
Names are composed of the 26 English letters (case-sensitive) and the digits 0-9 (rarely needed), joined by underscores '_'. Keep names short and unambiguous, and separate words with underscores. Use one database per project, and be cautious about sharing a database across projects.
The database character set must be utf8mb4; in MySQL, utf8mb4 is the real UTF-8 and can store any character (the legacy utf8 charset cannot).
Example:
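A creation statement following these rules might look like this (the database name order_center is made up for illustration):

```sql
-- utf8mb4 is MySQL's real UTF-8; the legacy utf8 (utf8mb3) cannot store
-- 4-byte characters such as emoji
CREATE DATABASE order_center DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```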
Table naming rules:
Naming rules:
① Redundancy
Bad: yy_alllive_video_recomment, yy_alllive_open_close_log
Fix: drop the project prefix "yy_" to shorten the table names.
② Inconsistent naming within the same category, hard to manage
Bad: yy_all_live_category, yy_alllive_comment_user
Fix: unify the rule so both start with "yy_alllive_".
③ Inconsistent name format
Bad: yy_showfriend, yy_user_getpoints, yy_live_program_get
Fix: unify the rule: separate verb and object with underscores, and keep a consistent verb-object order.
Column naming rules:
Good examples:
① Inconsistent capitalization
Bad: user_id, houseID
Fix: apply one rule consistently; change to "user_id", "house_id".
② Inconsistent use of underscores
Bad: username, userid, isfriend, isgood
Fix: use underscores to separate words for readability and manageability; change to "user_name", "user_id", "is_friend", "is_good".
③ Ambiguous column names
Bad: uid, pid
Fix: use full names for readability; change to "user_id", "person_id".
Column type rules:
(1) All keywords must be uppercase: INSERT, UPDATE, DELETE, SELECT and their clauses, IF...ELSE, CASE, DECLARE, and so on.
(2) All functions, and everything in their arguments except user variables, must be uppercase.
(3) Data types used in variable declarations must be lowercase.
Comments may appear in batches; descriptive comments in triggers and stored procedures greatly improve readability and maintainability. This convention recommends:
(1) Comment mainly in English. In practice, SQL with Chinese comments has failed in English-locale environments, so English comments are preferred to avoid such errors in later releases.
(2) Comments should be as detailed and complete as possible. Before creating any data object, describe its function and purpose; explain the meaning of input parameters and, where the value range is fixed, document it too. For variables whose values carry specific meanings (e.g. boolean-like flags), give the meaning of each value.
(3) Comment syntax: single-line and multi-line comments.
Single-line comments start with two hyphens (--); suitable for annotating variables and conditional clauses.
Multi-line comments place the content between /* and */; recommended for documenting a complete operation.
(4) Keep comments concise while still describing things clearly.
(5) Function comments:
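The comment syntax from (3) can be illustrated as follows (the UPDATE statement reuses the es_shop_order table from earlier snippets, purely for illustration):

```sql
-- single-line comment: annotate a variable or condition
/* multi-line comment:
   describe a complete operation, e.g. why this
   batch status update is safe to re-run */
UPDATE es_shop_order SET status = 1 WHERE id IN (1, 2, 3);
```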
$ brew install mysql
$ brew list mysql   # show where mysql is installed
$ cd /usr/local/Cellar/mysql/${mysql.version}/bin
$ mysql.server start
$ brew uninstall mysql
$ sudo rm -rf /usr/local/Cellar/mysql
$ brew cleanup
$ sudo rm -rf /usr/local/var/mysql
First, note that Spring Boot parses JSON with Jackson by default (other JSON frameworks can be configured instead).
As long as no other JSON parser is configured, Spring Boot's annotations and configuration let Jackson do most of the work for us.
Add the following to application.properties:
spring.jackson.date-format=yyyy-MM-dd HH:mm:ss
spring.jackson.time-zone=GMT+8
If an individual entity needs a different pattern, annotate the field directly:
import org.springframework.format.annotation.DateTimeFormat;
import com.fasterxml.jackson.annotation.JsonFormat;
public class UserDTO {
@JsonFormat(timezone = "GMT+8",pattern = "yyyy-MM-dd")
@DateTimeFormat(pattern="yyyy-MM-dd")
private Date createdDate;
}
To omit null-valued properties when serializing with @ResponseBody, add this to application.properties:
spring.jackson.default-property-inclusion=non-null
Or declare @JsonInclude(JsonInclude.Include.NON_NULL) on the class, like so:
import java.io.Serializable;
import com.fasterxml.jackson.annotation.JsonInclude;
@JsonInclude(JsonInclude.Include.NON_NULL) // with Jackson, null properties are skipped during serialization
public class UserDTO implements Serializable {
}
An ID number encodes the holder's registered residence, age, and gender. For the encoding rules of second-generation ID numbers, see the article "第二代身份证号码编排规则" (second-generation ID number encoding rules).
IdCardUtils.java
import io.dreamstudio.vr.commons.enums.GenderEnum;
import org.apache.commons.lang3.StringUtils;
import java.time.LocalDate;
import java.time.Period;
import java.time.format.DateTimeFormatter;
/**
* @author Ricky Fung
*/
public abstract class IdCardUtils {
//old-style (first-generation) ID number: 15 digits
private static final int OLD_ID_NO_LENGTH = 15;
//new-style ID number: 18 digits
private static final int NEW_ID_NO_LENGTH = 18;
/**
* Get gender from an ID number
* @param idNo ID number
* @return gender (1: male, 0: female, -1: unknown)
*/
public static GenderEnum getGender(String idNo) {
if (!isValidIdNo(idNo)) {
return GenderEnum.UNKNOWN;
}
if (idNo.length() == OLD_ID_NO_LENGTH) {
//old-style rule: the 15th digit is odd for male, even for female
int mod = (idNo.charAt(OLD_ID_NO_LENGTH - 1) - '0') % 2;
return GenderEnum.getByType(mod);
}
//new-style rule: the 17th digit is odd for male, even for female
int mod = (idNo.charAt(16) - '0') % 2;
return GenderEnum.getByType(mod);
}
/**
* Get age from an ID number
* @param idNo
* @return
*/
public static int getAge(String idNo) {
if (!isValidIdNo(idNo)) {
return -1;
}
String yearStr;
if (idNo.length() == OLD_ID_NO_LENGTH) {
//old-style rule: digits 7-12 are the birth date, e.g. 670401 = 1967-04-01
StringBuilder sb = new StringBuilder();
sb.append("19").append(idNo.substring(6, 12));
yearStr = sb.toString();
} else {
//new-style: digits 7-14 are the birth date (yyyyMMdd)
yearStr = idNo.substring(6, 14);
}
LocalDate birthDate = LocalDate.parse(yearStr, DateTimeFormatter.ofPattern(DateUtils.DATE_COMPACT_FORMAT));
LocalDate now = LocalDate.now();
Period period = Period.between(birthDate, now);
return period.getYears();
}
public static boolean isValidIdNo(String idNo) {
if (StringUtils.isNotEmpty(idNo) && (idNo.length() == OLD_ID_NO_LENGTH || idNo.length() == NEW_ID_NO_LENGTH)) {
char first = idNo.charAt(0);
if (first > '0' && first < '7') {
return true;
}
}
return false;
}
}
GenderEnum.java
/**
* @author Ricky Fung
*/
public enum GenderEnum {
FEMALE(0, "女"),
MALE(1, "男"),
UNKNOWN(-1, "未知"),
;
private int type;
private String gender;
GenderEnum(int type, String gender) {
this.type = type;
this.gender = gender;
}
public static GenderEnum getByType(int type) {
for (GenderEnum g : GenderEnum.values()) {
if (g.getType() == type) {
return g;
}
}
return null;
}
public int getType() {
return type;
}
public String getGender() {
return gender;
}
}
The first 6 digits of an ID number are the administrative division code of the holder's registered residence.
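Extracting that code is a one-liner; a minimal sketch (the IdRegionUtils class name is made up for illustration):

```java
public class IdRegionUtils {
    // The first 6 digits of a mainland ID number are the GB/T 2260
    // administrative division code of the holder's registered residence.
    public static String getRegionCode(String idNo) {
        return idNo.substring(0, 6);
    }
}
```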
int insertBatch(List<OrderPackage> list);
The corresponding mapper XML:
<insert id="insertBatch" parameterType="java.util.List" keyProperty="id" useGeneratedKeys="true">
insert into es_shop_order_package (city_distribution_type, express_id, express_sn,
is_city_distribution, member_id,
no_express, order_goods_ids, order_id,
remark, send_time, shop_id
)
values
<foreach collection="list" item="item" index="index" separator=",">
(#{item.cityDistributionType,jdbcType=INTEGER}, #{item.expressId,jdbcType=INTEGER}, #{item.expressSn,jdbcType=VARCHAR},
#{item.isCityDistribution,jdbcType=INTEGER}, #{item.memberId,jdbcType=INTEGER},
#{item.noExpress,jdbcType=INTEGER}, #{item.orderGoodsIds,jdbcType=VARCHAR}, #{item.orderId,jdbcType=INTEGER},
#{item.remark,jdbcType=VARCHAR}, #{item.sendTime,jdbcType=TIMESTAMP}, #{item.shopId,jdbcType=INTEGER})
</foreach>
</insert>
List<OrderPackage> selectByIds(@Param("list") List<Integer> ids);
The corresponding mapper XML:
<select id="selectByIds" resultMap="BaseResultMap">
select
<include refid="Base_Column_List" />
from es_shop_order_package
where id IN
<foreach collection="list" index="index" item="item" open="(" separator="," close=")">
#{item}
</foreach>
</select>
<update id="updateByIdBatch" parameterType="list">
update es_shop_order
set status = 1
where id in
<foreach collection="list" item="item" index="index" separator="," open="(" close=")">
#{item.id}
</foreach>
</update>
int deleteByPrimaryKey(List<Long> idList);
The corresponding XML:
<!-- delete by primary key --> <!-- item must match the name used inside the loop, here item="id" -->
<delete id="deleteByPrimaryKey" parameterType="java.util.List">
delete from cms_menu
where id in
<foreach item="id" collection="list" open="(" separator="," close=")">
#{id,jdbcType=BIGINT}
</foreach>
</delete>
Group the elements of a List by a property, e.g. by id, so that elements sharing an id are collected together:
//group the List by id into a Map<Integer, List<Apple>>
Map<Integer, List<Apple>> groupBy = appleList.stream().collect(Collectors.groupingBy(Apple::getId));
System.err.println("groupBy:"+groupBy);
{1=[Apple{id=1, name='苹果1', money=3.25, num=10}, Apple{id=1, name='苹果2', money=1.35, num=20}], 2=[Apple{id=2, name='香蕉', money=2.89, num=30}], 3=[Apple{id=3, name='荔枝', money=9.99, num=40}]}
To get a map with id as key and the Apple object as value:
/**
* List -> Map
* Note: Collectors.toMap throws "Duplicate key ..." when two elements share a key;
* here apple1 and apple2 both have id 1.
* Supply a merge function such as (k1, k2) -> k1 to keep the first value and drop the second.
*/
Map<Integer, Apple> appleMap = appleList.stream().collect(Collectors.toMap(Apple::getId, a -> a,(k1,k2)->k1));
package com.mkyong.java8;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
public class TestListMap {
public static void main(String[] args) {
List<Hosting> list = new ArrayList<>();
list.add(new Hosting(1, "liquidweb.com", 80000));
list.add(new Hosting(2, "linode.com", 90000));
list.add(new Hosting(3, "digitalocean.com", 120000));
list.add(new Hosting(4, "aws.amazon.com", 200000));
list.add(new Hosting(5, "mkyong.com", 1));
// key = id, value = name
Map<Integer, String> result1 = list.stream().collect(
Collectors.toMap(Hosting::getId, Hosting::getName));
System.out.println("Result 1 : " + result1);
// key = name, value = websites
Map<String, Long> result2 = list.stream().collect(
Collectors.toMap(Hosting::getName, Hosting::getWebsites));
System.out.println("Result 2 : " + result2);
// Same with result1, just different syntax
// key = id, value = name
Map<Integer, String> result3 = list.stream().collect(
Collectors.toMap(x -> x.getId(), x -> x.getName()));
System.out.println("Result 3 : " + result3);
}
}
Filter the elements that match a condition out of a collection:
//filter matching entries
List<Apple> filterList = appleList.stream().filter(a -> a.getName().equals("香蕉")).collect(Collectors.toList());
System.err.println("filterList:"+filterList);
[Apple{id=2, name='香蕉', money=2.89, num=30}]
Sum a property over a collection:
//compute the total amount
BigDecimal totalMoney = appleList.stream().map(Apple::getMoney).reduce(BigDecimal.ZERO, BigDecimal::add);
System.err.println("totalMoney:"+totalMoney); //totalMoney:17.48
Collectors.maxBy and Collectors.minBy compute the maximum or minimum of a stream.
Optional<Dish> maxDish = Dish.menu.stream().
collect(Collectors.maxBy(Comparator.comparing(Dish::getCalories)));
maxDish.ifPresent(System.out::println);
Optional<Dish> minDish = Dish.menu.stream().
collect(Collectors.minBy(Comparator.comparing(Dish::getCalories)));
minDish.ifPresent(System.out::println);
1. Add the dependencies:
<properties>
<fastjson.version>1.2.79</fastjson.version>
<jackson.version>2.12.6</jackson.version>
<gson.version>2.8.8</gson.version>
</properties>
<!-- jackson -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>${jackson.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>${jackson.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson.version}</version>
</dependency>
<!-- gson -->
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>${gson.version}</version>
</dependency>
<!-- fastjson -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>${fastjson.version}</version>
</dependency>
Examples:
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.meituan.quickstart.configcenter.entity.User;
import org.junit.jupiter.api.Test;
import java.lang.reflect.Type;
import java.util.List;
public class JacksonTest {
@Test
public void testGeneric() throws JsonProcessingException {
ObjectMapper objectMapper = new ObjectMapper();
// ignore JSON fields that have no matching property on the entity class
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES,false);
String json = "[{\"name\":\"a\",\"password\":\"345\"},{\"name\":\"b\",\"password\":\"123\"}]";
//approach 1: TypeReference
TypeReference<List<User>> reference = new TypeReference<List<User>>(){};
List<User> list = objectMapper.readValue(json, reference);
System.out.println(list);
//approach 2: JavaType built from a Type
Type type = reference.getType();
JavaType jType = objectMapper.getTypeFactory().constructType(type);
List<User> list2 = objectMapper.readValue(json, jType);
System.out.println(list2);
//approach 3: constructCollectionType
JavaType javaType = objectMapper.getTypeFactory().constructCollectionType(List.class, User.class);
List<User> list3 = objectMapper.readValue(json, javaType);
System.out.println(list3);
}
}
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.reflect.TypeToken;
import com.meituan.quickstart.configcenter.entity.User;
import org.junit.jupiter.api.Test;
import java.lang.reflect.Type;
import java.util.List;
public class GsonTest {
private static final Gson GSON = new GsonBuilder()
.disableHtmlEscaping()
.create();
@Test
public void testGeneric() {
String json = "[{\"name\":\"a\",\"password\":\"345\"},{\"name\":\"b\",\"password\":\"123\"}]";
//approach 1: TypeToken
Type typeOfT = new TypeToken<List<User>>(){}.getType();
List<User> list = GSON.fromJson(json, typeOfT);
System.out.println(list);
//approach 2: TypeToken.getParameterized
List<User> list2 = GSON.fromJson(json, TypeToken.getParameterized(List.class, User.class).getType());
System.out.println(list2);
}
}
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;
import com.meituan.quickstart.configcenter.entity.User;
import org.junit.jupiter.api.Test;
import java.lang.reflect.Type;
import java.util.List;
public class FastjsonTest {
@Test
public void testGeneric() {
String json = "[{\"name\":\"a\",\"password\":\"345\"},{\"name\":\"b\",\"password\":\"123\"}]";
//approach 1: parseArray
List<User> list = JSON.parseArray(json, User.class);
System.out.println(list);
//approach 2: TypeReference
List<User> list2 = JSON.parseObject(json, new TypeReference<List<User>>(){});
System.out.println(list2);
//approach 3: Type taken from a TypeReference
TypeReference<List<User>> typeReference = new TypeReference<List<User>>(){};
Type type = typeReference.getType();
List<User> list3 = JSON.parseObject(json, type);
System.out.println(list3);
}
@Test
public void testSimple() {
String json = "{\"name\":\"b\",\"password\":\"123\"}";
User user = JSON.parseObject(json ,User.class);
System.out.println(user);
}
}
Source: How to use MDC with thread pools?: https://stackoverflow.com/questions/6073019/how-to-use-mdc-with-thread-pools
From the official logback manual:
MDC And Managed Threads
A copy of the mapped diagnostic context can not always be inherited by worker threads from the initiating thread. This is the case when java.util.concurrent.Executors is used for thread management. For instance, newCachedThreadPool method creates a ThreadPoolExecutor and like other thread pooling code, it has intricate thread creation logic.
In such cases, it is recommended that MDC.getCopyOfContextMap() is invoked on the original (master) thread before submitting a task to the executor. When the task runs, as its first action, it should invoke MDC.setContextMap() to associate the stored copy of the original MDC values with the new Executor managed thread.
Link: http://logback.qos.ch/manual/mdc.html#managedThreads
MDCWrappers.java
/**
* JBoss, Home of Professional Open Source.
* Copyright 2014-2020 Red Hat, Inc., and individual contributors
* as indicated by the @author tags.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.jboss.pnc.common.concurrent;
import org.slf4j.MDC;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.function.Consumer;
/**
* @author <a href="mailto:[email protected]">Matej Lazar</a>
*/
public class MDCWrappers {
public static Runnable wrap(final Runnable runnable) {
final Map<String, String> context = MDC.getCopyOfContextMap();
return () -> {
Map<String, String> previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
runnable.run();
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
};
}
public static <T> Callable<T> wrap(final Callable<T> callable) {
final Map<String, String> context = MDC.getCopyOfContextMap();
return () -> {
Map<String, String> previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
return callable.call();
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
};
}
public static <T> Consumer<T> wrap(final Consumer<T> consumer) {
final Map<String, String> context = MDC.getCopyOfContextMap();
return (t) -> {
Map<String, String> previous = MDC.getCopyOfContextMap();
if (context == null) {
MDC.clear();
} else {
MDC.setContextMap(context);
}
try {
consumer.accept(t);
} finally {
if (previous == null) {
MDC.clear();
} else {
MDC.setContextMap(previous);
}
}
};
}
public static <T> Collection<Callable<T>> wrapCollection(Collection<? extends Callable<T>> tasks) {
Collection<Callable<T>> wrapped = new ArrayList<>();
for (Callable<T> task : tasks) {
wrapped.add(wrap(task));
}
return wrapped;
}
}
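The capture/restore pattern that MDCWrappers implements can be seen in miniature with a plain ThreadLocal, which runs without slf4j on the classpath (ContextWrappers and TRACE_ID are illustrative names, not part of the class above):

```java
class ContextWrappers {
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // Same shape as MDCWrappers.wrap: snapshot on the submitting thread,
    // install the snapshot on the worker, restore the worker's previous
    // state when the task finishes.
    static Runnable wrap(Runnable task) {
        final String captured = TRACE_ID.get();
        return () -> {
            String previous = TRACE_ID.get();
            TRACE_ID.set(captured);
            try {
                task.run();
            } finally {
                if (previous == null) {
                    TRACE_ID.remove();
                } else {
                    TRACE_ID.set(previous);
                }
            }
        };
    }
}
```

Submitting ContextWrappers.wrap(task) instead of task to an ExecutorService gives the worker thread the submitter's value for the duration of the task.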
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>2.1.3</version>
</dependency>
PageHelper:
<dependency>
<groupId>com.github.pagehelper</groupId>
<artifactId>pagehelper-spring-boot-starter</artifactId>
<version>1.3.0</version>
</dependency>
In practice PageHelper is quick and convenient: the PageInfo and PageHelper classes alone are enough to implement paging. Yet this simplest form of integration often leaves the library underused.
The most common usage looks like this:
public PageInfo<ResponseEntityDto> page(RequestParamDto param) {
PageHelper.startPage(param.getPageNum(), param.getPageSize());
List<ResponseEntityDto> list = mapper.selectManySelective(param);
return new PageInfo<>(list);
}
To a degree the code above does follow PageHelper's contract:
call PageHelper.startPage(pageNum, pageSize) immediately before the list query, with no other SQL executed in between.
A tidier, equally correct way to write it:
public PageInfo<ResponseEntityDto> page(RequestParamDto param) {
return PageHelper.startPage(param.getPageNum(), param.getPageSize())
.doSelectPageInfo(() -> list(param));
}
public List<ResponseEntityDto> list(RequestParamDto param) {
return mapper.selectManySelective(param);
}
CREATE TABLE `user_info` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`name` varchar(255) DEFAULT '' COMMENT '昵称',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
When creating a business table we usually add create_time and update_time columns; without column defaults the application has to set both timestamps before every insert.
With DEFAULT CURRENT_TIMESTAMP, MySQL fills the column with the current system time on insert.
With ON UPDATE CURRENT_TIMESTAMP, MySQL refreshes the column whenever an UPDATE actually changes the row.
Which column types are allowed:
From MySQL 5.5 up to 5.6.4, DEFAULT CURRENT_TIMESTAMP may only be specified on TIMESTAMP columns.
From 5.6.5 onward (including 5.7), it may be specified on TIMESTAMP or DATETIME columns.
For example:
CREATE TABLE `operation_activity` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键',
`title` varchar(128) NOT NULL COMMENT '活动名称',
`type` smallint(4) NOT NULL COMMENT '活动类型 1:大转盘抽奖 2:老虎机抽奖',
`start_time` datetime NOT NULL COMMENT '开始时间',
`end_time` datetime NOT NULL COMMENT '结束时间',
`config` varchar(1600) DEFAULT NULL COMMENT '活动配置信息',
`state` tinyint(2) NOT NULL COMMENT '状态 1:未发布 2:已发布 3:已下线',
`create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
`offline_time` datetime NOT NULL COMMENT '下线时间',
`parent_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '主活动id',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='运营活动配置表'
One way is to set the header and status by hand:
private void sendRedirect(HttpServletResponse response, String targetUrl) {
response.setHeader("Location", targetUrl); //target URL
response.setStatus(HttpServletResponse.SC_FOUND); //status code 302
}
Or simply:
private void sendRedirect(HttpServletResponse response, String targetUrl) {
response.sendRedirect(targetUrl);
}
A sendRedirect redirect cannot share data stored in the HttpServletRequest with the target page; parameters have to be carried on the redirect URL, and only as strings. On the upside, the browser issues the redirected request as a GET by default, which is faster, though it exposes the parameters and is less safe.
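Carrying data across a redirect therefore means appending URL-encoded query parameters to the target URL. A small sketch (the helper name withParam is illustrative; uses the Java 10+ URLEncoder overload):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

class RedirectHelper {
    // Append one query parameter to a redirect target; only strings
    // survive a redirect, and values must be URL-encoded.
    static String withParam(String targetUrl, String name, String value) {
        String sep = targetUrl.contains("?") ? "&" : "?";
        return targetUrl + sep + name + "="
                + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```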
For reference, here is the sendRedirect method of org.apache.catalina.connector.Response, Tomcat's implementation of the HttpServletResponse interface:
/**
* Send a temporary redirect to the specified redirect location URL.
*
* @param location Location URL to redirect to
*
* @exception IllegalStateException if this response has
* already been committed
* @exception IOException if an input/output error occurs
*/
@Override
public void sendRedirect(String location) throws IOException {
sendRedirect(location, SC_FOUND);
}
/**
* Internal method that allows a redirect to be sent with a status other
* than {@link HttpServletResponse#SC_FOUND} (302). No attempt is made to
* validate the status code.
*
* @param location Location URL to redirect to
* @param status HTTP status code that will be sent
* @throws IOException an IO exception occurred
*/
public void sendRedirect(String location, int status) throws IOException {
if (isCommitted()) {
throw new IllegalStateException(sm.getString("coyoteResponse.sendRedirect.ise"));
}
// Ignore any call from an included servlet
if (included) {
return;
}
// Clear any data content that has been buffered
resetBuffer(true);
// Generate a temporary redirect to the specified location
try {
String locationUri;
// Relative redirects require HTTP/1.1
if (getRequest().getCoyoteRequest().getSupportsRelativeRedirects() &&
getContext().getUseRelativeRedirects()) {
locationUri = location;
} else {
locationUri = toAbsolute(location);
}
setStatus(status);
setHeader("Location", locationUri);
if (getContext().getSendRedirectBody()) {
PrintWriter writer = getWriter();
writer.print(sm.getString("coyoteResponse.sendRedirect.note",
Escape.htmlElementContent(locationUri)));
flushBuffer();
}
} catch (IllegalArgumentException e) {
log.warn(sm.getString("response.sendRedirectFail", location), e);
setStatus(SC_NOT_FOUND);
}
// Cause the response to be finished (from the application perspective)
setSuspended(true);
}
On servlet URL mapping: when I request http://localhost:8080/hello, the container strips http://localhost:8080/ and matches the remaining hello against the servlet mappings. Matching follows a fixed order, and as soon as one servlet matches, the remaining servlets are ignored (filters behave differently; see below). The matching rules and order are as follows:
In web.xml, the following syntax defines a mapping:
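For reference, the three mapping styles look like this in web.xml (the servlet names are illustrative):

```xml
<!-- exact match -->
<servlet-mapping>
    <servlet-name>loginServlet</servlet-name>
    <url-pattern>/login</url-pattern>
</servlet-mapping>
<!-- path match -->
<servlet-mapping>
    <servlet-name>apiServlet</servlet-name>
    <url-pattern>/api/*</url-pattern>
</servlet-mapping>
<!-- extension match -->
<servlet-mapping>
    <servlet-name>jspServlet</servlet-name>
    <url-pattern>*.jsp</url-pattern>
</servlet-mapping>
```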
So why is a seemingly normal pattern like "/*.action" wrong? Because it is both a path mapping and an extension mapping, and the container cannot decide between them.
How to set url-pattern for servlets and filters:
1. Exact match:
/directory/file1.jsp
/directory/file2.jsp
/directory/file3.jsp
2. Path (directory) match:
/directory/*
3. Extension match:
*.jsp
The difference between "/" and "/*":
There are only three kinds of matching: path matching (begins with "/" and ends with "/*"), extension matching (begins with "*."), and exact matching. They cannot be combined, and wildcards or regular expressions beyond this do not work.
For example, /*.jsp is illegal.
Also note: /aa/*/bb is an exact match and is legal; the * in the middle does not act as a wildcard.
"/*" is a path mapping and matches every request. Because path matching ranks just below exact matching, "/*" overrides every extension mapping; many 404 errors stem from this, which makes it a particularly troublesome pattern, generally used only as a filter url-pattern.
"/" is a special servlet pattern: at most one mapping may use it, it has the lowest priority and never overrides any other url-pattern; it merely replaces the container's built-in default servlet, and it likewise matches every request.
Greater-than-or-equal in MyBatis XML:
<![CDATA[ >= ]]>
Less-than-or-equal:
<![CDATA[ <= ]]>
For example:
<select id="findUserCreditLog" resultMap="BaseResultMap">
select
<include refid="Base_Column_List" />
from esx_member_credit_log
where user_id = #{userId}
<![CDATA[
AND create_time >= #{startTime}
AND create_time <= #{endTime}
]]>
</select>
When a query has several optional conditions, "where 1=1" is a convenient trick. It is widely criticized: some articles even claim the constant predicate defeats index-based optimization, forcing a full table scan on large tables, and that the pattern invites SQL injection. Modern optimizers generally fold "1=1" away, but the clause is still unnecessary noise, because MyBatis's <where> tag solves the problem cleanly.
The naive version:
select
<include refid="Base_Column_List" />
from student
where 1=1
<if test="name != null and name !=''">
and name like concat('%', #{name}, '%')
</if>
<if test="sex != null">
and sex=#{sex}
</if>
The same query with the <where> tag:
select
<include refid="Base_Column_List" />
from student
<where>
<if test="name != null and name !=''">
and name like concat('%', #{name}, '%')
</if>
<if test="sex != null">
and sex=#{sex}
</if>
</where>
The where tag inserts the WHERE keyword only when at least one if condition produces content, and it also strips a leading "AND" or "OR" from that content.
In that case Tomcat deploys every project under tomcat/webapps (that is, the folders ROOT, project1, project2, and so on).
For example:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false" deployOnStartup="false"></Host>
The request paths are then:
project1: localhost:8080/project1/...
project2: localhost:8080/project2/...
ROOT: localhost:8080/xxx
Each folder name is used directly as the context path; note that ROOT has higher priority and its name is omitted from the path.
In that case appBase is combined with the Context settings: docBase is the name (or absolute path) of your application folder, and path is the URL prefix used to access it.
For example:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false" deployOnStartup="false">
<Context path="" docBase="/data/app" reloadable="false"/>
</Host>
The access path is then localhost:8080/<endpoint URI>.
With an explicit path="xxx":
For example:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false" deployOnStartup="false">
<Context path="mvp" docBase="book" debug="0" reloadable="true"/>
</Host>
The access path is then localhost:8080/mvp/<endpoint URI>.
With multiple Context elements, every one must set its own path so that different projects are reachable under different paths.
For example:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false" deployOnStartup="false">
<Context path="aaa" docBase="/data/app/sa" debug="0" reloadable="true"/>
<Context path="bbb" docBase="/data/app/sb" debug="0" reloadable="true"/>
</Host>
The sa folder is served under localhost:8080/aaa/ and the sb folder under localhost:8080/bbb/.
When autoDeploy and deployOnStartup are both true (autoDeploy usually defaults to true), applications are loaded and deployed automatically.
This section introduces Caffeine, a high-performance Java caching library.
A fundamental difference between a cache and a Map is that a cache evicts stored items.
The eviction policy decides which objects to delete, and when. It directly affects the cache's hit rate, a key metric of any caching library.
Caffeine provides a near-optimal hit rate thanks to its Window TinyLfu eviction policy.
Caffeine offers three population strategies: manual, synchronous loading, and asynchronous loading.
Maven dependency:
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>2.8.6</version>
</dependency>
Cache<String, Object> manualCache = Caffeine.newBuilder()
.expireAfterWrite(10, TimeUnit.MINUTES)
.maximumSize(10_000)
.build();
String key = "name1";
// look up a key, returning null if absent
Object graph = manualCache.getIfPresent(key);
// look up a key; if absent, invoke createExpensiveGraph and cache the returned value.
// If the function returns null, manualCache.get returns null; if it throws, manualCache.get rethrows.
graph = manualCache.get(key, k -> createExpensiveGraph(k));
// put a value into the cache, overwriting any previous value for the key
manualCache.put(key, graph);
// remove one cached entry
manualCache.invalidate(key);
The Cache interface allows explicit control over retrieving, updating and removing entries.
cache.getIfPresent(key) fetches the value for a key; cache.put(key, value) stores a value, overwriting whatever the key held before. Prefer cache.get(key, k -> value): get takes the key plus a mapping function (createExpensiveGraph here), and if the key is absent the function is invoked and its result inserted into the cache and returned. The get call blocks, and even if several threads request the same key concurrently the function runs only once, which avoids racing writes from other threads; this is why get is preferable to getIfPresent.
Synchronous loading retrieves entries on the same principle as the manual strategy above, except the createExpensiveGraph function is supplied when the cache is built.
Look up a value, building it synchronously when it is missing:
LoadingCache<String, Object> loadingCache = Caffeine.newBuilder()
.maximumSize(10_000)
.expireAfterWrite(10, TimeUnit.MINUTES)
.build(key -> createExpensiveGraph(key));
String key = "name1";
Object graph = loadingCache.get(key);
// fetch a group of keys at once, returning a Map
List<String> keys = new ArrayList<>();
keys.add(key);
Map<String, Object> graphs = loadingCache.getAll(keys);
A LoadingCache builds its values with a CacheLoader.
getAll performs bulk lookup. By default getAll calls CacheLoader.load once per missing key; override CacheLoader.loadAll to make getAll more efficient.
AsyncLoadingCache<String, Object> asyncLoadingCache = Caffeine.newBuilder()
.maximumSize(10_000)
.expireAfterWrite(10, TimeUnit.MINUTES)
// Either: Build with a synchronous computation that is wrapped as asynchronous
.buildAsync(key -> createExpensiveGraph(key));
// Or: Build with a asynchronous computation that returns a future
// .buildAsync((key, executor) -> createExpensiveGraphAsync(key, executor));
String key = "name1";
// look up, building the value asynchronously if missing
CompletableFuture<Object> graph = asyncLoadingCache.get(key);
// look up a group of keys, building missing values asynchronously
List<String> keys = new ArrayList<>();
keys.add(key);
CompletableFuture<Map<String, Object>> graphs = asyncLoadingCache.getAll(keys);
// adapt the async cache to a synchronous view
loadingCache = asyncLoadingCache.synchronous();
AsyncLoadingCache extends LoadingCache; values are loaded asynchronously on an Executor and returned as CompletableFutures, following a reactive programming style.
To load synchronously, supply a CacheLoader; to load asynchronously, supply an AsyncCacheLoader that returns a CompletableFuture.
synchronous() returns a LoadingCacheView, which also implements LoadingCache: calling it adapts the asynchronous AsyncLoadingCache into a synchronous LoadingCache.
Asynchronous work runs on ForkJoinPool.commonPool() by default; replace the pool via Caffeine.executor(Executor).
Caffeine offers three families of eviction policies: size-based, time-based and reference-based.
Size-based eviction comes in two flavors: by number of entries and by weight.
// Evict based on the number of entries in the cache
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
.maximumSize(10_000)
.build(key -> createExpensiveGraph(key));
// Evict based on the number of vertices in the cache
// (weight is only used for size accounting, not to decide which entry is evicted)
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
.maximumWeight(10_000)
.weigher((Key key, Graph graph) -> graph.vertices().size())
.build(key -> createExpensiveGraph(key));
Caffeine.maximumSize(long) sets the maximum number of entries; beyond it, entries are evicted with the Window TinyLfu policy.
Alternatively, evict by weight: Caffeine.weigher(Weigher) assigns weights and Caffeine.maximumWeight(long) caps the total weight.
Note: maximumWeight and maximumSize cannot be combined.
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
.expireAfterAccess(5, TimeUnit.MINUTES)
.build(key -> createExpensiveGraph(key));
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
.expireAfterWrite(10, TimeUnit.MINUTES)
.build(key -> createExpensiveGraph(key));
Caffeine provides three timed eviction policies: expireAfterAccess, expireAfterWrite, and a custom policy via expireAfter(Expiry).
Testing timed eviction does not require waiting for real time to pass: specify a time source with the Ticker interface and Caffeine.ticker(Ticker) on the builder instead of relying on the system clock.
FakeTicker ticker = new FakeTicker(); // Guava's testlib
Cache<Key, Graph> cache = Caffeine.newBuilder()
.expireAfterWrite(10, TimeUnit.MINUTES)
.executor(Runnable::run)
.ticker(ticker::read)
.maximumSize(10)
.build();
cache.put(key, graph);
ticker.advance(30, TimeUnit.MINUTES);
assertThat(cache.getIfPresent(key), is(nullValue()));
Cache<Key, Graph> graphs = Caffeine.newBuilder()
.removalListener((Key key, Graph graph, RemovalCause cause) ->
System.out.printf("Key %s was removed (%s)%n", key, cause))
.build();
Register a removal listener with Caffeine.removalListener(RemovalListener) to run an action whenever an entry is removed; the listener receives the key, the value and the RemovalCause (the reason for removal).
Removal listeners execute asynchronously on an Executor, ForkJoinPool.commonPool() by default, overridable via Caffeine.executor(Executor). When the operation must run synchronously with the removal, use a CacheWriter instead; CacheWriter is described below.
Note: any exception thrown by a RemovalListener is logged (via the Logger) and not rethrown.
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
.maximumSize(10_000)
// refresh entries a fixed interval after they were created or last updated
.refreshAfterWrite(1, TimeUnit.MINUTES)
.build(key -> createExpensiveGraph(key));
Refreshing is not the same as eviction. A refresh is triggered with LoadingCache.refresh(key) and performed by CacheLoader.reload: the new value for the key is loaded asynchronously while the old value (if any) keeps being returned, whereas eviction blocks lookups until it completes.
Unlike expireAfterWrite, refreshAfterWrite only makes an entry eligible for refresh; the refresh runs when the entry is next queried. The two can be combined on one cache, so an entry is refreshed only when it both qualifies and is actually read, rather than blindly, and an entry that is never queried again still expires.
Refreshes execute asynchronously on an Executor, ForkJoinPool.commonPool() by default, overridable via Caffeine.executor(Executor).
Exceptions raised while refreshing are logged and not rethrown.
Cache<Key, Graph> graphs = Caffeine.newBuilder()
.maximumSize(10_000)
.recordStats()
.build();
With Caffeine.recordStats() you can turn on statistics collection. Cache.stats() then returns a CacheStats with figures such as the hit rate, the number of evictions, and the average time spent loading new values.
First, the pom.xml dependency:
<dependency>
<groupId>org.apache.velocity</groupId>
<artifactId>velocity</artifactId>
<version>1.7</version>
</dependency>
VelocityEngine ve = new VelocityEngine();
ve.setProperty(RuntimeConstants.RESOURCE_LOADER, "classpath");
ve.setProperty("classpath.resource.loader.class", ClasspathResourceLoader.class.getName());
ve.init();
VelocityContext context = new VelocityContext();
context.put("date", getMyTimestampFunction());
Template t = ve.getTemplate( "templates/email_html_new.vm" );
StringWriter writer = new StringWriter();
t.merge( context, writer );
A VelocityUtils wrapper:
package io.mindflow.agent.util;
import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.runtime.RuntimeConstants;
import org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader;
import java.io.File;
import java.io.FileWriter;
import java.io.StringWriter;
/**
* @author Ricky Fung
*/
public abstract class VelocityUtils {
static VelocityEngine ve;
static {
ve = new VelocityEngine();
ve.setProperty(RuntimeConstants.RESOURCE_LOADER, "classpath");
ve.setProperty("classpath.resource.loader.class", ClasspathResourceLoader.class.getName());
ve.init();
}
public static void generateHtml(String inputVmFilePath, String outputHtmlFilePath,
VelocityContext context) throws Exception {
Template template = ve.getTemplate(inputVmFilePath, "UTF-8");
// try-with-resources closes the writer even when merge throws
try (FileWriter writer = new FileWriter(new File(outputHtmlFilePath))) {
template.merge(context, writer);
}
}
public static String generateHtml(String inputVmFilePath, VelocityContext context) throws Exception {
Template template = ve.getTemplate(inputVmFilePath, "UTF-8");
StringWriter writer = new StringWriter(4096);
template.merge(context, writer);
return writer.toString();
}
}
Usage:
/**
* @author Ricky Fung
*/
public class TemplatePdfTest {
@Test
public void testGetHtml() throws Exception {
List<Map<String, String>> list = new ArrayList<>();
for (int i=0; i<50; i++) {
Map<String, String> map = new HashMap<>(8);
map.put("productName", "爱盈宝21000"+i);
map.put("lockPeriod", String.format("2018年3月至2020年%s月", i+1));
map.put("profitRate", "8.6%");
map.put("dueState", i%2 == 0? "是":"否");
list.add(map);
}
Map map = new HashMap(4);
map.put("list", list);
VelocityContext context = new VelocityContext(map);
//VelocityUtils.generateHtml("templates/report.vm", "my.html", context);
String html = VelocityUtils.generateHtml("templates/report.vm", context);
System.out.println(html);
//
File file = new File(String.format("test_%s.pdf", DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss").format(LocalDateTime.now()))); //colons are not legal in Windows file names
FileOutputStream fos = new FileOutputStream(file);
PdfUtils.createPDF(fos, html);
}
}
Financial figures almost always use java.math.BigDecimal; here is a DecimalUtils wrapper class that makes the arithmetic convenient.
import java.math.BigDecimal;
/**
* @author Ricky Fung
*/
public abstract class DecimalUtils {
private static final int DEFAULT_SCALE = 8;
private static final int ZERO = 0;
//======= primitive types
public static BigDecimal valueOf(double num) {
return new BigDecimal(Double.toString(num));
}
public static BigDecimal valueOf(float num) {
return new BigDecimal(Float.toString(num));
}
public static BigDecimal valueOf(int num) {
return new BigDecimal(Integer.toString(num));
}
public static BigDecimal valueOf(long num) {
return new BigDecimal(Long.toString(num));
}
//======= boxed types
public static BigDecimal valueOf(Double num) {
return new BigDecimal(num.toString());
}
public static BigDecimal valueOf(Float num) {
return new BigDecimal(num.toString());
}
public static BigDecimal valueOf(Integer num) {
return new BigDecimal(num.toString());
}
public static BigDecimal valueOf(Long num) {
return new BigDecimal(num.toString());
}
public static BigDecimal valueOf(String val) {
return new BigDecimal(val);
}
//======= format as string
public static String format(BigDecimal bd) {
return bd.setScale(2, BigDecimal.ROUND_HALF_UP).toString();
}
public static String format(BigDecimal bd, int scale) {
return bd.setScale(scale, BigDecimal.ROUND_HALF_UP).toString();
}
public static String format(BigDecimal bd, int scale, int roundingMode) {
return bd.setScale(scale, roundingMode).toString();
}
//======= set scale
public static BigDecimal setScale(BigDecimal bd, int scale) {
return bd.setScale(scale, BigDecimal.ROUND_HALF_UP);
}
/**
*
* @param bd
* @param scale
* @param roundingMode rounding mode, e.g. BigDecimal.ROUND_HALF_UP
* @return
*/
public static BigDecimal setScale(BigDecimal bd, int scale, int roundingMode) {
return bd.setScale(scale, roundingMode);
}
//======= minimum
public static int min(int num1, int num2) {
return num1 < num2 ? num1 : num2;
}
public static int min(int num1, int num2, int num3) {
int min = num1 < num2 ? num1 : num2;
return min < num3 ? min : num3;
}
public static long min(long num1, long num2) {
return num1 < num2 ? num1 : num2;
}
public static long min(long num1, long num2, long num3) {
long min = num1 < num2 ? num1 : num2;
return min < num3 ? min : num3;
}
public static BigDecimal min(BigDecimal num1, BigDecimal num2) {
return num1.compareTo(num2) < ZERO ? num1 : num2;
}
public static BigDecimal min(BigDecimal num1, BigDecimal num2, BigDecimal num3) {
BigDecimal min = num1.compareTo(num2) < ZERO ? num1 : num2;
return min.compareTo(num3) < ZERO ? min : num3;
}
//======= maximum
public static int max(int num1, int num2) {
return num1 > num2 ? num1 : num2;
}
public static int max(int num1, int num2, int num3) {
int max = num1 > num2 ? num1 : num2;
return max > num3 ? max : num3;
}
public static long max(long num1, long num2) {
return num1 > num2 ? num1 : num2;
}
public static long max(long num1, long num2, long num3) {
long max = num1 > num2 ? num1 : num2;
return max > num3 ? max : num3;
}
public static BigDecimal max(BigDecimal num1, BigDecimal num2) {
return num1.compareTo(num2) > ZERO ? num1 : num2;
}
public static BigDecimal max(BigDecimal num1, BigDecimal num2, BigDecimal num3) {
BigDecimal max = num1.compareTo(num2) > ZERO ? num1 : num2;
return max.compareTo(num3) > ZERO ? max : num3;
}
//======= addition
public static BigDecimal add(int v1, int v2) {
BigDecimal b1 = new BigDecimal(Integer.toString(v1));
BigDecimal b2 = new BigDecimal(Integer.toString(v2));
return b1.add(b2);
}
public static BigDecimal add(long v1, long v2) {
BigDecimal b1 = new BigDecimal(Long.toString(v1));
BigDecimal b2 = new BigDecimal(Long.toString(v2));
return b1.add(b2);
}
public static BigDecimal add(double v1, double v2) {
BigDecimal b1 = new BigDecimal(Double.toString(v1));
BigDecimal b2 = new BigDecimal(Double.toString(v2));
return b1.add(b2);
}
public static BigDecimal add(BigDecimal b1, BigDecimal b2) {
return b1.add(b2);
}
//======= subtraction
public static BigDecimal sub(int v1, int v2) {
BigDecimal b1 = new BigDecimal(Integer.toString(v1));
BigDecimal b2 = new BigDecimal(Integer.toString(v2));
return b1.subtract(b2);
}
public static BigDecimal sub(long v1, long v2) {
BigDecimal b1 = new BigDecimal(Long.toString(v1));
BigDecimal b2 = new BigDecimal(Long.toString(v2));
return b1.subtract(b2);
}
public static BigDecimal sub(double v1, double v2) {
BigDecimal b1 = new BigDecimal(Double.toString(v1));
BigDecimal b2 = new BigDecimal(Double.toString(v2));
return b1.subtract(b2);
}
public static BigDecimal sub(BigDecimal b1, BigDecimal b2) {
return b1.subtract(b2);
}
//======= multiplication
public static BigDecimal mul(int v1, int v2) {
BigDecimal b1 = new BigDecimal(Integer.toString(v1));
BigDecimal b2 = new BigDecimal(Integer.toString(v2));
return b1.multiply(b2);
}
public static BigDecimal mul(long v1, long v2) {
BigDecimal b1 = new BigDecimal(Long.toString(v1));
BigDecimal b2 = new BigDecimal(Long.toString(v2));
return b1.multiply(b2);
}
public static BigDecimal mul(double v1, double v2) {
BigDecimal b1 = new BigDecimal(Double.toString(v1));
BigDecimal b2 = new BigDecimal(Double.toString(v2));
return b1.multiply(b2);
}
public static BigDecimal mul(BigDecimal b1, BigDecimal b2) {
return b1.multiply(b2);
}
//======= division
public static BigDecimal div(int v1, int v2) {
BigDecimal b1 = new BigDecimal(Integer.toString(v1));
BigDecimal b2 = new BigDecimal(Integer.toString(v2));
return b1.divide(b2, DEFAULT_SCALE, BigDecimal.ROUND_HALF_UP);// round half up
}
public static BigDecimal div(long v1, long v2) {
BigDecimal b1 = new BigDecimal(Long.toString(v1));
BigDecimal b2 = new BigDecimal(Long.toString(v2));
return b1.divide(b2, DEFAULT_SCALE, BigDecimal.ROUND_HALF_UP);// round half up, keep two decimal places
}
public static BigDecimal div(double v1, double v2) {
BigDecimal b1 = new BigDecimal(Double.toString(v1));
BigDecimal b2 = new BigDecimal(Double.toString(v2));
return b1.divide(b2, DEFAULT_SCALE, BigDecimal.ROUND_HALF_UP);// round half up
}
public static BigDecimal div(BigDecimal b1, BigDecimal b2) {
return b1.divide(b2, DEFAULT_SCALE, BigDecimal.ROUND_HALF_UP);// round half up
}
/**
* Division with explicit scale and rounding mode.
* @param v1
* @param v2
* @param scale
* @param roundingMode e.g. BigDecimal.ROUND_HALF_UP
* @return
*/
public static BigDecimal div(double v1, double v2, int scale, int roundingMode) {
BigDecimal b1 = new BigDecimal(Double.toString(v1));
BigDecimal b2 = new BigDecimal(Double.toString(v2));
return b1.divide(b2, scale, roundingMode);
}
public static BigDecimal div(BigDecimal b1, BigDecimal b2, int scale) {
return b1.divide(b2, scale, BigDecimal.ROUND_HALF_UP);// round half up
}
public static BigDecimal div(BigDecimal b1, BigDecimal b2, int scale, int roundingMode) {
return b1.divide(b2, scale, roundingMode);
}
}
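The add/sub/mul/div overloads above deliberately route `double` values through `Double.toString(..)` rather than the `BigDecimal(double)` constructor. A minimal standalone sketch (class name is illustrative) of why that matters:

```java
import java.math.BigDecimal;

public class BigDecimalPitfallDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) captures the exact binary value of 0.1,
        // which is not exactly 0.1 in base ten
        BigDecimal fromDouble = new BigDecimal(0.1);

        // new BigDecimal(String) -- what Double.toString(0.1) produces -- is exact
        BigDecimal fromString = new BigDecimal(Double.toString(0.1));

        System.out.println(fromDouble);  // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(fromString);  // 0.1
    }
}
```

This is also why `BigDecimal.valueOf(double)` (which goes through `Double.toString` internally) is generally preferred over the `double` constructor.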
AESUtils.java
import org.apache.commons.codec.binary.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
/**
* AES 加密/解密
* @author Ricky Fung
*/
public abstract class AESUtils {
private static final Charset UTF_8 = StandardCharsets.UTF_8;
private static String ALGORITHM_AES = "AES";
private static final int SECRET_KEY_LENGTH = 16;
//secret key (do not change this value, or previously encrypted data can no longer be decrypted)
private static final String DEFAULT_SECRET_KEY = "DEV_2020_FUCKING_COV";
public static String encrypt(String plainText) {
return encrypt(plainText, DEFAULT_SECRET_KEY);
}
/**
* AES encryption
* @param plainText
* @param secret
* @return
*/
public static String encrypt(String plainText, String secret) {
try {
//1. derive the key
SecretKey secretKey = genSecretKey(secret);
//2. encrypt
Cipher cipher = Cipher.getInstance(ALGORITHM_AES);
cipher.init(Cipher.ENCRYPT_MODE, secretKey);
byte[] data = plainText.getBytes(UTF_8);
byte[] result = cipher.doFinal(data);
return base64EncodeStr(result);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public static String decrypt(String cipherText) {
return decrypt(cipherText, DEFAULT_SECRET_KEY);
}
/**
* AES decryption
* @param cipherText
* @param secret
* @return
*/
public static String decrypt(String cipherText, String secret) {
try {
//1. derive the key
SecretKey secretKey = genSecretKey(secret);
//2. decrypt
Cipher cipher = Cipher.getInstance(ALGORITHM_AES);
cipher.init(Cipher.DECRYPT_MODE, secretKey);
byte[] c = base64Decode(cipherText);
byte[] buf = cipher.doFinal(c);
return new String(buf, UTF_8);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
//---------
private static SecretKey genSecretKey(String myKey) {
try {
byte[] key = myKey.getBytes(UTF_8);
MessageDigest sha = MessageDigest.getInstance("SHA-1");
key = sha.digest(key);
key = Arrays.copyOf(key, SECRET_KEY_LENGTH);
return new SecretKeySpec(key, ALGORITHM_AES);
} catch (NoSuchAlgorithmException e) {
throw new IllegalArgumentException("Unsupported algorithm");
}
}
private static byte[] base64Decode(String cipherText) {
return Base64.decodeBase64(cipherText);
}
//----
private static String base64EncodeStr(byte[] buf) {
byte[] data = Base64.encodeBase64(buf);
return new String(data, UTF_8);
}
private static byte[] base64Encode(byte[] buf) {
return Base64.encodeBase64(buf);
}
}
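A self-contained round-trip sketch of the same scheme, runnable on a bare JDK: it derives the key the same way AESUtils does (SHA-1 digest truncated to 16 bytes) but swaps commons-codec Base64 for `java.util.Base64`. The class name and secret are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.Base64;

public class AesRoundTripDemo {
    // Derive a 16-byte AES key the same way AESUtils does: SHA-1, truncated
    static SecretKeySpec keyOf(String secret) throws Exception {
        byte[] k = MessageDigest.getInstance("SHA-1")
                .digest(secret.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(Arrays.copyOf(k, 16), "AES");
    }

    static String encrypt(String plain, String secret) throws Exception {
        // "AES" alone defaults to AES/ECB/PKCS5Padding in the JDK providers
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.ENCRYPT_MODE, keyOf(secret));
        return Base64.getEncoder()
                .encodeToString(c.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
    }

    static String decrypt(String cipherText, String secret) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.DECRYPT_MODE, keyOf(secret));
        return new String(c.doFinal(Base64.getDecoder().decode(cipherText)),
                StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String cipher = encrypt("hello world", "my-secret");
        System.out.println(decrypt(cipher, "my-secret"));  // hello world
    }
}
```

Note that ECB mode is deterministic (identical plaintext blocks produce identical ciphertext blocks), so for new code an authenticated mode such as AES/GCM is usually a better choice.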
Injecting a String with @Value is straightforward, but how do you inject complex types such as Map or List?
@Value("${config.list.ids: 1,2,3}")
private List<String> idList;
Separate multiple elements with commas.
Alternatively, inject into an array:
@Value("${config.list.ids: 1,2,3}")
private String[] idList;
If you want to specify your own delimiter, you can use a SpEL expression:
@Value("#{'${topic.list}'.split(',')}")
private List<String> topicList;
With the following in the properties file:
topic.list: topic1,topic2,topic3
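The SpEL expression above delegates to `String.split`, which does not trim whitespace around the delimiter. A plain-Java sketch of that behavior (class name is illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class SplitDemo {
    public static void main(String[] args) {
        // What '${topic.list}'.split(',') evaluates to for the config above
        List<String> topics = Arrays.asList("topic1,topic2,topic3".split(","));
        System.out.println(topics);  // [topic1, topic2, topic3]

        // split does not trim: "topic1, topic2" yields " topic2" with a leading space
        System.out.println(Arrays.asList("topic1, topic2".split(",")));
    }
}
```

If the configured values may contain spaces after commas, trim each element after splitting.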
Maps can be injected like this:
@Value("#{${config.maps}}")
private Map<String,String> maps;
With the following in the properties file:
config.maps: "{key1: 'value1', key2: 'value2'}"
Note: the map literal must be wrapped in double quotes, otherwise parsing fails and the value cannot be converted to a Map&lt;String,String&gt;.
Spring does not inject values into static fields. For example:
@Value("${ES.CLUSTER_NAME}")
private static String CLUSTER_NAME;
Here CLUSTER_NAME will be null whenever a method reads it.
The fix:
Spring supports setter injection, so a non-static setter can assign the static field:
private static String CLUSTER_NAME;
@Value("${ES.CLUSTER_NAME}")
public void setClusterName(String clusterName) {
CLUSTER_NAME = clusterName;
}
Place @Value on the setter method, and make sure the setter is not static; the value is then injected.
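Putting the pieces together, the holding class must itself be a Spring-managed bean, otherwise the setter is never invoked. A minimal sketch, assuming the same `ES.CLUSTER_NAME` property; the class name, `@Component` registration, and getter are illustrative additions:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Must be a managed bean (@Component, @Service, ...) so Spring calls the setter
@Component
public class EsConfig {
    private static String CLUSTER_NAME;

    @Value("${ES.CLUSTER_NAME}")
    public void setClusterName(String clusterName) {
        // non-static setter assigns the static field during bean initialization
        CLUSTER_NAME = clusterName;
    }

    public static String getClusterName() {
        return CLUSTER_NAME;
    }
}
```

Keep in mind the static field is only populated after the bean has been initialized, so reading it from code that runs before the application context is ready will still yield null.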