#!/bin/bash
set -o nounset
declare -A dic
# This script backs up logs to OSS.
# Date: 2017/11/12
# Authors: guoyunlong, gongxiaopo
############### Warning #################################
# The migration path on the log server is: /root/yunwei/           #
# The root user on this machine must be given access to the xitong user on the log server. #
# The only thing that needs modifying is the entries: [backup log path]="number of days to retain" #
# This script writes its own backup log to: /app/logs/yunwei/qianyi.log #
# The script can run shortly before or after 5:00 every day; logs are retained for 14 days. It is ver...
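# Given the `declare -A dic` above, the [backup log path]="days retained"
# entries are presumably plain associative-array assignments; the paths and
# retention values below are hypothetical placeholders, not from the original
# script:
dic["/app/logs/tomcat"]="14"   # back up this path, keep 14 days
dic["/app/logs/nginx"]="7"     # back up this path, keep 7 days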
a = [i for i in range(1, 18) if i % 2 == 0]   # for example, keep only the even numbers
In Python 2, range() builds the entire list in memory, which can cause memory problems for large ranges; this was resolved in Python 3, where range() is lazy.
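For example (Python 3; the sizes here are just illustrative):
# range() in Python 3 is lazy, so even a huge range costs almost no memory;
# the comprehension stores only the items that pass the filter.
small = [i for i in range(10**6) if i < 10]
print(small)   # [0, 1, 2, ..., 9]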
d = [(i, j) for i in range(3) for j in range(2)]
is equivalent to:
d = []
for i in range(3):
    for j in range(2):
        d.append((i, j))
I. Introduction to ORM
Mapping relations:
    class name      – table name
    class attribute – table field
    class instance  – table record
The two major functions of an ORM are:
Operating on tables:
- create a table
- modify a table
- delete a table
Operating on data rows:
- CRUD (create, retrieve, update, delete)
The ORM connects to the database through the third-party driver pymysql.
Django cannot create the database for us; we can only create it ourselves and then tell Django about it, so that Django can connect to it.
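A minimal sketch of these mappings (the app, model, and field names are hypothetical):

from django.db import models

class Book(models.Model):                # class -> table (e.g. app01_book)
    title = models.CharField(max_length=64)                      # attribute -> field
    price = models.DecimalField(max_digits=8, decimal_places=2)  # attribute -> field

# one instantiated object <-> one table record:
# Book.objects.create(title="Python", price=99)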
II. Preparation before creating the table.
First, create the database yourself.
Second, configure ...
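The configuration step presumably refers to settings.py; a sketch with placeholder credentials (and, since pymysql is the driver mentioned above, the install_as_MySQLdb() shim in the project's __init__.py):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',            # the database created beforehand
        'USER': 'root',
        'PASSWORD': 'password',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}

# project __init__.py: let pymysql stand in for MySQLdb
import pymysql
pymysql.install_as_MySQLdb()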
JS: get the maximum and minimum values of an array (max, min)
Option 1: sort the array, then take the first and last elements as the minimum and maximum.
Option 2: use the Math functions Math.min() and Math.max(); but when a very long array is passed as arguments (via apply or spread), this can throw "Maximum call stack size exceeded".
Option 3: traverse the array: set a variable a to the first element, compare each element with a and assign the larger one to a each time; when the traversal finishes, a holds the maximum. The minimum works the same way.
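A sketch of options 2 and 3 in plain JavaScript (the array name myArr is a placeholder):

var myArr = [3, 1, 4, 1, 5, 9];
// Math.max/Math.min via apply (or spread) passes every element as a
// function argument, which can blow the call stack on very long arrays:
var max1 = Math.max.apply(null, myArr);
var min1 = Math.min.apply(null, myArr);
// The linear scan is safe for any length:
var max = myArr[0], min = myArr[0];
for (var i = 1; i < myArr.length; i++) {
    if (myArr[i] > max) max = myArr[i];
    if (myArr[i] < min) min = myArr[i];
}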
hadoop fs -mkdir          Create an HDFS directory
hadoop fs -ls             List an HDFS directory
hadoop fs -copyFromLocal  Copy local files to HDFS
hadoop fs -put            Copy files to HDFS with put (add -f to overwrite an existing file)
hadoop fs -cat            Print the contents of a file in HDFS
hadoop fs -copyToLocal    Copy files from HDFS to the local filesystem
hadoop fs -get            Copy files from HDFS to the local filesystem with get
hadoop fs -cp             Copy a file within HDFS
hadoop fs -rm             Delete an HDFS file
hadoop fs -rm -R          Delete an HDFS directory recursively
On the master node, enter the start command:
start-all.sh
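A hypothetical session tying the commands together (assumes a Hadoop 2.x+ client; all paths are placeholders):

hadoop fs -mkdir /user/demo
hadoop fs -put -f localfile.txt /user/demo/
hadoop fs -ls /user/demo
hadoop fs -cat /user/demo/localfile.txt
hadoop fs -get /user/demo/localfile.txt ./copy.txt
hadoop fs -rm /user/demo/localfile.txt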
Write it in the DAO implementation layer or the manager layer:
default List<BbwBarrage> findByEnable(int enable) {
    return this.findAll(new Specification<BbwBarrage>() {
        @Override
        public Predicate toPredicate(Root<BbwBarrage> root, CriteriaQuery<?> query, CriteriaBuilder criteriaBuilder) {
            // build the condition: enable = :enable
            Path<Integer> enableP = root.get("enable");
            Predicate predicate = criteriaBuilder.equal(enableP, enable);
            query.where(predicate);
            return predicate;
        }
    });
}
Then call it directly in the controller layer:
List<T> lists = xxManager.findAll(xx);
Paging query...
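A sketch of paging with Spring Data (the repository name is a placeholder; PageRequest.of and Sort.by are the Spring Data 2.x forms, older versions use new PageRequest(page, size)):

// imports: org.springframework.data.domain.{Page, PageRequest, Pageable, Sort}
Pageable pageable = PageRequest.of(0, 20, Sort.by(Sort.Direction.DESC, "id"));
Page<BbwBarrage> page = bbwBarrageRepository.findAll(pageable);
List<BbwBarrage> rows = page.getContent();   // the current page's records
long total = page.getTotalElements();        // total matching rows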
Starting a MySQL instance fails; check the error log:
## error message
2018-08-31T10:38:36.945081Z 0 [ERROR] InnoDB: The Auto-extending innodb_system data file './ibdata1' is of a different size 768 pages (rounded down to MB) than specified in the .cnf file: initial 1536 pages, max 0 (relevant if non-zero) pages!
Resolution:
With InnoDB's default 16 KB page size there are 64 pages per MB, so the actual file is 768 / 64 = 12 MB, while my.cnf declares an initial size of 24 MB (1536 pages).
In my.cnf, change
innodb_data_file_path = ibdata1:24M:autoextend
to
innodb_data_file_path = ibdata1:12M:autoextend
Then start MySQL again.
#include<iostream>
#include<cstdio>
#include<cstring>
using namespace std;
const int maxn=2e5+7;
const int maxc=57;
int n,k,p,ret=0;
// col: colour of each item, cst: its cost, minn: cheapest cost seen per colour;
// f/before/sum/ans: DP bookkeeping (the rest of the program is truncated below).
int col[maxn],f[maxn],cst[maxn],minn[maxc],before[maxc],sum[maxc],ans[maxc];
int main(){
    // memset fills byte-by-byte, so 0x3f yields 0x3f3f3f3f per int, a safe
    // "infinity"; the original memset(minn,2147483647,...) would set every
    // byte to 0xFF, i.e. every element to -1, breaking the min() updates.
    memset(minn,0x3f,sizeof(minn));
    cin>>n>>k>>p;
    for(int i=1;i<=n;i++) cin>>col[i]>>cst[i];
    for(int i=1;i<=n;i++){
        for(int j=0;j<k;j++)
            minn[j]=min(minn[j],cst[i]);
        if(minn[col[i]]<=p){
            f[i]=sum[col[i]];
            ...