Artificial Stupidity: First Steps

I recently took part in a beginner-friendly ML data competition on the sofasofa community: deciding whether a noisy image depicts a circle or a square. The training set contains \(4000\) grayscale images and the test set \(3550\); every image carries heavy noise. Each image has resolution \(40\times40\), i.e., it is a \(40\times40\) matrix stored as a row vector in train.csv or test.csv. Each row of train.csv and test.csv represents one image, so every row has \(1600\) features.
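As a quick illustration (not from the original post), here is a minimal sketch of loading and reshaping this data; the column names "id" and "y" are assumptions about the actual CSV header:

```python
import pandas as pd

# Load the flattened images; each row holds 1600 pixel values.
# Column names ("id", "y") are assumptions -- adjust to the real header.
train = pd.read_csv("train.csv")
labels = train["y"].values                       # circle/square label (assumed encoding)
pixels = train.drop(columns=["id", "y"]).values  # shape (4000, 1600)

# Restore each row vector to its original 40x40 grid.
images = pixels.reshape(-1, 40, 40)
print(images.shape)  # (4000, 40, 40)
```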
tigertang Mar 05, 2018

Note on probability and statistics

Some definitions. Variance: \[ Var(X)=D(X)=E((X-E(X))^2)=E(X^2)-E^2(X) \] Covariance: \[ Cov(X,Y)=E((X-E(X))(Y-E(Y)))=E(XY)-E(X)E(Y) \] In particular, \[ Cov(X,X)=Var(X) \]
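A quick numerical sanity check of these identities, a sketch using numpy with arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)

# Var(X) = E(X^2) - E(X)^2
print(np.allclose(x.var(), (x**2).mean() - x.mean()**2))
# Cov(X, Y) = E(XY) - E(X)E(Y); bias=True gives the population covariance
cov = (x * y).mean() - x.mean() * y.mean()
print(np.isclose(cov, np.cov(x, y, bias=True)[0, 1]))
# Cov(X, X) = Var(X)
print(np.isclose(np.cov(x, x, bias=True)[0, 1], x.var()))
```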
tigertang Dec 01, 2017

Shenyang Regional Adventure

Team Aftermath sets out! Preface: Going to the Shenyang regional was my idea, and a self-interested one, since a friend I know well studies at Northeastern University, and I had planned to surprise her during the regional. Only after arriving in Shenyang did I find out she was not on the campus hosting the contest, so I gave that up. That is not the point, though. The point is that Tsinghua does not compete at the Shenyang site, so my real calculation was that there would be less on-site pressure in Shenyang, which should improve the gaming experience somewhat (we would still get crushed by the other schools xD).
tigertang Oct 30, 2017

Learn Tarjan the hard way

A preface to the preface: this is an informal write-up adapted from existing material. To avoid unnecessary misunderstandings, the notation used here to describe the algorithm is kept consistent with Tarjan's original paper. Note: some lemmas in this post are proved slightly differently from the original. Conventions and notation: \(G=(\vartheta,\varepsilon)\) is a directed graph; a path \(p:v\overset{*}{\Rightarrow}w\) is a simple directed path from \(v\) to \(w\) (simple meaning every vertex in the sequence is distinct); the spanning tree produced by running DFS on \(G\) is written \(T\); \((v,w)\) being an edge of \(T\) is written \(v\rightarrow w\); a path from \(v\) to \(w\) within \(T\) is written \(v\overset{*}{\rightarrow}w\); \(T_{v}\) is the subtree of \(T\) rooted at \(v\) containing all of \(v\)'s descendants (including \(v\) itself); throughout, every vertex label is that vertex's DFS number.
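To make the notation concrete, here is a minimal sketch of Tarjan's SCC algorithm (not code from the post itself); it is recursive, so it suits small graphs only:

```python
def tarjan_scc(graph):
    """graph: dict mapping vertex -> list of successors."""
    index = {}      # DFS number of each vertex (the vertex label above)
    low = {}        # LOWLINK: smallest DFS number reachable from T_v plus one extra edge
    stack, on_stack, sccs = [], set(), []
    counter = [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:           # tree edge v -> w
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:          # back/cross edge into the current stack
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            dfs(v)
    return sccs

print(tarjan_scc({1: [2], 2: [3], 3: [1, 4], 4: []}))  # [[4], [3, 2, 1]]
```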
tigertang Sep 13, 2017

What Is KMP?

Prelude: prefix[S][l] is the length-\(l\) prefix of string \(S\); suffix[S][l] is the length-\(l\) suffix of \(S\). Kong Yiji: do you know the four ways to write the next array? What does next[i] mean? \(nx[i]=\max(l)\), where \(l\) satisfies \(prefix[S][l]=suffix[prefix[S][i]][l]\) and \(l<i\). So how do we write it? Let me think it over... Or we could just grab a template off the internet; who cares, as long as the problem passes QAQ
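Following that definition literally, here is one possible sketch of building the next array (0-indexed; not necessarily the template the post settles on):

```python
def build_next(s):
    """nx[i] = longest l < i with s[:l] == s[i-l:i], i.e. the longest
    proper border of the length-i prefix of s."""
    n = len(s)
    nx = [0] * (n + 1)   # nx[0] and nx[1] are 0 by convention
    k = 0                # length of the current border of s[:i]
    for i in range(1, n):
        while k > 0 and s[i] != s[k]:
            k = nx[k]    # fall back to the next shorter border
        if s[i] == s[k]:
            k += 1
        nx[i + 1] = k
    return nx

print(build_next("ababc"))  # [0, 0, 0, 1, 2, 0]
```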
tigertang Sep 05, 2017

Number Theory Review

Axioms: \[ \left.\begin{matrix} g\mid x\\ g\mid y\end{matrix}\right\}\Rightarrow \left\{\begin{matrix} g\mid x\pm y\\ g\mid xy \end{matrix}\right. \qquad \left.\begin{matrix} g\mid x\\ h\mid x\\ \gcd(g,h)=1 \end{matrix}\right\}\Rightarrow gh\mid x \]
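A concrete instance of each implication, with numbers chosen arbitrarily: \[ \left.\begin{matrix} 3\mid 15\\ 3\mid 9\end{matrix}\right\}\Rightarrow 3\mid 24,\ 3\mid 6,\ 3\mid 135 \qquad \left.\begin{matrix} 3\mid 45\\ 5\mid 45\\ \gcd(3,5)=1 \end{matrix}\right\}\Rightarrow 15\mid 45 \]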
tigertang Aug 25, 2017

Properties of Entropy

Shannon’s theory. Theorem: \[ H(X,Y)\leq H(X)+H(Y) \] Consider \[ H(X)+H(Y)-H(X,Y)\\ =\sum_{x_{i}}Pr(x_{i})lb(\frac{1}{Pr(x_{i})})+\sum_{y_{j}}Pr(y_{j})lb(\frac{1}{Pr(y_{j})})-H(X,Y)\\ =\sum_{x_{i}}lb(\frac{1}{Pr(x_{i})})\sum_{y_{j}}Pr(x_{i},y_{j})+\sum_{y_{j}}lb(\frac{1}{Pr(y_{j})})\sum_{x_{i}}Pr(x_{i},y_{j})-H(X,Y)\\ =\sum_{x_{i},y_{j}}Pr(x_{i},y_{j})lb(\frac{1}{Pr(x_{i})Pr(y_{j})})-\sum_{x_{i},y_{j}}Pr(x_{i},y_{j})lb(\frac{1}{Pr(x_{i},y_{j})})\\ =\sum_{x_{i},y_{j}}Pr(x_{i},y_{j})lb(\frac{Pr(x_{i},y_{j})}{Pr(x_{i})Pr(y_{j})})\\ =-\sum_{x_{i},y_{j}}Pr(x_{i},y_{j})lb(\frac{Pr(x_{i})Pr(y_{j})}{Pr(x_{i},y_{j})})\\ \]
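A numerical sanity check of the theorem, a sketch with an arbitrarily chosen joint distribution (here \(lb\) is \(\log_2\)):

```python
import numpy as np

# An arbitrary joint distribution Pr(x_i, y_j), chosen only for illustration.
p = np.array([[0.30, 0.10],
              [0.05, 0.55]])

def H(dist):
    """Shannon entropy in bits of a probability vector."""
    dist = dist[dist > 0]
    return -(dist * np.log2(dist)).sum()

Hx, Hy = H(p.sum(axis=1)), H(p.sum(axis=0))   # marginal entropies
Hxy = H(p.ravel())                            # joint entropy
print(Hx + Hy - Hxy >= 0)   # True; equality holds iff X and Y are independent
```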
tigertang Apr 02, 2017

Good Will Hunting

As its Chinese edition title (心灵捕手, roughly "the mind catcher") suggests, Sean not only hunts for the good will as a senior psychology professor but also unties the knots in both Will's heart and his own. After watching the movie for the third time, some fantastic lines and dialogues have lingered in my mind for so long that I keep chewing them over without weariness. Will's marvelous talents in history, mathematics, literature and even organic chemistry certainly cannot be denied. He defeats MIT professors at mathematical proofs without any strenuous effort. He can quote swathes of literature from memory to argue with Harvard students and ridicule them with the line that the education they dropped a hundred and fifty grand on could have been had "for a dollar fifty in late charges at the public library".
tigertang Mar 28, 2017

FFT

The Fast Fourier Transform (FFT) is the umbrella term for efficient algorithms that compute the discrete Fourier transform (DFT) on a computer. In competitive programming, FFT is mainly used to accelerate and optimize convolutions; the classic example is computing the product of two polynomials. Say \[f(x)=a_{0}x^{0}+a_{1}x^{1}+...+a_{n-1}x^{n-1}\] \[g(x)=b_{0}x^{0}+b_{1}x^{1}+...+b_{n-1}x^{n-1}\] and we want \(f(x)g(x)\). Attacking the problem head-on, no amount of optimization seems to get below \(\Theta(n^{2})\): however you arrange the computation, enumerating the pairs \((a_{i},b_{j})\) appears to contain no redundancy at all. Unless we step outside this frame of thinking, we cannot fundamentally lower the time complexity. So let us first look at polynomials from a different angle.
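As a sketch of where this leads, polynomial multiplication via numpy's built-in FFT (assuming modest integer coefficients so floating-point rounding is safe):

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    n = 1
    while n < len(a) + len(b) - 1:
        n <<= 1                      # FFT length: next power of two >= deg(fg)+1
    fa = np.fft.fft(a, n)            # evaluate f at the n-th roots of unity
    fb = np.fft.fft(b, n)            # evaluate g likewise
    c = np.fft.ifft(fa * fb)         # pointwise product, then interpolate back
    return np.rint(c.real)[: len(a) + len(b) - 1].astype(int)

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_multiply([1, 2], [3, 4]))  # [ 3 10  8]
```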
tigertang Feb 20, 2017

Möbius Inversion

Theorem: let \(F(n)\) and \(f(n)\) be two functions defined on the non-negative integers satisfying \(F(n)=\sum_{d|n}f(d)\); then \(f(n)=\sum_{d|n}\mu (d)F(\frac{n}{d})\). First, the \(\mu(d)\) function: \[ \mu(d)=\left\{\begin{matrix}1&d=1\\ (-1)^{k}&d=p_{1}p_{2}...p_{k}\ \text{for distinct primes}\ p_{1},...,p_{k}\\ 0&otherwise \end{matrix}\right. \]
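A small sketch to check the theorem numerically: sieve \(\mu\) up to \(N\), pick the test function \(f(n)=n\) (an arbitrary choice, making \(F(n)\) the divisor sum), and verify the inversion:

```python
def mobius(N):
    """Linear sieve of mu(1..N)."""
    mu = [0] * (N + 1)
    mu[1] = 1
    primes, is_comp = [], [False] * (N + 1)
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0        # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]       # one extra distinct prime flips the sign
    return mu

N = 100
mu = mobius(N)
f = lambda n: n                      # arbitrary test function
F = lambda n: sum(f(d) for d in range(1, n + 1) if n % d == 0)
ok = all(sum(mu[d] * F(n // d) for d in range(1, n + 1) if n % d == 0) == f(n)
         for n in range(1, N + 1))
print(ok)  # True
```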
tigertang Feb 13, 2017