2019-08-08
An Introduction to Score Grubbing (骗分导论)

Read More

2019-08-08
Essay Drafts from Primary and Secondary School

Read More

2019-08-07
A Stateless Hardware-based Transport in Data Centers

Hardware-based transports such as RDMA are becoming prevalent because of their low latency, high throughput, and low CPU overhead. However, current RDMA NICs have limited NIC memory to store per-flow transport state. When the number of flows exceeds memory capacity, the NIC must swap flow state out to host memory over PCIe, leading to performance degradation.

This paper presents a hardware-based transport without per-flow state. At its core, flow state bounces between the two end hosts along with a data packet, analogous to a thread whose state is always in flight. To enable multiple in-flight packets, each thread is assigned a distinct sequence of packets to send. Each thread can fork, throttle, and merge independently, which effectively simulates window-based congestion control. For loss recovery, we design a single epoch-based loss detector shared by all flows, which enables selective retransmission with storage proportional to the number of packets lost in one round trip. When there are more losses than the NIC can handle, the receiver CPU is notified to recover them.
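The core idea can be illustrated with a minimal sketch (all names and structures below are hypothetical illustrations, not the paper's code): per-flow state is never stored at either host; it travels inside the packets themselves, like a thread whose state is always in flight, and the receiver can throttle the carried window to simulate congestion control.

```python
# Toy model of state-in-packet transport: neither endpoint keeps per-flow
# state; the "thread" of connection state bounces with each packet.
from dataclasses import dataclass

@dataclass
class StatePacket:
    seq: int       # next sequence number this "thread" will transmit
    window: int    # simulated congestion window carried with the state
    payload: bytes

def sender_step(pkt: StatePacket) -> StatePacket:
    # The sender holds no per-flow state: it reads the state off the
    # bounced packet, advances it, and ships it back out with data.
    return StatePacket(seq=pkt.seq + 1, window=pkt.window, payload=b"data")

def receiver_bounce(pkt: StatePacket, congested: bool) -> StatePacket:
    # The receiver ACKs by bouncing the state back, optionally throttling
    # the carried window: a window-based congestion control simulation.
    window = max(1, pkt.window // 2) if congested else pkt.window
    return StatePacket(seq=pkt.seq, window=window, payload=b"")

# Two round trips of one in-flight "thread" of state:
p = StatePacket(seq=0, window=4, payload=b"")
p = receiver_bounce(sender_step(p), congested=False)   # seq=1, window=4
p = receiver_bounce(sender_step(p), congested=True)    # seq=2, window=2
```

In the real design, multiple such threads run concurrently over distinct packet sequences; forking and merging threads adjusts the effective window.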

We design and implement RDMA, TCP, and TLS transports without per-flow state in an FPGA prototype. The transports incur low network bandwidth and CPU overhead. Simulations and testbed experiments show that flows share network bandwidth fairly in a multi-bottleneck network, and that our design solves the incast problem even better than DCTCP and DCQCN. With a large number of concurrent flows, the throughput of our stateless hardware-based TLS transport is 100x that of a stateful hardware-based transport and 50x that of a software-based transport.

Read More

2018-11-28
AI 财经社 Interview: The School of Magic at No. 5 Danling Street

(Interview transcript from the AI 财经社 WeChat official account article "The School of Magic at No. 5 Danling Street". Due to copyright, only the parts involving me are excerpted. For the full article, please follow the "AI 财经社" WeChat official account.)

1

Li Bojie is a PhD intern in the joint training program between the lab and USTC. He may be one of the few whose "ties" to the lab date back to childhood. In middle school he was already tinkering with competition math and taking computer classes, and he once asked on the "Kaifu Student Network" forum: "What will the future of computing look like?" He no longer remembers Li Kaifu's reply, but that long-distance exchange left the young boy in front of the screen excited and happy, feeling he had received encouragement from afar.

Later, he looked into the several directors of Microsoft Research Asia and found that they had all been trained at Carnegie Mellon University, in nearly the same field, under closely related advisors. "That gave me an insight: I had to do research in a good environment like Microsoft Research, and the connections built there would also greatly help my future career."

Before coming to the lab, Li Bojie had also considered going abroad, but applying from China to a top computer science school like Carnegie Mellon is extremely hard. Many people fight their way into universities ranked in the world's top 50, but in his view, "outside the top 20, our MSRA does research at a higher level than the schools ranked 20-50."

2

Li Bojie and his advisor Zhang Lintao occasionally clash over the choice of research projects. Sometimes Li Bojie is the one persuaded; sometimes he sticks to his view and, after digging deeper, persuades his advisor instead. Zhang Lintao tells him: "I have a lot of experience, but I don't necessarily understand everything. The detailed breakthroughs have to come from your own thinking; then we discuss and iterate."

3

In conversations with the reporter, you could see their eyes begin to light up the moment the topic turned to what research actually is.

Li Bojie remembers coming well prepared for the technical questions at his lab interview. In college, out of sheer fascination with computers, he had transferred from the math department to computer science, and together with classmates used an innovative technique to give thousands of students on campus better Internet access. But at the end of the interview, his interviewer Zhang Yongguang asked: "What is networking research like?" Li Bojie froze, not knowing how to answer. He had often fiddled with servers at school, so he imagined networking research was grunt work: pulling network cables and configuring IP addresses.

Over five years of internship at the lab, Li Bojie gradually figured some things out. "Research is different from engineering. Research means creating something that doesn't yet exist, and what you build may have no visible use for quite a while. You have to endure the loneliness. What matters is whether you have the attitude of creating knowledge for humanity."

The research process is also full of philosophical reflection. Zhang Lintao's most famous "philosophy" is the "30-year theory". He observed that many technologies, such as the now-hot neural networks, go through a winding 30-year arc: first a seed, then "death", then, after a decade or two in the trough, a revival. Why 30 years?

"Back then the conditions weren't ripe, and the advisors were all burned by the idea, so they told their PhD students it wouldn't work. But 30 years is roughly one generation. By the time those advisors have grown old, newcomers pick the old idea back up, and the opportunity may be exactly right."

Li Bojie is a thick-browed young man who gets excited talking about research, gesturing without realizing it. Midway through, he suddenly stood up, grabbed a black marker, and drew a quadrant chart on a whiteboard, with innovation on the X axis and practicality on the Y axis; clockwise, the quadrants read "Pasteur, Einstein, academic garbage, Edison". This is a "philosophy" once taught at the lab by Huo Qiang. With a serious look he said: "Pasteur studied bacteria and went on to found immunology, both innovative and practical. That is the direction I want to pursue under my advisor's guidance."

The lab has no KPIs; impact is what counts. To do impactful research, researchers must keep interrogating themselves: Is this all there is to the problem? If so, why work on it at all? Where is the limit of this direction? Can we break through it? What is the Big Picture of this work? Can it extend to many other problems? If it can, you may define a new direction, draw more people into your research, and make your work far more influential.

But for interns starting out with no research experience, that goal is still far away. The most direct symptom: first drafts riddled with broken sentences and incoherent logic, like a car crash. After joining the systems group as an intern, Li Bojie's first international paper, from framing the idea and running experiments to the final write-up, was produced almost hand in hand with his advisor Zhang Lintao. "Compared with school, one great thing about MSRA is that a very senior advisor walks you through your first paper," Li Bojie said.

4

Not just Microsoft Research Asia: at this early stage of AI landing in industry, the AI teams of both BAT and startups are likewise sending their scientists and algorithm engineers out to the factory floor, to find enterprises' pain points and AI's points of value through joint exploration.

Driven by this transformation, the lab is changing from within. In Li Bojie's eyes, the lab is no longer an academic institution resembling a retirement home, but more like an Internet company in constant flux. He has watched big names leave the lab while fresh blood from academia, Internet companies, and startups keeps joining. In the machine learning group where Qin Tao works, a researcher who once left to start a company recently returned to the lab to run external project collaborations.

"Whenever someone from the lab leaves to become an executive at a big company, it makes the news. But the lab has also been steadily hiring promising newcomers; that just doesn't cause much of a stir in society," Li Bojie said.

5

Over five years at the lab, Li Bojie feels his horizons have kept broadening. In the first year he just wanted to finish the tasks his advisor assigned and get the code working. After the AI boom arrived, he began thinking about how to improve papers beyond the existing state of the art, and whether his results would actually matter to the industry. This year, he has started thinking from the industry's perspective: which concrete scenarios can the systems he builds be applied to, and which common problems are worth studying.

Two of Li Bojie's papers have been accepted at two top conferences, and one of the results improves system performance by 10x. In his words: if grabbing a Spring Festival train ticket counts as one key-value access, then with his result everyone in China could grab a ticket once per second. Seeing his research deployed on a million machines was unimaginable five years ago. He wants to stay here and keep doing research, and he has just passed the first round of the lab's campus hiring interview.

Read More

2018-04-09
MP-RDMA: Multi-Path Transport for RDMA in Datacenters

RDMA is becoming prevalent because of its low latency, high throughput, and low CPU overhead. However, current RDMA remains a single-path transport that is prone to path failures and falls short of utilizing the rich parallel paths in datacenters. Unlike previous multipath approaches, which mainly focus on TCP, this paper presents a multi-path transport for RDMA, i.e., MP-RDMA, which efficiently utilizes the rich network paths in datacenters.

MP-RDMA employs three novel techniques to address the challenge of RDMA NICs' limited on-chip memory: 1) a multi-path ACK-clocking mechanism that distributes traffic in a congestion-aware manner without incurring per-path state; 2) an out-of-order aware path selection mechanism that controls the level of out-of-order packet delivery, thus minimizing the metadata required to track it; 3) a synchronisation mechanism that ensures in-order memory updates whenever needed.
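The first technique can be roughly illustrated in a few lines (the names and structure below are assumptions for illustration, not MP-RDMA's implementation): each returning ACK releases the next packet onto the path it arrived from, so paths that return ACKs faster are clocked more packets, shifting traffic toward less congested paths without any per-path state at the sender.

```python
# Toy model of congestion-aware multi-path ACK-clocking.
import random
from collections import Counter

def send_on_path(path_id: int) -> dict:
    # Stand-in for placing one packet on the wire via a given network path.
    return {"path": path_id}

def on_ack(ack: dict) -> dict:
    # ACK-clocking: the ACK identifies the path it traversed, and the
    # sender immediately clocks out one new packet on that same path.
    # Less congested paths return ACKs sooner and thus carry more traffic,
    # with no per-path state kept at the sender.
    return send_on_path(ack["path"])

random.seed(0)
# Initial window: spray a few packets across randomly chosen paths.
inflight = [send_on_path(random.randrange(4)) for _ in range(8)]
# Simulate one round of ACKs returning (here, simply echoing each packet).
inflight = [on_ack(pkt) for pkt in inflight]
path_load = Counter(pkt["path"] for pkt in inflight)
```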

With all these techniques, MP-RDMA adds only 66B to each connection's state compared with single-path RDMA. Our evaluation with an FPGA-based prototype demonstrates that, compared with single-path RDMA, MP-RDMA significantly improves robustness under failures (2x~4x higher throughput under a 0.5%~10% link loss ratio) and improves overall network utilization by up to 47%.

Read More

2018-02-16
AI 科技评论: Happy New Year

(Reposted from the AI科技评论 WeChat official account; thanks to Cui Tianyi for the interview)

Read More

2018-02-14
A Valentine's Day Confession: What Do You Like About MSRA?

(Reposted from the 微软研究院AI头条 WeChat official account; thanks to Beibei for the interview)

Read More

2018-01-03
蜗牛说 (Snail Talks): From a Course-Failing Teenager to a Microsoft Fellowship Winner

(Reposted from the USTC Graduate Student Union WeChat official account; thanks to Zhu Yixing and other students for the interview)

If one word had to open his story,
it would be:
tinkering.

Read More

2018-01-01
FTLinux: Transparent and Efficient Fault Tolerance for Distributed Applications

Fault tolerance is critical for distributed applications. Many request-serving and batch-processing frameworks have been proposed to simplify the programming of fault-tolerant distributed systems; they essentially ask programmers to separate state from computation and store the state in a fault-tolerant system. However, many existing applications (e.g., Node.js, Memcached, and the Python frontend of TensorFlow) do not support fault tolerance, and fault-tolerant systems are often slower than their non-fault-tolerant counterparts. In this work, we take up the challenge of achieving transparent and efficient fault tolerance for general distributed applications. Challenges include process migration, deterministic replay, and distributed snapshots.

Read More

2018-01-01
ReactDB: Fast and Real-Time Analytical Database

Analytical database queries are critical for supporting business decisions. Because these queries involve complicated computation over a large corpus of data, their execution typically takes minutes to hours. When information in the database is updated, the user must re-execute the query on the current snapshot of the database, which again takes a long time, and the result reflects a stale snapshot by the time it completes. In this rapidly changing world, business intelligence should react to information updates in real time.

To this end, we design ReactDB, a new database that executes analytical queries fast and reacts to database updates in real time.

ReactDB is reactive in two ways. First, cached analytical queries react to updates in the database. We observe that many analytical queries are repetitive, so we cache the intermediate results of frequent analytical queries. When data is updated, the cached results and ongoing transactions are updated incrementally in real time. This allows cached queries to complete immediately. The user may even subscribe to an analytical query and receive an updated result whenever the database changes.
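The idea of keeping a cached analytical result consistent under updates can be sketched as incremental view maintenance (the class below is an illustrative assumption, not ReactDB's actual API): instead of re-executing the query over the whole table, each insert or delete patches the cached aggregate in O(1).

```python
class CachedAvg:
    """Toy incrementally maintained AVG aggregate over one column."""

    def __init__(self, rows):
        self.total = sum(rows)   # intermediate results kept in the cache
        self.count = len(rows)

    def on_insert(self, value):
        # Patch the cache instead of re-executing the analytical query.
        self.total += value
        self.count += 1

    def on_delete(self, value):
        self.total -= value
        self.count -= 1

    def result(self):
        return self.total / self.count

cache = CachedAvg([10, 20, 30])
cache.on_insert(40)     # O(1) update; no rescan of the base table
cache.on_delete(10)
# cache.result() is now (20 + 30 + 40) / 3 = 30.0
```

A subscriber, as described above, would simply be notified with `cache.result()` after each such patch.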

Second, in ReactDB, the physical data layout and indexes react to the data access pattern. Different queries need different physical data layouts and indexes for efficient access. Traditionally, these are tuned manually by the DBA, which may be suboptimal for certain workloads.

Read More