Performance Profiling and Optimization in Practice with cProfile


Abstract: practice notes on cProfile

[If you are interested in algorithms, mathematics, and computer science, feel free to follow me and read more original articles]
My website: 潮汐朝夕的生活实验室
My WeChat official account: 算法题刷刷
My Zhihu: 潮汐朝夕
My GitHub: FennelDumplings
My LeetCode: FennelDumplings


In the article Python性能分析基础 (Basics of Python Performance Analysis) we covered the fundamentals and methodology of performance analysis; integrating profiling into the development process helps us improve the quality of what we ship. Then, in the article Python性能分析器 — cProfile, we took a closer look at the cProfile profiler itself.

In this article we put cProfile into practice.


§1 cProfile in Practice — Optimizing a Fibonacci Function

Original code

Recall the unoptimized Fibonacci function:

import cProfile

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def fib_seq(n):
    seq = []
    if n > 0:
        seq.extend(fib_seq(n - 1))
    seq.append(fib(n))
    return seq

cProfile.run("fib_seq(30)")

The output is:

         7049218 function calls (96 primitive calls) in 1.364 seconds

Ordered by: standard name

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 1.364 1.364 <string>:1(<module>)
31/1 0.000 0.000 1.364 1.364 fib_test.py:11(fib_seq)
7049123/31 1.364 0.000 1.364 0.044 fib_test.py:3(fib)
1 0.000 0.000 1.364 1.364 {built-in method builtins.exec}
31 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
30 0.000 0.000 0.000 0.000 {method 'extend' of 'list' objects}
  • 7049218 function calls in 1.364 seconds
  • only 96 of them are primitive (non-recursive) calls
  • the fib function at line 3 of fib_test.py shows ncalls 7049123/31, i.e. 7049123 total calls, of which 7049123 - 31 are recursive

Optimization 1: caching return values

Add a decorator to fib that caches previously computed values.

import cProfile

class cached:
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}

    def __call__(self, *args):
        try:
            return self.cache[args]
        except KeyError:
            self.cache[args] = self.fn(*args)
            return self.cache[args]

@cached
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def fib_seq(n):
    seq = []
    if n > 0:
        seq.extend(fib_seq(n - 1))
    seq.append(fib(n))
    return seq

cProfile.run("fib_seq(35)")

The output is:

      215 function calls (127 primitive calls) in 0.000 seconds

Ordered by: standard name

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
31 0.000 0.000 0.000 0.000 fib_test.py:15(fib)
31/1 0.000 0.000 0.000 0.000 fib_test.py:24(fib_seq)
89/31 0.000 0.000 0.000 0.000 fib_test.py:8(__call__)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
31 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
30 0.000 0.000 0.000 0.000 {method 'extend' of 'list' objects}

The number of function calls drops from 7049218 to 215, and the running time from 1.364 seconds to practically zero, even though the argument grew from 30 to 35.
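Since Python 3.2 the standard library ships the same memoization out of the box as functools.lru_cache. A minimal sketch of the same experiment, this time driven through the Profile object API rather than cProfile.run:

```python
import cProfile
import pstats
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache; plays the role of the hand-written cached class
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_seq(n):
    return [fib(i) for i in range(n + 1)]

profiler = cProfile.Profile()
profiler.enable()
seq = fib_seq(35)
profiler.disable()

# Print the five most expensive entries by cumulative time
pstats.Stats(profiler).strip_dirs().sort_stats("cumulative").print_stats(5)
```

With maxsize=None the cache is an unbounded dict; in Python 3.9+ functools.cache is a shorthand for exactly this.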

Optimization 2: from recursion to iteration

To test the effect of batching several calls into one profiled run against fib(1000), we first change fib from recursion to iteration: the recursive version of fib(1000) would exceed Python's recursion limit.
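The recursion-limit claim is easy to check: the leftmost chain of fib(n - 1) calls needs about n stack frames, and CPython's default limit is 1000. A small sketch (the limit is pinned explicitly so the outcome does not depend on the environment):

```python
import sys

sys.setrecursionlimit(1000)  # pin CPython's default limit for reproducibility

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

try:
    fib(1050)  # needs ~1050 nested frames before any call can return
except RecursionError as exc:
    print("RecursionError:", exc)
```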

The iterative implementation of fib:

import cProfile

def fib(n):
    a, b = 0, 1
    for i in range(0, n):
        a, b = b, a + b
    return a

def fib_seq(n):
    seq = []
    for i in range(0, n + 1):
        seq.append(fib(i))
    return seq

cProfile.run("fib_seq(1000)")

With the iterative fib in place, we now time five runs of fib_seq(1000):

import cProfile
import pstats

from fib2 import fib, fib_seq

profiler = cProfile.Profile()
profiler.enable()
for i in range(5):
    fib_seq(1000)
profiler.create_stats()
stats = pstats.Stats(profiler)
stats.strip_dirs().sort_stats("cumulative").print_stats()
stats.print_callers()

Output:

         10017 function calls in 0.101 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
5 0.001 0.000 0.101 0.020 fib2.py:9(fib_seq)
5005 0.099 0.000 0.099 0.000 fib2.py:3(fib)
5005 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}


Ordered by: cumulative time

Function was called by...
ncalls tottime cumtime
fib2.py:9(fib_seq) <-
fib2.py:3(fib) <- 5005 0.099 0.099 fib2.py:9(fib_seq)
{method 'append' of 'list' objects} <- 5005 0.000 0.000 fib2.py:9(fib_seq)
cProfile.py:50(create_stats) <-
{method 'disable' of '_lsprof.Profiler' objects} <- 1 0.000 0.000 cProfile.py:50(create_stats)

Computing 1000 Fibonacci numbers 5 times took 0.101 seconds.
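The Profile object's stats do not have to be printed on the spot: dump_stats saves them to disk and pstats.Stats can load them back later, which is handy for comparing runs. A minimal sketch (the file name fib.prof is an arbitrary choice):

```python
import cProfile
import pstats

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

profiler = cProfile.Profile()
profiler.enable()
for _ in range(5):
    [fib(i) for i in range(1001)]
profiler.disable()

profiler.dump_stats("fib.prof")      # persist the raw stats
stats = pstats.Stats("fib.prof")     # reload them, possibly in a later session
stats.strip_dirs().sort_stats("tottime").print_stats(3)
```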

On top of this, adding the return-value cache from optimization 1, so that fib(n) results are cached across the five runs, improves things further.

import cProfile

class cached:
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}

    def __call__(self, *args):
        try:
            return self.cache[args]
        except KeyError:
            self.cache[args] = self.fn(*args)
            return self.cache[args]

@cached
def fib(n):
    a, b = 0, 1
    for i in range(0, n):
        a, b = b, a + b
    return a

def fib_seq(n):
    seq = []
    for i in range(0, n + 1):
        seq.append(fib(i))
    return seq

Output:

         11018 function calls in 0.034 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
5 0.002 0.000 0.034 0.007 fib2.py:23(fib_seq)
5005 0.001 0.000 0.032 0.000 fib2.py:8(__call__)
1001 0.030 0.000 0.030 0.000 fib2.py:16(fib)
5005 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}


Ordered by: cumulative time

Function was called by...
ncalls tottime cumtime
fib2.py:23(fib_seq) <-
fib2.py:8(__call__) <- 5005 0.001 0.032 fib2.py:23(fib_seq)
fib2.py:16(fib) <- 1001 0.030 0.030 fib2.py:8(__call__)
{method 'append' of 'list' objects} <- 5005 0.000 0.000 fib2.py:23(fib_seq)
cProfile.py:50(create_stats) <-
{method 'disable' of '_lsprof.Profiler' objects} <- 1 0.000 0.000 cProfile.py:50(create_stats)

§2 cProfile in Practice — CSV Statistics

Using a public Twitter dataset, we compute the following:

  • the fraction of rows whose first-column label is "0"
  • the fraction of tweets posted in June
  • the fraction of tweets that mention another user with @

We focus on parsing the CSV file and doing some basic arithmetic. We deliberately use no third-party modules, so that we have full control over the code being profiled.

First, a look at the data, the ./files/tweets.csv referenced in the code below: the file is 228 MB and has 1599710 lines.

"0","1686133317","Sun May 03 03:56:12 PDT 2009","NO_QUERY","bgubbles","just beat peter in bowlin... but then he won "
"0","1686133440","Sun May 03 03:56:15 PDT 2009","NO_QUERY","Brook_K220","Off to work again "
"0","1686133767","Sun May 03 03:56:20 PDT 2009","NO_QUERY","lewishudson01","@LucasCruikshank I feel sorry for you "
"0","1686133923","Sun May 03 03:56:24 PDT 2009","NO_QUERY","CartiTarti","Trust my dad to divert the traffic. He always has to get involved http://twitpic.com/4h1sn"
"0","1686134211","Sun May 03 03:56:30 PDT 2009","NO_QUERY","vonnavon314","...one minor over look caused me to miss my mother _ at least I spoke to her & can go back to sleep!!! : D"
"0","1686134298","Sun May 03 03:56:32 PDT 2009","NO_QUERY","glenna_boo","life got me restless. i guess its all about the mistakes. even the HUGE ones. "
"0","1686134469","Sun May 03 03:56:36 PDT 2009","NO_QUERY","clarebailey","is poorly sick "
"0","1686134561","Sun May 03 03:56:38 PDT 2009","NO_QUERY","kelchua","Temperatures to be taken twice daily and exams no longer held in the hall.. Back to the stuffy classrooms everyone.. "
"0","1686134662","Sun May 03 03:56:40 PDT 2009","NO_QUERY","AlteriaMotive","The suns gone and looks like rain "
"0","1686134812","Sun May 03 03:56:43 PDT 2009","NO_QUERY","kirwoodd","@nwjerseyliz good luck with that. I had to switch to satellite radio to find something good. "

Given this data, answering the three questions above means recording:

the number of rows whose first column is 0
the number of rows whose third column contains the month token Jun
the number of rows whose sixth column contains the @ character
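For reference, the standard library's csv module would handle the quoting for us; the hand-rolled parser below exists precisely so that we control every line being profiled. A sketch of the same three counts over two illustrative rows (adapted from the sample above, with the second row's date changed to June for the example):

```python
import csv
import io

# Two illustrative rows in the same format as ./files/tweets.csv
sample = io.StringIO(
    '"0","1686133317","Sun May 03 03:56:12 PDT 2009","NO_QUERY","bgubbles","just beat peter"\n'
    '"0","1686133767","Sun Jun 07 03:56:20 PDT 2009","NO_QUERY","lewishudson01","@LucasCruikshank hi"\n'
)

counts = {"0": 0, "June": 0, "with_at": 0, "total": 0}
for row in csv.reader(sample):   # csv.reader strips the surrounding quotes for us
    counts["total"] += 1
    if row[0] == "0":
        counts["0"] += 1
    if "Jun" in row[2]:
        counts["June"] += 1
    if "@" in row[5]:
        counts["with_at"] += 1

print(counts)
```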

Original code

def build_twit_stats():
    stats_file = './files/tweets.csv'
    state = {
        '0': 0,
        'with_at': 0,
        'June': 0,
        'lines_parts': [],
        'total': 0
    }
    read_data(state, stats_file)
    get_stats(state)
    print_results(state)

def get_percentage(n, total):
    return (n * 100) / total

def get_line_part(line):
    line_parts = line.strip().split("\",\"")
    return line_parts

def read_data(state, source):
    f = open(source, 'r')

    # Split rows on "\n"; this also strips each row's trailing " and the
    # next row's leading ", so fields within a row can then be split on ",".
    # The first row's leading " and the last row's trailing " must be
    # handled separately.
    lines = f.read().strip().split("\"\n\"")
    lines[0] = lines[0][1:]
    lines[-1] = lines[-1][:-1]

    for line in lines:
        state['lines_parts'].append(get_line_part(line))
    state['total'] = len(lines)

def inc_stat(state, st):
    state[st] += 1

def get_stats(state):
    for i in state['lines_parts']:
        if i[0] == "0":
            inc_stat(state, '0')
        if i[5].find('@') > -1:
            inc_stat(state, 'with_at')
        if i[2].find('Jun') > -1:
            inc_stat(state, 'June')

def print_results(state):
    print("-------- My twitter stats -------------")
    print("{}% of tweets({}) which first col are 0".format(get_percentage(state['0'], state['total']), state['0']))
    print("{}% of tweets({}) have @".format(get_percentage(state['with_at'], state['total']), state['with_at']))
    print("{}% of tweets({}) were made in June".format(get_percentage(state['June'], state['total']), state['June']))


import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()

build_twit_stats()

profiler.create_stats()
stats = pstats.Stats(profiler)
stats.strip_dirs().sort_stats('cumulative').print_stats()

The printed statistics:

-------- My twitter stats -------------
49.99093585712411% of tweets(799710) which first col are 0
46.68227366210125% of tweets(746781) have @
57.73596464359165% of tweets(923608) were made in June

The cProfile output:

       12068386 function calls in 5.449 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.062 0.062 5.449 5.449 csv1.py:1(build_twit_stats)
1 0.355 0.355 4.221 4.221 csv1.py:21(read_data)
1599710 0.387 0.000 3.137 0.000 csv1.py:17(get_line_part)
1599711 2.872 0.000 2.872 0.000 {method 'split' of 'str' objects}
1 0.667 0.667 1.167 1.167 csv1.py:38(get_stats)
3199420 0.285 0.000 0.285 0.000 {method 'find' of 'str' objects}
1599711 0.284 0.000 0.284 0.000 {method 'strip' of 'str' objects}
1 0.103 0.103 0.236 0.236 {method 'read' of '_io.TextIOWrapper' objects}
2470099 0.215 0.000 0.215 0.000 csv1.py:35(inc_stat)
1 0.000 0.000 0.133 0.133 codecs.py:318(decode)
1 0.133 0.133 0.133 0.133 {built-in method _codecs.utf_8_decode}
1599710 0.086 0.000 0.086 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 csv1.py:47(print_results)
4 0.000 0.000 0.000 0.000 {built-in method builtins.print}
3 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 0.000 0.000 {built-in method io.open}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding)
3 0.000 0.000 0.000 0.000 csv1.py:14(get_percentage)
1 0.000 0.000 0.000 0.000 codecs.py:308(__init__)
1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo}
1 0.000 0.000 0.000 0.000 codecs.py:259(__init__)
1 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}

Three things to note in this output:

(1) the total execution time of the program
(2) the cumulative time spent in each function
(3) the number of calls made to each function

  • build_twit_stats consumes the most time, but since it only calls other functions, we should look at the second most expensive function
  • the second most expensive function is read_data; in other words, the bottleneck is not computing the statistics but reading the file
  • the third is get_line_part and the fourth is split, and most of get_line_part's time is contributed by split. The bottleneck inside read_data is clear: we issue far too many split calls, and their cost adds up
  • the fifth is get_stats

With these observations in mind, let's look for improvements.

Improvement 1: process the file line by line

The original code first loads all the data into memory and then traverses it to compute the statistics. We can instead read the file line by line and process each line as it is read.

def read_data(state, source):
    f = open(source, 'r')

    for line in f:
        state['lines_parts'].append(get_line_part(line))
    state['total'] = len(state['lines_parts'])
       12126719 function calls in 5.030 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 5.030 5.030 csv2.py:1(build_twit_stats)
1 0.695 0.695 3.850 3.850 csv2.py:23(read_data)
1599710 0.767 0.000 3.002 0.000 csv2.py:17(get_line_part)
1599710 2.047 0.000 2.047 0.000 {method 'split' of 'str' objects}
1 0.669 0.669 1.179 1.179 csv2.py:33(get_stats)
3199420 0.287 0.000 0.287 0.000 {method 'find' of 'str' objects}
2470099 0.223 0.000 0.223 0.000 csv2.py:30(inc_stat)
1599710 0.188 0.000 0.188 0.000 {method 'strip' of 'str' objects}
1599710 0.091 0.000 0.091 0.000 {method 'append' of 'list' objects}
29169 0.017 0.000 0.062 0.000 codecs.py:318(decode)
29169 0.045 0.000 0.045 0.000 {built-in method _codecs.utf_8_decode}
1 0.000 0.000 0.000 0.000 csv2.py:42(print_results)
4 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {built-in method io.open}
3 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding)
3 0.000 0.000 0.000 0.000 csv2.py:14(get_percentage)
1 0.000 0.000 0.000 0.000 codecs.py:308(__init__)
1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo}
1 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 codecs.py:259(__init__)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}

Improvement 2: remove the call to get_line_part

Modify read_data and drop get_line_part:

def read_data(state, source):
    f = open(source, 'r')

    for line in f:
        line_parts = line.strip().split("\",\"")
        line_parts[0] = line_parts[0][1:]
        line_parts[-1] = line_parts[-1][:-1]
        state['lines_parts'].append(line_parts)
    state['total'] = len(state['lines_parts'])
       10527009 function calls in 4.821 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 4.821 4.821 csv2.py:1(build_twit_stats)
1 1.208 1.208 3.623 3.623 csv2.py:17(read_data)
1599710 2.080 0.000 2.080 0.000 {method 'split' of 'str' objects}
1 0.693 0.693 1.197 1.197 csv2.py:30(get_stats)
3199420 0.285 0.000 0.285 0.000 {method 'find' of 'str' objects}
2470099 0.219 0.000 0.219 0.000 csv2.py:27(inc_stat)
1599710 0.192 0.000 0.192 0.000 {method 'strip' of 'str' objects}
1599710 0.084 0.000 0.084 0.000 {method 'append' of 'list' objects}
29169 0.016 0.000 0.059 0.000 codecs.py:318(decode)
29169 0.043 0.000 0.043 0.000 {built-in method _codecs.utf_8_decode}
1 0.000 0.000 0.000 0.000 csv2.py:39(print_results)
4 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {built-in method io.open}
3 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding)
3 0.000 0.000 0.000 0.000 csv2.py:14(get_percentage)
1 0.000 0.000 0.000 0.000 codecs.py:308(__init__)
1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo}
1 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 codecs.py:259(__init__)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}

Improvement 3: remove the call to inc_stat

Modify get_stats and drop inc_stat:

def get_stats(state):
    for i in state['lines_parts']:
        if i[0] == "0":
            state["0"] += 1
        if i[5].find('@') > -1:
            state["with_at"] += 1
        if i[2].find('Jun') > -1:
            state["June"] += 1
       8056910 function calls in 4.325 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 4.325 4.325 csv2.py:1(build_twit_stats)
1 1.195 1.195 3.556 3.556 csv2.py:17(read_data)
1599710 2.039 0.000 2.039 0.000 {method 'split' of 'str' objects}
1 0.498 0.498 0.770 0.770 csv2.py:27(get_stats)
3199420 0.272 0.000 0.272 0.000 {method 'find' of 'str' objects}
1599710 0.181 0.000 0.181 0.000 {method 'strip' of 'str' objects}
1599710 0.083 0.000 0.083 0.000 {method 'append' of 'list' objects}
29169 0.015 0.000 0.058 0.000 codecs.py:318(decode)
29169 0.043 0.000 0.043 0.000 {built-in method _codecs.utf_8_decode}
1 0.000 0.000 0.000 0.000 csv2.py:36(print_results)
4 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {built-in method io.open}
3 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding)
3 0.000 0.000 0.000 0.000 csv2.py:14(get_percentage)
1 0.000 0.000 0.000 0.000 codecs.py:308(__init__)
1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo}
1 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 codecs.py:259(__init__)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}

Improvement 4: replace the find method with the in operator

Modify get_stats, replacing find with in:

def get_stats(state):
    for i in state['lines_parts']:
        if "0" in i[0]:
            state["0"] += 1
        if "@" in i[5]:
            state["with_at"] += 1
        if "Jun" in i[2]:
            state["June"] += 1
25
       4857490 function calls in 3.874 seconds

Ordered by: cumulative time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 3.874 3.874 csv2.py:1(build_twit_stats)
1 1.213 1.213 3.578 3.578 csv2.py:17(read_data)
1599710 2.031 0.000 2.031 0.000 {method 'split' of 'str' objects}
1 0.295 0.295 0.295 0.295 csv2.py:27(get_stats)
1599710 0.188 0.000 0.188 0.000 {method 'strip' of 'str' objects}
1599710 0.086 0.000 0.086 0.000 {method 'append' of 'list' objects}
29169 0.017 0.000 0.061 0.000 codecs.py:318(decode)
29169 0.044 0.000 0.044 0.000 {built-in method _codecs.utf_8_decode}
1 0.000 0.000 0.000 0.000 csv2.py:36(print_results)
4 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {built-in method io.open}
3 0.000 0.000 0.000 0.000 {method 'format' of 'str' objects}
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding)
3 0.000 0.000 0.000 0.000 csv2.py:14(get_percentage)
1 0.000 0.000 0.000 0.000 codecs.py:308(__init__)
1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 codecs.py:259(__init__)

With improvements 1 through 4, the running time dropped from 5.449 seconds to 3.874 seconds.
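Why does in beat find here? Roughly, "@" in s dispatches almost directly to the C-level substring search, while s.find("@") adds an attribute lookup, a method call, and an integer comparison on the returned index. A quick timeit sketch on one sample line (absolute numbers will vary by machine):

```python
import timeit

s = '"0","1686133440","Sun May 03 03:56:15 PDT 2009","NO_QUERY","Brook_K220","Off to work again "'

# Time 200000 membership tests done both ways
t_find = timeit.timeit(lambda: s.find("@") > -1, number=200_000)
t_in = timeit.timeit(lambda: "@" in s, number=200_000)
print(f"find: {t_find:.3f}s  in: {t_in:.3f}s")
```

Note that the two tests are only equivalent because we reduce find's index to a boolean; both report whether the substring occurs at all.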

