How do I estimate a program's running time in Python?

I'm just getting started with Python and machine learning. I set up my environment and copied a program from a book to try it out, but it has been running for two days with no result, and CPU usage stays close to 100%. Could someone take a look: is there a problem with my program, or has it really just not finished yet? Roughly how long should it take? My machine is an E5-2650, 8 cores / 16 threads, base clock around 2.0 GHz, with 8 GB of RAM.
The program is as follows:
# Load libraries
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

# Load data
digits = load_digits()

# Create feature matrix and target vector
features, target = digits.data, digits.target

# Create range of values for parameter
#param_range = np.arange(1, 250, 2)
param_range = np.arange(1, 250, 25)

# Calculate accuracy on training and test set using range of parameter values
train_scores, test_scores = validation_curve(
    # Classifier
    RandomForestClassifier(),
    # Feature matrix
    features,
    # Target vector
    target,
    # Hyperparameter to examine
    param_name="n_estimators",
    # Range of hyperparameter's values
    param_range=param_range,
    # Number of folds
    cv=3,
    # Performance metric
    scoring="accuracy",
    # Use all computer cores
    n_jobs=-1)

# Calculate mean and standard deviation for training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)

# Calculate mean and standard deviation for test set scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

# Plot mean accuracy scores for training and test sets
plt.plot(param_range, train_mean, label="Training score", color="black")
plt.plot(param_range, test_mean, label="Cross-validation score",
         color="dimgrey")

# Plot accuracy bands for training and test sets
plt.fill_between(param_range, train_mean - train_std,
                 train_mean + train_std, color="gray")
plt.fill_between(param_range, test_mean - test_std,
                 test_mean + test_std, color="gainsboro")

# Create plot
plt.title("Validation Curve With Random Forest")
plt.xlabel("Number Of Trees")
plt.ylabel("Accuracy Score")
plt.tight_layout()

plt.legend(loc="best")
plt.show()

The program is really just one function call, validation_curve, which looks at how a hyperparameter (the number of trees) affects the random forest's accuracy.
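For scale: param_range = np.arange(1, 250, 25) yields only 10 parameter values, so with cv=3 the call fits 30 random forests on the 1,797-sample digits set. One way to measure the call directly is to wrap it with time.perf_counter(); a minimal, self-contained sketch (load_digits(return_X_y=True) is just a shortcut for the same features/target as above):

import time
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

features, target = load_digits(return_X_y=True)
param_range = np.arange(1, 250, 25)

start = time.perf_counter()  # time only the expensive call
train_scores, test_scores = validation_curve(
    RandomForestClassifier(), features, target,
    param_name="n_estimators", param_range=param_range,
    cv=3, scoring="accuracy", n_jobs=-1)
print(f"validation_curve took {time.perf_counter() - start:.2f} s")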


6 replies

I happen to have an environment set up, so I tried it for you. Without changing a single line of the source code, it produced the result in 2 seconds... Could something be wrong with your Python environment?
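If you want to rule out an environment problem, a minimal sketch along these lines (using only libraries the original post already imports) prints the versions in use and times a single small fit, which should finish in well under a second on a working setup:

import time
import sklearn
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

print("scikit-learn:", sklearn.__version__)
print("numpy:", np.__version__)

# One small fit as a sanity check
digits = load_digits()
start = time.perf_counter()
RandomForestClassifier(n_estimators=10).fit(digits.data, digits.target)
print(f"single fit: {time.perf_counter() - start:.3f} s")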


Measure with time() or perf_counter() from the time module; simple and direct.

Basic usage:

import time

start = time.time()  # or use time.perf_counter() for higher resolution
# your code goes here
result = sum(range(1000000))
end = time.time()

print(f"耗时: {end - start:.4f} 秒")

A more polished approach (using a context manager):

import time
from contextlib import contextmanager

@contextmanager
def timer(desc="code block"):
    start = time.perf_counter()
    yield
    end = time.perf_counter()
    print(f"{desc} 耗时: {end - start:.6f} 秒")

# usage example
with timer("summation"):
    total = sum(range(1000000))

When you need a more detailed analysis, use the timeit module:

import timeit

code_to_test = """
total = sum(range(1000000))
"""

execution_time = timeit.timeit(stmt=code_to_test, number=100)
print(f"执行100次平均耗时: {execution_time/100:.6f} 秒")

In short: use time.time() for quick checks, time.perf_counter() when you need precision, and timeit for repeated measurements.
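One more variant that pairs well with the context manager above: a decorator times a function every time it is called. A small sketch (the name "timed" is just for illustration), using functools.wraps to preserve the wrapped function's metadata:

import time
from functools import wraps

def timed(func):
    """Print how long each call to func takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f} s")
        return result
    return wrapper

@timed
def big_sum(n):
    return sum(range(n))

big_sum(1000000)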

Here's my environment, for your reference:

pd.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.14.42-1-MANJARO
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: zh_CN.UTF-8
LOCALE: zh_CN.UTF-8

pandas: 0.23.0
pytest: 3.5.1
pip: 10.0.1
setuptools: 39.2.0
Cython: 0.28.2
numpy: 1.14.3
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 6.4.0
sphinx: 1.7.4
patsy: 0.5.0
dateutil: 2.7.3
pytz: 2018.4
blosc: None
bottleneck: 1.2.1
tables: 3.4.3
numexpr: 2.6.5
feather: None
matplotlib: 2.2.2
openpyxl: 2.5.3
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.4
lxml: 4.2.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None

Thanks, princelai. I'll go look for the cause.

MacBook Pro 2015, 8 GB, also got the result in seconds.

Does load_digits download the dataset?
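For what it's worth, load_digits reads the small digits dataset (1,797 samples of 8x8 images) that ships inside the scikit-learn package itself, so no download is involved; a quick check:

from sklearn.datasets import load_digits

digits = load_digits()
print(digits.data.shape)    # (1797, 64) -- loaded from files bundled with scikit-learn
print(digits.target.shape)  # (1797,)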
