A few days ago I used Python to scrape some rental listings from 我爱我家 (woaiwojia), and I wanted to see whether the rents could be analyzed in some way. After reading up in books and blog posts, I put together an interval analysis of the rent prices. I have to say, doing interval analysis in Python is far simpler than the interval counts I used to piece together with SQL keywords; a rough sketch of that SQL approach comes first for comparison, and then the actual code.
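For comparison, an interval count done purely with SQL keywords looks roughly like the sketch below. The bin edges and the exact CASE WHEN query are illustrative assumptions, not necessarily the statement originally used; only the connection settings and the zufang / price names are taken from the pandas script further down.

```python
# Rough sketch of an interval count done with SQL keywords (CASE WHEN + GROUP BY).
# The bin edges here are assumed for illustration only.
import pymysql

sql = """
SELECT
  CASE
    WHEN price < 1500 THEN '[0, 1500)'
    WHEN price < 3000 THEN '[1500, 3000)'
    WHEN price < 4500 THEN '[3000, 4500)'
    ELSE '[4500, +)'
  END AS qujian,
  COUNT(*) AS cnt
FROM zufang
GROUP BY qujian
ORDER BY MIN(price)
"""

db = pymysql.connect(host="127.0.0.1", port=3306, user="root", passwd="root",
                     db="woaiwojia", charset="utf8")
with db.cursor() as cursor:
    cursor.execute(sql)
    for qujian, cnt in cursor.fetchall():
        print(qujian, cnt)
db.close()
```

Every additional bin needs another WHEN branch here, which is exactly the manual bookkeeping that pd.cut replaces with a single list of edges.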
```python
# coding=utf-8
import pandas as pd
import pymysql
import matplotlib.pyplot as plt

db = pymysql.connect(host="127.0.0.1", port=3306, user="root", passwd="root",
                     db="woaiwojia", charset='utf8')
cursor = db.cursor()
df = pd.read_sql("select * from zufang ", db)

# The commented-out lines below were earlier experiments with slicing and
# inspecting the data after loading it into pandas.
# First three rows
# rows = df[0:3]
# The price and lxrphone columns
# cols = df[['price', 'lxrphone']]
# aa = pd.DataFrame(df)
# First three rows of the price and lxrphone columns
# (.ix has since been removed from pandas; .loc does the same job here)
# print(df.ix[0:3, ['price', 'lxrphone']])
# Basic information about the data
# print(df.info())
# Descriptive statistics of the table
# print(df.describe())

# Get the max/min of the price column and bin the prices
xse = df['price']
# print(xse.max())
# print(xse.min())
fanwei = list(range(1500, xse.max(), 1500))      # bin edges, 1500 apart
fenzu = pd.cut(xse.values, fanwei, right=False)  # one interval per row (length 91 here)
# print(fenzu.codes)       # integer bin labels
# print(fenzu.categories)  # the bin intervals (length 8 here)
pinshu = fenzu.value_counts()  # Series: interval -> count
# print(pinshu)
# print(pinshu.index)

# Plot the counts as a bar chart
pinshu.plot(kind='bar')

qujian = pd.cut(xse, fanwei, right=False)
df['区间'] = qujian.values
df.groupby('区间').median()
df.groupby('区间').mean()

# Build a frequency table: count, relative frequency, cumulative frequency
pinshu_df = pd.DataFrame(pinshu, columns=['频数'])
pinshu_df['频率f'] = pinshu_df / pinshu_df['频数'].sum()
pinshu_df['频率%'] = pinshu_df['频率f'].map(lambda x: '%.2f%%' % (x * 100))
pinshu_df['累计频率f'] = pinshu_df['频率f'].cumsum()
pinshu_df['累计频率%'] = pinshu_df['累计频率f'].map(lambda x: '%.4f%%' % (x * 100))
print(pinshu_df)
plt.show()
```
The printed result:
The chart displayed with matplotlib.pyplot's show method:
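Since both outputs above depend on the scraped MySQL table, here is a self-contained sketch of the same pd.cut → value_counts → frequency-table pipeline on randomly generated prices, so it can be run without the woaiwojia database. The fake data, the extra top bin edge, and the axis labels are assumptions added for illustration.

```python
# Self-contained sketch: bin fake rents, count per interval, build a frequency table.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({'price': rng.integers(1500, 12000, size=200)})  # fake rents

xse = df['price']
fanwei = list(range(1500, xse.max() + 1500, 1500))  # bin edges covering the max price
fenzu = pd.cut(xse, fanwei, right=False)            # one interval per row
pinshu = fenzu.value_counts().sort_index()          # count per interval, in rent order

pinshu_df = pinshu.to_frame(name='频数')             # frequency table
pinshu_df['频率f'] = pinshu_df['频数'] / pinshu_df['频数'].sum()
pinshu_df['频率%'] = pinshu_df['频率f'].map(lambda x: '%.2f%%' % (x * 100))
pinshu_df['累计频率f'] = pinshu_df['频率f'].cumsum()
pinshu_df['累计频率%'] = pinshu_df['累计频率f'].map(lambda x: '%.4f%%' % (x * 100))
print(pinshu_df)

pinshu_df['频数'].plot(kind='bar', rot=45)           # labelled bar chart
plt.xlabel('rent interval')
plt.ylabel('count')
plt.tight_layout()
plt.show()
```

sort_index() is used here so that the table rows and the bars come out in ascending rent order rather than by count.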
Reference blog
Reference book: 《Python3爬虫、数据清洗与可视化实战》