IPFS (3) Source Code Reading: add
What add does, in essence, is push a file into IPFS by splitting it into blocks and storing those blocks in the local blockstore.
The blocks directory under the IPFS repo directory holds all of the block data stored by the local node. At first I had not looked closely at whether that data is encrypted.
P.S.: I did go and look. It is not encrypted; the content is stored as-is, which honestly deserves some criticism...
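You can check this for yourself. Below is a small standalone sketch, assuming the default repo at ~/.ipfs with the stock flatfs datastore (where each block lives in a .data file under blocks/); it dumps the first bytes of every block. For a text file added with ipfs add, the original bytes are plainly visible, wrapped only in a thin unixfs protobuf envelope (or completely bare when --raw-leaves is used).

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

func main() {
    // Default repo location; adjust if IPFS_PATH points elsewhere (assumption).
    home, _ := os.UserHomeDir()
    blocksDir := filepath.Join(home, ".ipfs", "blocks")

    // Walk the flatfs block store and print the first bytes of each .data file.
    filepath.Walk(blocksDir, func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() || !strings.HasSuffix(path, ".data") {
            return nil
        }
        raw, err := os.ReadFile(path)
        if err != nil {
            return nil
        }
        end := len(raw)
        if end > 64 {
            end = 64
        }
        fmt.Printf("%s: %q\n", filepath.Base(path), raw[:end])
        return nil
    })
}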
First, the entry point of add is core/commands/add.go. This is the command-line layer: it provides the interaction, defines the command, and parses the options that map to the different features. It really only does parsing here; the parsed settings are used to configure the adder, and every file that gets added is reported back as an AddedObject, the struct below, which carries the file-related information for an upload (name, hash, size) plus a couple of extra fields.
type AddedObject struct {
    Name  string
    Hash  string `json:",omitempty"`
    Bytes int64  `json:",omitempty"`
    Size  string `json:",omitempty"`
    // VID and VersionInfo are not in upstream go-ipfs; the *VersionInfo type name is an assumption.
    VID         string `json:",omitempty"`
    VersionInfo *VersionInfo
}
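To make the struct's role concrete, here is a small standalone sketch. It is not part of go-ipfs; it carries its own local copy of the upstream fields and the values are made up. It shows how an AddedObject marshals to JSON, and why the omitempty tags matter: progress events, which only carry Name and Bytes, do not print empty Hash or Size fields.

package main

import (
    "encoding/json"
    "fmt"
)

// Local copy of the upstream fields, for illustration only.
type AddedObject struct {
    Name  string
    Hash  string `json:",omitempty"`
    Bytes int64  `json:",omitempty"`
    Size  string `json:",omitempty"`
}

func main() {
    // A finished file: Hash and Size are set, Bytes is zero and gets omitted.
    done := AddedObject{Name: "hello.txt", Hash: "QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH", Size: "12"}
    // A progress event: only Name and Bytes are set.
    progress := AddedObject{Name: "hello.txt", Bytes: 4096}

    for _, o := range []AddedObject{done, progress} {
        b, _ := json.Marshal(o)
        fmt.Println(string(b))
    }
}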
Then the file data is pushed in through the somewhat odd-looking construction below. We will not dig into the block-generation part that comes later; here we only look at how the data is read from the local machine into the node and how that stream is fed into blocks.
What happens below is actually very simple. First the addAllAndPin function is defined. Its main job is to iterate over the paths we typed on the command line, read each file's contents, and hand the file on to the next stage through fileAdder.AddFile(file).
The goroutine underneath it watches whether the upload has finished and whether an error occurred: it pushes any error into the errCh channel and closes the outChan channel. These two channels drive the console output: outChan reports the upload progress and, once the upload completes, the block hashes, while errCh carries the error information.
addAllAndPin := func(f files.File) error {
    // Iterate over each top-level file and add individually. Otherwise the
    // single f is treated as a directory, affecting hidden file
    // semantics.
    for {
        file, err := f.NextFile()
        if err == io.EOF {
            // Finished the list of files.
            break
        } else if err != nil {
            return err
        }
        if err := fileAdder.AddFile(file); err != nil {
            return err
        }
    }

    // copy intermediary nodes from editor to our actual dagservice
    _, err := fileAdder.Finalize()
    if err != nil {
        return err
    }

    if hash {
        return nil
    }

    return fileAdder.PinRoot()
}

errCh := make(chan error)
go func() {
    var err error
    defer func() { errCh <- err }()
    defer close(outChan)
    err = addAllAndPin(req.Files)
}()

defer res.Close()

err = res.Emit(outChan)
if err != nil {
    log.Error(err)
    return
}
err = <-errCh
if err != nil {
    res.SetError(err, cmdkit.ErrNormal)
}
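Before moving on, it is worth isolating the concurrency pattern used here: a producer goroutine reports its final error on a dedicated channel and closes the output channel, while the consumer drains the output and only then reads the error. Below is a minimal standalone sketch of that pattern; every name in it is made up and none of it comes from go-ipfs.

package main

import (
    "errors"
    "fmt"
)

func main() {
    outChan := make(chan interface{}, 8)
    errCh := make(chan error)

    // Producer: emit some results, then report the final error (nil on success).
    go func() {
        var err error
        defer func() { errCh <- err }()
        defer close(outChan)
        for i := 0; i < 3; i++ {
            outChan <- fmt.Sprintf("added block %d", i)
        }
        err = errors.New("example: pretend the pin step failed")
    }()

    // Consumer: drain the output until the channel is closed...
    for v := range outChan {
        fmt.Println(v)
    }
    // ...then collect the producer's error exactly once.
    if err := <-errCh; err != nil {
        fmt.Println("upload failed:", err)
    }
}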
Now we get to the function that does the actual adding, fileAdder.AddFile(file). fileAdder is the adder created above and configured with the parsed options; it exposes a number of utility methods, and AddFile is the public entry point for adding, defined in core/coreunix/add.go.
func (adder *Adder) AddFile(file files.File) error {
    if adder.Pin {
        adder.unlocker = adder.blockstore.PinLock()
    }
    defer func() {
        if adder.unlocker != nil {
            adder.unlocker.Unlock()
        }
    }()
    return adder.addFile(file, false, nil)
}
This is mostly lock handling before it calls the internal addFile method; at that point the upload has effectively started. I will not analyze the code beyond this point here; interested readers can dig into it themselves. A rough sketch of what the chunking stage does with the bytes follows below.
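To get a feel for what that internal path eventually does with the file bytes, here is a rough standalone sketch of the chunking step using the size splitter from the go-ipfs-chunker package. It only illustrates the default "size-262144" strategy, assumes the modern import path github.com/ipfs/go-ipfs-chunker (the gx-vendored tree in this post uses a gx/ipfs/... path instead), and is not the code inside adder.addFile.

package main

import (
    "fmt"
    "strings"

    chunk "github.com/ipfs/go-ipfs-chunker"
)

func main() {
    // ~700 KiB of input data standing in for a file read from disk.
    data := strings.NewReader(strings.Repeat("x", 700*1024))

    // The default "size-262144" strategy: fixed 256 KiB chunks.
    splitter := chunk.NewSizeSplitter(data, 256*1024)

    for i := 0; ; i++ {
        piece, err := splitter.NextBytes()
        if err != nil { // io.EOF once the input is exhausted
            break
        }
        fmt.Printf("chunk %d: %d bytes\n", i, len(piece))
    }
}

Each such chunk becomes a leaf block in the MerkleDAG, which is why the help text below points out that different chunker settings produce different hashes for the same file.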
Below is the full content of the command-line entry point.
package commands
import (
    "errors"
    "fmt"
    "io"
    "os"
    "strings"

    // interface provided by the block service
    blockservice "github.com/ipfs/go-ipfs/blockservice"
    // core API
    core "github.com/ipfs/go-ipfs/core"
    // helper methods and structs used by add
    "github.com/ipfs/go-ipfs/core/coreunix"
    // filestore interface
    filestore "github.com/ipfs/go-ipfs/filestore"
    // DAG service interface
    dag "github.com/ipfs/go-ipfs/merkledag"
    // provides a new thread-safe DAG (for testing)
    dagtest "github.com/ipfs/go-ipfs/merkledag/test"
    // an in-memory model of a mutable IPFS filesystem
    mfs "github.com/ipfs/go-ipfs/mfs"
    // unixfs file formats
    ft "github.com/ipfs/go-ipfs/unixfs"

    // console/CLI tooling; the following are all gx-vendored utility packages
    cmds "gx/ipfs/QmNueRyPRQiV7PUEpnP4GgGLuK1rKQLaRW7sfPvUetYig1/go-ipfs-cmds"
    mh "gx/ipfs/QmPnFwZ2JXKnXgMw8CdBPxn7FWh6LLdjUjxV1fKHuJnkr8/go-multihash"
    pb "gx/ipfs/QmPtj12fdwuAqj9sBSTNUxBNu8kCGNp8b3o8yUzMm5GHpq/pb"
    offline "gx/ipfs/QmS6mo1dPpHdYsVkm27BRZDLxpKBCiJKUH8fHX15XFfMez/go-ipfs-exchange-offline"
    bstore "gx/ipfs/QmadMhXJLHMFjpRmh85XjpmVDkEtQpNYEZNRpWRvYVLrvb/go-ipfs-blockstore"
    cmdkit "gx/ipfs/QmdE4gMduCKCGAcczM2F5ioYDfdeKuPix138wrES1YSr7f/go-ipfs-cmdkit"
    files "gx/ipfs/QmdE4gMduCKCGAcczM2F5ioYDfdeKuPix138wrES1YSr7f/go-ipfs-cmdkit/files"
)
// depth limit: the maximum recursion depth has been reached
// ErrDepthLimitExceeded indicates that the max depth has been exceeded.
var ErrDepthLimitExceeded = errors.New("depth limit exceeded")

// constants for the command's option names
const (
    quietOptionName       = "quiet"
    quieterOptionName     = "quieter"
    silentOptionName      = "silent"
    progressOptionName    = "progress"
    trickleOptionName     = "trickle"
    wrapOptionName        = "wrap-with-directory"
    hiddenOptionName      = "hidden"
    onlyHashOptionName    = "only-hash"
    chunkerOptionName     = "chunker"
    pinOptionName         = "pin"
    rawLeavesOptionName   = "raw-leaves"
    noCopyOptionName      = "nocopy"
    fstoreCacheOptionName = "fscache"
    cidVersionOptionName  = "cid-version"
    hashOptionName        = "hash"
)

// buffer size of the output channel
const adderOutChanSize = 8
// build the add command
var AddCmd = &cmds.Command{
    // help text for the command
    Helptext: cmdkit.HelpText{
        Tagline: "Add a file or directory to ipfs.",
        ShortDescription: `
Adds contents of <path> to ipfs. Use -r to add directories (recursively).
`,
        LongDescription: `
Adds contents of <path> to ipfs. Use -r to add directories.
Note that directories are added recursively, to form the ipfs
MerkleDAG.
The wrap option, ‘-w’, wraps the file (or files, if using the
recursive option) in a directory. This directory contains only
the files which have been added, and means that the file retains
its filename. For example:
ipfs add
added QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH
ipfs add -w
added QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH
added QmaG4FuMqEBnQNn3C8XJ5bpW8kLs7zq2ZXgHptJHbKDDVx
You can now refer to the added file in a gateway, like so:
/ipfs/QmaG4FuMqEBnQNn3C8XJ5bpW8kLs7zq2ZXgHptJHbKDDVx/
The chunker option, ‘-s’, specifies the chunking strategy that dictates
how to break files into blocks. Blocks with same content can
be deduplicated. The default is a fixed block size of
256 * 1024 bytes, 'size-262144'. Alternatively, you can use the
rabin chunker for content defined chunking by specifying
rabin-[min]-[avg]-[max] (where min/avg/max refer to the resulting
chunk sizes). Using other chunking strategies will produce
different hashes for the same file.
ipfs add --chunker=size-2048
added QmafrLBfzRLV4XSH1XcaMMeaXEUhDJjmtDfsYU95TrWG87
ipfs add --chunker=rabin-512-1024-2048
added Qmf1hDN65tR55Ubh2RN1FPxr69xq3giVBz1KApsresY8Gn
You can now check what blocks have been created by:
ipfs object links QmafrLBfzRLV4XSH1XcaMMeaXEUhDJjmtDfsYU95TrWG87
QmY6yj1GrmExDXoosVE3aSPxdMNYr6aKuw3nA8LoWPRS 2059
Qmf7ZQeSxq2fJVJbCmgTrLLVN9tDR9Wy5k75DxQKuz5Gyt 1195
ipfs object links Qmf1hDN65tR55Ubh2RN1FPxr69xq3giVBz1KApsresY8Gn
QmY6yj1GrmExDXoosVE3aSPxdMNYr6aKuw3nA8LoWPRS 2059
QmerURi9k4XzKCaaPbsK6BL5pMEjF7PGphjDvkkjDtsVf3 868
QmQB28iwSriSUSMqG2nXDTLtdPHgWb4rebBrU7Q1j4vxPv 338
`,
    },
    // the command's argument definition
    Arguments: []cmdkit.Argument{
        cmdkit.FileArg("path", true, true, "The path to a file to be added to ipfs.").EnableRecursive().EnableStdin(),
    },
    // the command's option definitions
    Options: []cmdkit.Option{
        // note: every option marked (experimental) must be enabled in the config file before it can be used
        cmds.OptionRecursivePath, // a builtin option that allows recursive paths (-r, --recursive)
        cmdkit.BoolOption(quietOptionName, "q", "Write minimal output."),
        cmdkit.BoolOption(quieterOptionName, "Q", "Write only final hash."),
        cmdkit.BoolOption(silentOptionName, "Write no output."),
        cmdkit.BoolOption(progressOptionName, "p", "Stream progress data."),
        cmdkit.BoolOption(trickleOptionName, "t", "Use trickle-dag format for dag generation."),
        cmdkit.BoolOption(onlyHashOptionName, "n", "Only chunk and hash - do not write to disk."),
        cmdkit.BoolOption(wrapOptionName, "w", "Wrap files with a directory object."),
        cmdkit.BoolOption(hiddenOptionName, "H", "Include files that are hidden. Only takes effect on recursive add."),
        cmdkit.StringOption(chunkerOptionName, "s", "Chunking algorithm, size-[bytes] or rabin-[min]-[avg]-[max]").WithDefault("size-262144"),
        cmdkit.BoolOption(pinOptionName, "Pin this object when adding.").WithDefault(true),
        cmdkit.BoolOption(rawLeavesOptionName, "Use raw blocks for leaf nodes. (experimental)"),
        cmdkit.BoolOption(noCopyOptionName, "Add the file using filestore. Implies raw-leaves. (experimental)"),
        cmdkit.BoolOption(fstoreCacheOptionName, "Check the filestore for pre-existing blocks. (experimental)"),
        cmdkit.IntOption(cidVersionOptionName, "CID version. Defaults to 0 unless an option that depends on CIDv1 is passed. (experimental)"),
        cmdkit.StringOption(hashOptionName, "Hash function to use. Implies CIDv1 if not sha2-256. (experimental)").WithDefault("sha2-256"),
    },
    // set default values for the command's options before the command runs
    PreRun: func(req *cmds.Request, env cmds.Environment) error {
        quiet, _ := req.Options[quietOptionName].(bool)
        quieter, _ := req.Options[quieterOptionName].(bool)
        quiet = quiet || quieter
        silent, _ := req.Options[silentOptionName].(bool)
        if quiet || silent {
            return nil
        }
        // ipfs cli progress bar defaults to true unless quiet or silent is used
        _, found := req.Options[progressOptionName].(bool)
        if !found {
            req.Options[progressOptionName] = true
        }
        return nil
    },
    // Run is invoked when the command is executed from the console
    Run: func(req *cmds.Request, res cmds.ResponseEmitter, env cmds.Environment) {
        // get the IPFS node from the environment
        n, err := GetNode(env)
        if err != nil {
            res.SetError(err, cmdkit.ErrNormal)
            return
        }
        // read the node's global config file
        cfg, err := n.Repo.Config()
        if err != nil {
            res.SetError(err, cmdkit.ErrNormal)
            return
        }
        // check if repo will exceed storage limit if added
        // TODO: this doesn't handle the case if the hashed file is already in blocks (deduplicated)
        // TODO: conditional GC is disabled due to it is somehow not possible to pass the size to the daemon
        //if err := corerepo.ConditionalGC(req.Context(), n, uint64(size)); err != nil {
        //    res.SetError(err, cmdkit.ErrNormal)
        //    return
        //}
        // type-assert each option value to see whether the option was set and what it holds
        progress, _ := req.Options[progressOptionName].(bool)
        trickle, _ := req.Options[trickleOptionName].(bool)
        wrap, _ := req.Options[wrapOptionName].(bool)
        hash, _ := req.Options[onlyHashOptionName].(bool)
        hidden, _ := req.Options[hiddenOptionName].(bool)
        silent, _ := req.Options[silentOptionName].(bool)
        chunker, _ := req.Options[chunkerOptionName].(string)
        dopin, _ := req.Options[pinOptionName].(bool)
        rawblks, rbset := req.Options[rawLeavesOptionName].(bool)
        nocopy, _ := req.Options[noCopyOptionName].(bool)
        fscache, _ := req.Options[fstoreCacheOptionName].(bool)
        cidVer, cidVerSet := req.Options[cidVersionOptionName].(int)
        hashFunStr, _ := req.Options[hashOptionName].(string)
        // The arguments are subject to the following constraints.
        //
        // nocopy -> filestoreEnabled
        // nocopy -> rawblocks
        // (hash != sha2-256) -> cidv1
        // NOTE: 'rawblocks -> cidv1' is missing. Legacy reasons.
        // experimental option; it has to be enabled in the config file
        // nocopy -> filestoreEnabled
        if nocopy && !cfg.Experimental.FilestoreEnabled {
            res.SetError(filestore.ErrFilestoreNotEnabled, cmdkit.ErrClient)
            return
        }
        // experimental option
        // nocopy -> rawblocks
        if nocopy && !rawblks {
            // fixed?
            if rbset {
                res.SetError(
                    fmt.Errorf("nocopy option requires '--raw-leaves' to be enabled as well"),
                    cmdkit.ErrNormal,
                )
                return
            }
            // No, satisfy mandatory constraint.
            rawblks = true
        }
        // experimental option
        // (hash != "sha2-256") -> CIDv1
        if hashFunStr != "sha2-256" && cidVer == 0 {
            if cidVerSet {
                res.SetError(
                    errors.New("CIDv0 only supports sha2-256"),
                    cmdkit.ErrClient,
                )
                return
            }
            cidVer = 1
        }
        // experimental option
        // cidV1 -> raw blocks (by default)
        if cidVer > 0 && !rbset {
            rawblks = true
        }
        // build the CID prefix for the requested CID version
        prefix, err := dag.PrefixForCidVersion(cidVer)
        if err != nil {
            res.SetError(err, cmdkit.ErrNormal)
            return
        }
        hashFunCode, ok := mh.Names[strings.ToLower(hashFunStr)]
        if !ok {
            res.SetError(fmt.Errorf("unrecognized hash function: %s", strings.ToLower(hashFunStr)), cmdkit.ErrNormal)
            return
        }
        prefix.MhType = hashFunCode
        prefix.MhLength = -1
        // with the -n flag, only block hashes are computed; nothing is written to disk
        if hash {
            nilnode, err := core.NewNode(n.Context(), &core.BuildCfg{
                //TODO: need this to be true or all files
                // hashed will be stored in memory!
                NilRepo: true,
            })
            if err != nil {
                res.SetError(err, cmdkit.ErrNormal)
                return
            }
            n = nilnode
        }
        // a GC-able blockstore
        addblockstore := n.Blockstore
        // unless fscache or nocopy is requested, wrap the base blocks in a new GC blockstore
        if !(fscache || nocopy) {
            addblockstore = bstore.NewGCBlockstore(n.BaseBlocks, n.GCLocker)
        }
        // rarely taken; the "local" option is probably left over from an older version
        exch := n.Exchange
        local, _ := req.Options["local"].(bool)
        if local {
            exch = offline.Exchange(addblockstore)
        }
        // build a new blockservice on top of the blockstore with the chosen exchange strategy
        bserv := blockservice.New(addblockstore, exch) // hash security 001
        // hand the blocks to the DAG service
        dserv := dag.NewDAGService(bserv)
        // create the output channel with a fixed buffer size
        outChan := make(chan interface{}, adderOutChanSize)
        // create a new adder for the file-add operation
        fileAdder, err := coreunix.NewAdder(req.Context, n.Pinning, n.Blockstore, dserv)
        if err != nil {
            res.SetError(err, cmdkit.ErrNormal)
            return
        }
        // configure the adder
        fileAdder.Out = outChan
        fileAdder.Chunker = chunker
        fileAdder.Progress = progress
        fileAdder.Hidden = hidden
        fileAdder.Trickle = trickle
        fileAdder.Wrap = wrap
        fileAdder.Pin = dopin
        fileAdder.Silent = silent
        fileAdder.RawLeaves = rawblks
        fileAdder.NoCopy = nocopy
        fileAdder.Prefix = &prefix
        // with the -n flag, back the adder with a mock DAG so nothing touches the real datastore
        if hash {
            // get a new thread-safe mock DAG
            md := dagtest.Mock()
            emptyDirNode := ft.EmptyDirNode()
            // Use the same prefix for the "empty" MFS root as for the file adder.
            emptyDirNode.Prefix = *fileAdder.Prefix
            mr, err := mfs.NewRoot(req.Context, md, emptyDirNode, nil)
            if err != nil {
                res.SetError(err, cmdkit.ErrNormal)
                return
            }
            fileAdder.SetMfsRoot(mr)
        }
        // build the routine that adds everything and pins the result
        addAllAndPin := func(f files.File) error {
            // Iterate over each top-level file and add individually. Otherwise the
            // single f is treated as a directory, affecting hidden file
            // semantics.
            // read one file at a time and feed it to fileAdder
            for {
                file, err := f.NextFile()
                if err == io.EOF {
                    // Finished the list of files.
                    break
                } else if err != nil {
                    return err
                }
                if err := fileAdder.AddFile(file); err != nil {
                    return err
                }
            }
            // copy intermediary nodes from editor to our actual dagservice
            // Finalize flushes the mfs root directory and returns the mfs root node.
            _, err := fileAdder.Finalize()
            if err != nil {
                return err
            }
            if hash {
                return nil
            }
            // recursively pin the new file objects and the root node;
            // PinRoot writes the pin state to the backing datastore.
            return fileAdder.PinRoot()
        }
        errCh := make(chan error)
        // start a goroutine to do the actual adding
        go func() {
            // the error produced by the transfer
            var err error
            // on exit, push err onto errCh so the caller can pick it up
            defer func() { errCh <- err }()
            defer close(outChan)
            // add the files and record any error
            err = addAllAndPin(req.Files)
        }()
        // close the response emitter when Run returns
        defer res.Close()
        // stream everything written to outChan out as the command's response
        err = res.Emit(outChan)
        if err != nil {
            log.Error(err)
            return
        }
        // wait for the upload goroutine's error
        err = <-errCh
        if err != nil {
            res.SetError(err, cmdkit.ErrNormal)
        }
    },
    // PostRun renders the results back to the command line
    PostRun: cmds.PostRunMap{
        // implementation for the CLI
        cmds.CLI: func(req *cmds.Request, re cmds.ResponseEmitter) cmds.ResponseEmitter {
            // create a new paired response/emitter
            reNext, res := cmds.NewChanResponsePair(req)
            // channel carrying the add output (file hashes and progress)
            outChan := make(chan interface{})
            // channel carrying the total size of the input
            sizeChan := make(chan int64, 1)
            // see whether the input files can report their total size
            sizeFile, ok := req.Files.(files.SizeFile)
            if ok {
                // Could be slow.
                go func() {
                    // get the size from the file object
                    size, err := sizeFile.Size()
                    if err != nil {
                        log.Warningf("error getting files size: %s", err)
                        // see comment above
                        return
                    }
                    // push the size onto the size channel
                    sizeChan <- size
                }()
            } else {
                // we don't need to error, the progress bar just
                // won't know how big the files are
                log.Warning("cannot determine size of input file")
            }
            // progress bar
            progressBar := func(wait chan struct{}) {
                defer close(wait)
                quiet, _ := req.Options[quietOptionName].(bool)
                quieter, _ := req.Options[quieterOptionName].(bool)
                quiet = quiet || quieter
                progress, _ := req.Options[progressOptionName].(bool)
                var bar *pb.ProgressBar
                if progress {
                    bar = pb.New64(0).SetUnits(pb.U_BYTES)
                    bar.ManualUpdate = true
                    bar.ShowTimeLeft = false
                    bar.ShowPercent = false
                    bar.Output = os.Stderr
                    bar.Start()
                }
                lastFile := ""
                lastHash := ""
                var totalProgress, prevFiles, lastBytes int64
            LOOP:
                for {
                    select {
                    case out, ok := <-outChan:
                        if !ok {
                            if quieter {
                                fmt.Fprintln(os.Stdout, lastHash)
                            }
                            break LOOP
                        }
                        output := out.(*coreunix.AddedObject)
                        if len(output.Hash) > 0 {
                            lastHash = output.Hash
                            if quieter {
                                continue
                            }
                            if progress {
                                // clear progress bar line before we print "added x" output
                                fmt.Fprintf(os.Stderr, "\033[2K\r")
                            }
                            if quiet {
                                fmt.Fprintf(os.Stdout, "%s\n", output.Hash)
                            } else {
                                fmt.Fprintf(os.Stdout, "added %s %s\n", output.Hash, output.Name)
                            }
                        } else {
                            if !progress {
                                continue
                            }
                            if len(lastFile) == 0 {
                                lastFile = output.Name
                            }
                            if output.Name != lastFile || output.Bytes < lastBytes {
                                prevFiles += lastBytes
                                lastFile = output.Name
                            }
                            lastBytes = output.Bytes
                            delta := prevFiles + lastBytes - totalProgress
                            totalProgress = bar.Add64(delta)
                        }
                        if progress {
                            bar.Update()
                        }
                    case size := <-sizeChan:
                        if progress {
                            bar.Total = size
                            bar.ShowPercent = true
                            bar.ShowBar = true
                            bar.ShowTimeLeft = true
                        }
                    case <-req.Context.Done():
                        // don't set or print error here, that happens in the goroutine below
                        return
                    }
                }
            }
            // drive the progress-bar display while the upload runs
            go func() {
                // defer order important! First close outChan, then wait for output to finish, then close re
                defer re.Close()
                if e := res.Error(); e != nil {
                    defer close(outChan)
                    re.SetError(e.Message, e.Code)
                    return
                }
                wait := make(chan struct{})
                go progressBar(wait)
                defer func() { <-wait }()
                defer close(outChan)
                for {
                    v, err := res.Next()
                    if !cmds.HandleError(err, res, re) {
                        break
                    }
                    select {
                    case outChan <- v:
                    case <-req.Context.Done():
                        re.SetError(req.Context.Err(), cmdkit.ErrNormal)
                        return
                    }
                }
            }()
            return reNext
        },
    },
    // the type emitted by this command
    Type: coreunix.AddedObject{},
}