uspv: remove package as it's no longer actively used within the project

This commit is contained in:
Olaoluwa Osuntokun 2017-01-05 13:27:03 -08:00
parent 798b0b9c9f
commit 5d37c1d9e7
No known key found for this signature in database
GPG Key ID: 9CC5B105D03521A2
11 changed files with 0 additions and 2964 deletions

@@ -1,70 +0,0 @@
# uspv - micro-SPV library
The uspv library implements simplified SPV wallet functionality.
It connects to full nodes using the standard port 8333 bitcoin protocol,
gets headers, uses bloom filters, gets blocks and transactions, and has
functions to send and receive coins.
## Files
Three files are used by the library:
#### Key file (currently testkey.hex)
This file contains the secret seed which creates all private keys used by the wallet. It is stored in ASCII hexadecimal for easy copying and pasting. If you don't enter a password when prompted, you'll get a warning and the key file will be saved in the clear, with no encryption. You shouldn't do that, though. When a password is used, the key file is longer and protects the secret seed with the scrypt KDF and nacl/secretbox.
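The two on-disk formats can be told apart by length alone. A minimal sketch, assuming the layout used by the key-file code in this package (24-byte scrypt salt reused as the secretbox nonce, plus a 16-byte auth tag around the 32-byte seed); the helper name `classifyKeyFile` is made up for illustration:

```go
package main

import "fmt"

// classifyKeyFile is an illustrative helper (not part of uspv): it
// classifies a hex-decoded key file payload by length. 32 bytes means a
// plaintext seed; 72 bytes means an encrypted seed (24-byte scrypt
// salt / secretbox nonce + 32-byte seed + 16-byte secretbox auth tag).
func classifyKeyFile(decoded []byte) string {
	switch len(decoded) {
	case 32:
		return "unencrypted seed"
	case 24 + 32 + 16: // 72
		return "scrypt/secretbox encrypted seed"
	default:
		return "unknown key file format"
	}
}

func main() {
	fmt.Println(classifyKeyFile(make([]byte, 32))) // unencrypted seed
	fmt.Println(classifyKeyFile(make([]byte, 72))) // scrypt/secretbox encrypted seed
}
```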
#### Header file (currently headers.bin)
This is a file storing all the block headers. Headers are 80 bytes long, so this file's size will always be a multiple of 80. All blockchain-technology verifications are performed when appending headers to the file. In the case of re-orgs, since it's so quick to get headers, it just truncates a bit and tries again.
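That size arithmetic can be sketched like this (hypothetical helper names, not the actual uspv API): with 80-byte headers and the genesis header at height 0, the chain tip and the re-org truncation point fall straight out of the file size.

```go
package main

import "fmt"

const headerSize = 80 // a serialized bitcoin block header is always 80 bytes

// tipHeight returns the height of the last header in a header file of
// the given size; the genesis header at offset 0 is height 0.
func tipHeight(fileSize int64) (int32, error) {
	if fileSize == 0 || fileSize%headerSize != 0 {
		return 0, fmt.Errorf("header file not a multiple of %d bytes", headerSize)
	}
	return int32(fileSize/headerSize) - 1, nil
}

// truncateTo returns the file size that keeps headers up to and
// including keepHeight, as when a re-org forces a retry.
func truncateTo(keepHeight int32) int64 {
	return int64(keepHeight+1) * headerSize
}

func main() {
	tip, _ := tipHeight(80 * 2017)
	fmt.Println(tip)           // 2016
	fmt.Println(truncateTo(0)) // 80: back to just the genesis header
}
```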
#### Database file (currently utxo.db)
This file is more complex. It uses bolt DB to store wallet information needed to send and receive bitcoins. The database file is organized into 4 main "buckets":
* Utxos ("DuffelBag")
This bucket stores all the utxos. The goal of bitcoin is to get lots of utxos, earning a high score.
* Stxos ("SpentTxs")
For record keeping, this bucket stores what used to be utxos but have since been spent, so they're no longer "u"txos, just spent outpoints. Each entry references the spending txid.
* Txns ("Txns")
This bucket stores full serialized transactions which are referenced in the Stxos bucket. These can be used to re-play transactions in the case of re-orgs.
* State ("MiscState")
This describes some miscellaneous global state variables of the database, such as what height it has synchronized up to and how many addresses have been created. (Currently those are the only 2 things stored.)
## Synchronization overview
Currently uspv only connects to one hard-coded node, as address messages and storage are not yet implemented. It first asks for headers, providing the last known header (writing the genesis header if needed). It loops through asking for headers until it receives an empty header message, which signals that headers are fully synchronized.
After header synchronization is complete, it requests merkle blocks starting at the keyfile birthday. (This is currently hard-coded; add new db key?) Bloom filters are generated for the addresses and utxos known to the wallet. If too many false positives are received, a new filter is generated and sent. (This happens fairly often because the filter exponentially saturates with false positives when using BloomUpdateAll.) Once the merkle blocks have been received up to the header height, the wallet is considered synchronized and it will listen for new inv messages from the remote node. An inv message describing a block will trigger a request for headers, starting the same synchronization process of headers then merkle-blocks.
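The loop above can be modeled as a toy simulation (illustrative only: `batch` stands in for the protocol's cap of 2000 headers per getheaders reply, and real merkle-block requests start at the wallet birthday rather than height 0):

```go
package main

import "fmt"

// sync models the header-then-merkleblock loop: keep asking for headers
// until an empty headers message arrives, then request one merkle block
// per block up to the header tip.
func sync(localTip, remoteTip, batch int) (headerReqs, blockReqs int) {
	for localTip < remoteTip {
		headerReqs++ // one getheaders request per batch
		got := remoteTip - localTip
		if got > batch {
			got = batch
		}
		localTip += got
	}
	headerReqs++ // the final request is answered with 0 headers, ending the loop
	blockReqs = localTip // one merkle-block request per block (birthday 0 here)
	return
}

func main() {
	h, b := sync(0, 4100, 2000)
	fmt.Println(h, b) // 4 4100
}
```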
## TODO
There's still quite a bit left, though most of it hopefully won't be too hard.
Problems / still to do:
* Only connects to one node, and that node is hard-coded.
* Re-orgs affect only headers, and don't evict confirmed transactions.
* Double spends are not detected; double-spent txs will stay at height 0.
* Tx creation and signing is still very rudimentary.
* There may be wire-protocol irregularities which can get it kicked off.
Hopefully I can get most of that list deleted soon.
(Now sanity checks txs, but can't check sigs... because it's SPV. Right.)
Later functionality to implement:
* "Desktop Mode" SPV, or "Unfiltered" SPV or some other name
This would be a mode where uspv doesn't use bloom filters or request merkle blocks, but instead grabs everything in the block and discards most of the data. This prevents nodes from learning about your utxo set. To further enhance this, it should connect to multiple nodes and relay txs and inv messages to blend in.
* Ironman SPV
Never request txs. Only merkleBlocks (or, in the above mode, full blocks). No unconfirmed transactions are presented to the user, which makes a whole lot of sense: with unconfirmed SPV transactions you're relying completely on the honesty of the reporting node.

@@ -1,406 +0,0 @@
package uspv
import (
"fmt"
"log"
"net"
"os"
"sync"
"github.com/roasbeef/btcd/wire"
)
const (
keyFileName = "testseed.hex"
headerFileName = "headers.bin"
// version hardcoded for now, probably ok...?
// 70012 is for segnet... make this an init var?
VERSION = 70012
)
type SPVCon struct {
con net.Conn // the (probably tcp) connection to the node
// Enhanced SPV modes for users who have outgrown easy mode SPV
// but have not yet graduated to full nodes.
HardMode bool // hard mode doesn't use filters.
Ironman bool // ironman only gets blocks, never requests txs.
headerMutex sync.Mutex
headerFile *os.File // file for SPV headers
//[doesn't work without fancy mutexes, nevermind, just use header file]
// localHeight int32 // block height we're on
remoteHeight int32 // block height they're on
localVersion uint32 // version we report
remoteVersion uint32 // version remote node
// what's the point of the input queue? remove? leave for now...
inMsgQueue chan wire.Message // Messages coming in from remote node
outMsgQueue chan wire.Message // Messages going out to remote node
WBytes uint64 // total bytes written
RBytes uint64 // total bytes read
TS *TxStore // transaction store to write to
// mBlockQueue is for keeping track of what height we've requested.
blockQueue chan HashAndHeight
// fPositives is a channel to keep track of bloom filter false positives.
fPositives chan int32
// waitState is a channel that is empty while in the header and block
// sync modes, but when in the idle state has a "true" in it.
inWaitState chan bool
}
// AskForTx requests a tx we heard about from an inv message.
// It's one at a time but should be fast enough.
// I don't like this function because SPV shouldn't even ask...
func (s *SPVCon) AskForTx(txid wire.ShaHash) {
gdata := wire.NewMsgGetData()
inv := wire.NewInvVect(wire.InvTypeTx, &txid)
gdata.AddInvVect(inv)
s.outMsgQueue <- gdata
}
// HashAndHeight is needed instead of just height in case a fullnode
// responds abnormally (?) by sending out of order merkleblocks.
// we cache a merkleroot:height pair in the queue so we don't have to
// look them up from the disk.
// Also used when inv messages indicate blocks so we can add the header
// and parse the txs in one request instead of requesting headers first.
type HashAndHeight struct {
blockhash wire.ShaHash
height int32
final bool // indicates this is the last merkleblock requested
}
// NewRootAndHeight saves like 2 lines.
func NewRootAndHeight(b wire.ShaHash, h int32) (hah HashAndHeight) {
hah.blockhash = b
hah.height = h
return
}
func (s *SPVCon) RemoveHeaders(r int32) error {
endPos, err := s.headerFile.Seek(0, os.SEEK_END)
if err != nil {
return err
}
err = s.headerFile.Truncate(endPos - int64(r*80))
if err != nil {
return fmt.Errorf("couldn't truncate header file")
}
return nil
}
func (s *SPVCon) IngestMerkleBlock(m *wire.MsgMerkleBlock) {
txids, err := checkMBlock(m) // check self-consistency
if err != nil {
log.Printf("Merkle block error: %s\n", err.Error())
return
}
var hah HashAndHeight
select { // select here so we don't block on an unrequested mblock
case hah = <-s.blockQueue: // pop height off mblock queue
break
default:
log.Printf("Unrequested merkle block")
return
}
// this verifies order, and also that the returned header fits
// into our SPV header file
newMerkBlockSha := m.Header.BlockSha()
if !hah.blockhash.IsEqual(&newMerkBlockSha) {
log.Printf("merkle block out of order got %s expect %s",
m.Header.BlockSha().String(), hah.blockhash.String())
log.Printf("has %d hashes %d txs flags: %x",
len(m.Hashes), m.Transactions, m.Flags)
return
}
for _, txid := range txids {
err := s.TS.AddTxid(txid, hah.height)
if err != nil {
log.Printf("Txid store error: %s\n", err.Error())
return
}
}
// write to db that we've sync'd to the height indicated in the
// merkle block. This isn't QUITE true since we haven't actually gotten
// the txs yet but if there are problems with the txs we should backtrack.
err = s.TS.SetDBSyncHeight(hah.height)
if err != nil {
log.Printf("Merkle block error: %s\n", err.Error())
return
}
if hah.final {
// don't set waitstate; instead, ask for headers again!
// this way the only thing that triggers waitstate is asking for headers,
// getting 0, calling AskForMerkBlocks(), and seeing you don't need any.
// that way you are pretty sure you're synced up.
err = s.AskForHeaders()
if err != nil {
log.Printf("Merkle block error: %s\n", err.Error())
return
}
}
return
}
// IngestHeaders takes in a bunch of headers and appends them to the
// local header file, checking that they fit. If there are no headers,
// it assumes we're done and returns false. If it worked, it assumes there's
// more to request and returns true.
func (s *SPVCon) IngestHeaders(m *wire.MsgHeaders) (bool, error) {
gotNum := int64(len(m.Headers))
if gotNum > 0 {
fmt.Printf("got %d headers. Range:\n%s - %s\n",
gotNum, m.Headers[0].BlockSha().String(),
m.Headers[len(m.Headers)-1].BlockSha().String())
} else {
log.Printf("got 0 headers, we're probably synced up")
return false, nil
}
s.headerMutex.Lock()
defer s.headerMutex.Unlock()
var err error
// seek to last header
_, err = s.headerFile.Seek(-80, os.SEEK_END)
if err != nil {
return false, err
}
var last wire.BlockHeader
err = last.Deserialize(s.headerFile)
if err != nil {
return false, err
}
prevHash := last.BlockSha()
endPos, err := s.headerFile.Seek(0, os.SEEK_END)
if err != nil {
return false, err
}
tip := int32(endPos/80) - 1 // move back 1 header length to read
// check first header returned to make sure it fits on the end
// of our header file
if !m.Headers[0].PrevBlock.IsEqual(&prevHash) {
// delete 100 headers if this happens! Dumb reorg.
log.Printf("reorg? header msg doesn't fit. points to %s, expect %s",
m.Headers[0].PrevBlock.String(), prevHash.String())
if endPos < 8080 {
// jeez I give up, back to genesis
s.headerFile.Truncate(80)
} else {
err = s.headerFile.Truncate(endPos - 8000)
if err != nil {
return false, fmt.Errorf("couldn't truncate header file")
}
}
return true, fmt.Errorf("Truncated header file to try again")
}
for _, resphdr := range m.Headers {
// write to end of file
err = resphdr.Serialize(s.headerFile)
if err != nil {
return false, err
}
// advance chain tip
tip++
// check last header
worked := CheckHeader(s.headerFile, tip, s.TS.Param)
if !worked {
if endPos < 8080 {
// jeez I give up, back to genesis
s.headerFile.Truncate(80)
} else {
err = s.headerFile.Truncate(endPos - 8000)
if err != nil {
return false, fmt.Errorf("couldn't truncate header file")
}
}
// probably should disconnect from spv node at this point,
// since they're giving us invalid headers.
return true, fmt.Errorf(
"Header %d - %s doesn't fit, dropping 100 headers.",
tip, resphdr.BlockSha().String())
}
}
log.Printf("Headers to height %d OK.", tip)
return true, nil
}
func (s *SPVCon) AskForHeaders() error {
var hdr wire.BlockHeader
ghdr := wire.NewMsgGetHeaders()
ghdr.ProtocolVersion = s.localVersion
s.headerMutex.Lock() // start header file ops
info, err := s.headerFile.Stat()
if err != nil {
return err
}
headerFileSize := info.Size()
if headerFileSize == 0 || headerFileSize%80 != 0 { // header file broken
return fmt.Errorf("Header file not a multiple of 80 bytes")
}
// seek to 80 bytes from end of file
ns, err := s.headerFile.Seek(-80, os.SEEK_END)
if err != nil {
log.Printf("can't seek\n")
return err
}
log.Printf("seeked to offset %d (should be near the end)\n", ns)
// get header from last 80 bytes of file
err = hdr.Deserialize(s.headerFile)
if err != nil {
log.Printf("can't Deserialize")
return err
}
s.headerMutex.Unlock() // done with header file
cHash := hdr.BlockSha()
err = ghdr.AddBlockLocatorHash(&cHash)
if err != nil {
return err
}
fmt.Printf("get headers message has %d header hashes, first one is %s\n",
len(ghdr.BlockLocatorHashes), ghdr.BlockLocatorHashes[0].String())
s.outMsgQueue <- ghdr
return nil
}
// AskForOneBlock is for testing only, so you can ask for a specific block height
// and see what goes wrong
func (s *SPVCon) AskForOneBlock(h int32) error {
var hdr wire.BlockHeader
var err error
dbTip := int32(h)
s.headerMutex.Lock() // seek to header we need
_, err = s.headerFile.Seek(int64((dbTip)*80), os.SEEK_SET)
if err != nil {
return err
}
err = hdr.Deserialize(s.headerFile) // read header, done w/ file for now
s.headerMutex.Unlock() // unlock after reading 1 header
if err != nil {
log.Printf("header deserialize error!\n")
return err
}
bHash := hdr.BlockSha()
// create inventory we're asking for
iv1 := wire.NewInvVect(wire.InvTypeWitnessBlock, &bHash)
gdataMsg := wire.NewMsgGetData()
// add inventory
err = gdataMsg.AddInvVect(iv1)
if err != nil {
return err
}
hah := NewRootAndHeight(bHash, h)
s.outMsgQueue <- gdataMsg
s.blockQueue <- hah // push height and mroot of requested block on queue
return nil
}
// AskForBlocks requests merkle blocks from the current db sync height to
// the last known header. Right now this asks for 1 block per getData
// message. Maybe it's faster to ask for many in each message?
func (s *SPVCon) AskForBlocks() error {
var hdr wire.BlockHeader
s.headerMutex.Lock() // lock just to check filesize
stat, err := os.Stat(headerFileName)
s.headerMutex.Unlock() // checked, unlock
if err != nil {
return err
}
endPos := stat.Size()
headerTip := int32(endPos/80) - 1 // move back 1 header length to read
dbTip, err := s.TS.GetDBSyncHeight()
if err != nil {
return err
}
fmt.Printf("dbTip %d headerTip %d\n", dbTip, headerTip)
if dbTip > headerTip {
return fmt.Errorf("error- db longer than headers! shouldn't happen.")
}
if dbTip == headerTip {
// nothing to ask for; set wait state and return
fmt.Printf("no blocks to request, entering wait state\n")
fmt.Printf("%d bytes received\n", s.RBytes)
s.inWaitState <- true
// also advertise any unconfirmed txs here
s.Rebroadcast()
return nil
}
fmt.Printf("will request blocks %d to %d\n", dbTip+1, headerTip)
if !s.HardMode { // don't send this in hardmode! that's the whole point
// create initial filter
filt, err := s.TS.GimmeFilter()
if err != nil {
return err
}
// send filter
s.SendFilter(filt)
fmt.Printf("sent filter %x\n", filt.MsgFilterLoad().Filter)
}
// loop through all heights where we want merkleblocks.
for dbTip < headerTip {
dbTip++ // we're requesting the next header
// load header from file
s.headerMutex.Lock() // seek to header we need
_, err = s.headerFile.Seek(int64((dbTip)*80), os.SEEK_SET)
if err != nil {
return err
}
err = hdr.Deserialize(s.headerFile) // read header, done w/ file for now
s.headerMutex.Unlock() // unlock after reading 1 header
if err != nil {
log.Printf("header deserialize error!\n")
return err
}
bHash := hdr.BlockSha()
// create inventory we're asking for
iv1 := new(wire.InvVect)
// if hardmode, ask for legit blocks, none of this ralphy stuff
if s.HardMode {
iv1 = wire.NewInvVect(wire.InvTypeWitnessBlock, &bHash)
} else { // ah well
iv1 = wire.NewInvVect(wire.InvTypeFilteredWitnessBlock, &bHash)
}
gdataMsg := wire.NewMsgGetData()
// add inventory
err = gdataMsg.AddInvVect(iv1)
if err != nil {
return err
}
hah := NewRootAndHeight(hdr.BlockSha(), dbTip)
if dbTip == headerTip { // if this is the last block, indicate finality
hah.final = true
}
// waits here most of the time for the queue to empty out
s.blockQueue <- hah // push height and mroot of requested block on queue
s.outMsgQueue <- gdataMsg
}
return nil
}

@@ -1,231 +0,0 @@
package uspv
import (
"bytes"
"fmt"
"log"
"github.com/roasbeef/btcd/wire"
"github.com/roasbeef/btcutil"
"github.com/roasbeef/btcutil/bloom"
)
var (
WitMagicBytes = []byte{0x6a, 0x24, 0xaa, 0x21, 0xa9, 0xed}
)
// BlockOK checks for block self-consistency.
// If the block has no witness txs, and no coinbase witness commitment,
// it only checks the tx merkle root. If either a witness commitment or
// any witnesses are detected, it checks the witness merkle root as well.
// Returns false if anything goes wrong, true if everything is fine.
func BlockOK(blk wire.MsgBlock) bool {
var txids, wtxids []*wire.ShaHash // txids and wtxids
// witMode true if any tx has a witness OR coinbase has wit commit
var witMode bool
for _, tx := range blk.Transactions { // make slice of (w)/txids
txid := tx.TxSha()
wtxid := tx.WitnessHash()
if !witMode && !txid.IsEqual(&wtxid) {
witMode = true
}
txids = append(txids, &txid)
wtxids = append(wtxids, &wtxid)
}
var commitBytes []byte
// try to extract coinbase witness commitment (even if !witMode)
cb := blk.Transactions[0] // get coinbase tx
for i := len(cb.TxOut) - 1; i >= 0; i-- { // start at the last txout
if bytes.HasPrefix(cb.TxOut[i].PkScript, WitMagicBytes) &&
len(cb.TxOut[i].PkScript) > 37 {
// 38 bytes or more, and starts with WitMagicBytes is a hit
commitBytes = cb.TxOut[i].PkScript[6:38]
witMode = true // if there is a wit commit it must be valid
}
}
if witMode { // witmode, so check witness tree
// first find ways witMode can be disqualified
if len(commitBytes) != 32 {
// witness in block but didn't find a witness commitment; fail
log.Printf("block %s has witness but no witcommit",
blk.BlockSha().String())
return false
}
if len(cb.TxIn) != 1 {
log.Printf("block %s coinbase tx has %d txins (must be 1)",
blk.BlockSha().String(), len(cb.TxIn))
return false
}
if len(cb.TxIn[0].Witness) != 1 {
log.Printf("block %s coinbase has %d witnesses (must be 1)",
blk.BlockSha().String(), len(cb.TxIn[0].Witness))
return false
}
if len(cb.TxIn[0].Witness[0]) != 32 {
log.Printf("block %s coinbase has %d byte witness nonce (not 32)",
blk.BlockSha().String(), len(cb.TxIn[0].Witness[0]))
return false
}
// witness nonce is the cb's witness, subject to above constraints
witNonce, err := wire.NewShaHash(cb.TxIn[0].Witness[0])
if err != nil {
log.Printf("Witness nonce error: %s", err.Error())
return false // not sure why that'd happen but fail
}
var empty [32]byte
wtxids[0].SetBytes(empty[:]) // coinbase wtxid is 0x00...00
// witness root calculated from wtxids
witRoot := calcRoot(wtxids)
calcWitCommit := wire.DoubleSha256SH(
append(witRoot.Bytes(), witNonce.Bytes()...))
// witness root given in coinbase op_return
givenWitCommit, err := wire.NewShaHash(commitBytes)
if err != nil {
log.Printf("Witness root error: %s", err.Error())
return false // not sure why that'd happen but fail
}
// they should be the same. If not, fail.
if !calcWitCommit.IsEqual(givenWitCommit) {
log.Printf("Block %s witRoot error: calc %s given %s",
blk.BlockSha().String(),
calcWitCommit.String(), givenWitCommit.String())
return false
}
}
// got through witMode check so that should be OK;
// check regular txid merkleroot. Which is, like, trivial.
return blk.Header.MerkleRoot.IsEqual(calcRoot(txids))
}
// calcRoot calculates the merkle root of a slice of hashes.
func calcRoot(hashes []*wire.ShaHash) *wire.ShaHash {
for len(hashes) < int(nextPowerOfTwo(uint32(len(hashes)))) {
hashes = append(hashes, nil) // pad out hash slice to get the full base
}
for len(hashes) > 1 { // calculate merkle root. Terse, eh?
hashes = append(hashes[2:], MakeMerkleParent(hashes[0], hashes[1]))
}
return hashes[0]
}
func (ts *TxStore) Refilter() error {
allUtxos, err := ts.GetAllUtxos()
if err != nil {
return err
}
filterElements := uint32(len(allUtxos) + len(ts.Adrs))
ts.localFilter = bloom.NewFilter(filterElements, 0, 0, wire.BloomUpdateAll)
for _, u := range allUtxos {
ts.localFilter.AddOutPoint(&u.Op)
}
for _, a := range ts.Adrs {
ts.localFilter.Add(a.PkhAdr.ScriptAddress())
}
msg := ts.localFilter.MsgFilterLoad()
fmt.Printf("made %d element filter: %x\n", filterElements, msg.Filter)
return nil
}
// IngestBlock is like IngestMerkleBlock but for full, unfiltered blocks;
// different enough that it's better to have 2 separate functions.
func (s *SPVCon) IngestBlock(m *wire.MsgBlock) {
var err error
// var buf bytes.Buffer
// m.SerializeWitness(&buf)
// fmt.Printf("block hex %x\n", buf.Bytes())
// for _, tx := range m.Transactions {
// fmt.Printf("wtxid: %s\n", tx.WTxSha())
// fmt.Printf(" txid: %s\n", tx.TxSha())
// fmt.Printf("%d %s", i, TxToString(tx))
// }
ok := BlockOK(*m) // check block self-consistency
if !ok {
fmt.Printf("block %s not OK!!11\n", m.BlockSha().String())
return
}
var hah HashAndHeight
select { // select here so we don't block on an unrequested mblock
case hah = <-s.blockQueue: // pop height off mblock queue
break
default:
log.Printf("Unrequested full block")
return
}
newBlockSha := m.Header.BlockSha()
if !hah.blockhash.IsEqual(&newBlockSha) {
log.Printf("full block out of order error")
return
}
fPositive := 0 // local filter false positives
reFilter := 10 // after that many false positives, regenerate filter.
// 10? Making it up. False positives have disk i/o cost, and regenning
// the filter also has costs. With a large local filter, false positives
// should be rare.
// iterate through all txs in the block, looking for matches.
// use a local bloom filter to ignore txs that don't affect us
for _, tx := range m.Transactions {
utilTx := btcutil.NewTx(tx)
if s.TS.localFilter.MatchTxAndUpdate(utilTx) {
hits, err := s.TS.Ingest(tx, hah.height)
if err != nil {
log.Printf("Incoming Tx error: %s\n", err.Error())
return
}
if hits > 0 {
// log.Printf("block %d tx %d %s ingested and matches %d utxo/adrs.",
// hah.height, i, tx.TxSha().String(), hits)
} else {
fPositive++ // matched filter but no hits
}
}
}
if fPositive > reFilter {
fmt.Printf("%d filter false positives in this block\n", fPositive)
err = s.TS.Refilter()
if err != nil {
log.Printf("Refilter error: %s\n", err.Error())
return
}
}
// write to db that we've sync'd to the height indicated in the
// merkle block. This isn't QUITE true since we haven't actually gotten
// the txs yet but if there are problems with the txs we should backtrack.
err = s.TS.SetDBSyncHeight(hah.height)
if err != nil {
log.Printf("full block sync error: %s\n", err.Error())
return
}
fmt.Printf("ingested full block %s height %d OK\n",
m.Header.BlockSha().String(), hah.height)
if hah.final { // check sync end
// don't set waitstate; instead, ask for headers again!
// this way the only thing that triggers waitstate is asking for headers,
// getting 0, calling AskForMerkBlocks(), and seeing you don't need any.
// that way you are pretty sure you're synced up.
err = s.AskForHeaders()
if err != nil {
log.Printf("Merkle block error: %s\n", err.Error())
return
}
}
return
}

@@ -1,194 +0,0 @@
/* this is blockchain technology. Well, except without the blocks.
Really it's header chain technology.
The blocks themselves don't really make a chain. Just the headers do.
*/
package uspv
import (
"io"
"log"
"math/big"
"os"
"time"
"github.com/roasbeef/btcd/blockchain"
"github.com/roasbeef/btcd/chaincfg"
"github.com/roasbeef/btcd/wire"
)
// blockchain settings. These are kind of bitcoin specific, but not contained
// in chaincfg.Params so they'll go here. If you're into the [ANN]altcoin
// scene, you may want to parameterize these constants.
const (
targetTimespan = time.Hour * 24 * 14
targetSpacing = time.Minute * 10
epochLength = int32(targetTimespan / targetSpacing) // 2016
maxDiffAdjust = 4
minRetargetTimespan = int64(targetTimespan / maxDiffAdjust)
maxRetargetTimespan = int64(targetTimespan * maxDiffAdjust)
)
/* checkProofOfWork verifies that the header hashes to something
lower than the target specified by the 4-byte bits field. */
func checkProofOfWork(header wire.BlockHeader, p *chaincfg.Params) bool {
target := blockchain.CompactToBig(header.Bits)
// The target must be more than 0. Why can you even encode negative...
if target.Sign() <= 0 {
log.Printf("block target %064x is negative(??)\n", target.Bytes())
return false
}
// The target must be less than the maximum allowed (difficulty 1)
if target.Cmp(p.PowLimit) > 0 {
log.Printf("block target %064x is "+
"higher than max of %064x", target, p.PowLimit.Bytes())
return false
}
// The header hash must be less than the claimed target in the header.
blockHash := header.BlockSha()
hashNum := blockchain.ShaHashToBig(&blockHash)
if hashNum.Cmp(target) > 0 {
log.Printf("block hash %064x is higher than "+
"required target of %064x", hashNum, target)
return false
}
return true
}
/* calcDiffAdjust returns the 'bits' difficulty value for the adjustment,
given the first and last block headers of the previous epoch.
Only feed it headers n-2016 and n-1, otherwise it will calculate a difficulty
when no adjustment should take place.
Note that the epoch is actually 2015 blocks long, which is confusing. */
func calcDiffAdjust(start, end wire.BlockHeader, p *chaincfg.Params) uint32 {
duration := end.Timestamp.UnixNano() - start.Timestamp.UnixNano()
if duration < minRetargetTimespan {
log.Printf("whoa there, block %s off-scale high 4X diff adjustment!",
end.BlockSha().String())
duration = minRetargetTimespan
} else if duration > maxRetargetTimespan {
log.Printf("Uh-oh! block %s off-scale low 0.25X diff adjustment!\n",
end.BlockSha().String())
duration = maxRetargetTimespan
}
// calculation of new 32-byte difficulty target
// first turn the previous target into a big int
prevTarget := blockchain.CompactToBig(start.Bits)
// new target is old * duration...
newTarget := new(big.Int).Mul(prevTarget, big.NewInt(duration))
// divided by 2 weeks
newTarget.Div(newTarget, big.NewInt(int64(targetTimespan)))
// clip again if above minimum target (too easy)
if newTarget.Cmp(p.PowLimit) > 0 {
newTarget.Set(p.PowLimit)
}
// calculate and return 4-byte 'bits' difficulty from 32-byte target
return blockchain.BigToCompact(newTarget)
}
func CheckHeader(r io.ReadSeeker, height int32, p *chaincfg.Params) bool {
var err error
var cur, prev, epochStart wire.BlockHeader
// don't try to verify the genesis block. That way madness lies.
if height == 0 {
return true
}
// initial load of headers
// load epochstart, previous and current.
// get the header from the epoch start, up to 2016 blocks ago
_, err = r.Seek(int64(80*(height-(height%epochLength))), os.SEEK_SET)
if err != nil {
log.Printf(err.Error())
return false
}
err = epochStart.Deserialize(r)
if err != nil {
log.Printf(err.Error())
return false
}
// log.Printf("start epoch at height %d ", height-(height%epochLength))
// seek to n-1 header
_, err = r.Seek(int64(80*(height-1)), os.SEEK_SET)
if err != nil {
log.Printf(err.Error())
return false
}
// read in n-1
err = prev.Deserialize(r)
if err != nil {
log.Printf(err.Error())
return false
}
// seek to curHeight header and read in
_, err = r.Seek(int64(80*(height)), os.SEEK_SET)
if err != nil {
log.Printf(err.Error())
return false
}
err = cur.Deserialize(r)
if err != nil {
log.Printf(err.Error())
return false
}
// get hash of n-1 header
prevHash := prev.BlockSha()
// check if headers link together. That whole 'blockchain' thing.
if !prevHash.IsEqual(&cur.PrevBlock) {
log.Printf("Headers %d and %d don't link.\n",
height-1, height)
log.Printf("%s - %s",
prev.BlockSha().String(), cur.BlockSha().String())
return false
}
rightBits := epochStart.Bits // normal, no adjustment; Dn = Dn-1
// see if we're on a difficulty adjustment block
if (height)%epochLength == 0 {
// if so, check if difficulty adjustment is valid.
// That whole "controlled supply" thing.
// calculate diff n based on n-2016 ... n-1
rightBits = calcDiffAdjust(epochStart, prev, p)
// done with adjustment, save new epochStart header
epochStart = cur
log.Printf("Update epoch at height %d", height)
} else { // not a new epoch
// if on testnet, check for difficulty nerfing
if p.ResetMinDifficulty && cur.Timestamp.After(
prev.Timestamp.Add(targetSpacing*2)) {
// fmt.Printf("nerf %d ", curHeight)
rightBits = p.PowLimitBits // difficulty 1
}
if cur.Bits != rightBits {
log.Printf("Block %d %s incorrect difficulty. Read %x, expect %x\n",
height, cur.BlockSha().String(), cur.Bits, rightBits)
return false
}
}
// check if there's a valid proof of work. That whole "Bitcoin" thing.
if !checkProofOfWork(cur, p) {
log.Printf("Block %d Bad proof of work.\n", height)
return false
}
return true // it must have worked if there are no errors and we got to the end.
}
/* checkrange verifies a range of headers. it checks their proof of work,
difficulty adjustments, and that they all link in to each other properly.
This is the only blockchain technology in the whole code base.
Returns false if anything bad happens. Returns true if the range checks
out with no errors. */
func CheckRange(r io.ReadSeeker, first, last int32, p *chaincfg.Params) bool {
for i := first; i <= last; i++ {
if !CheckHeader(r, i, p) {
return false
}
}
return true // all good.
}

@@ -1,127 +0,0 @@
package uspv
import (
"bytes"
"io/ioutil"
"log"
"net"
"os"
"github.com/roasbeef/btcd/chaincfg"
"github.com/roasbeef/btcd/wire"
)
// OpenSPV opens a connection to the remote node, performs the version
// handshake, and starts the message handler goroutines.
func OpenSPV(remoteNode string, hfn, dbfn string,
inTs *TxStore, hard bool, iron bool, p *chaincfg.Params) (SPVCon, error) {
// create new SPVCon
var s SPVCon
s.HardMode = hard
s.Ironman = iron
// I should really merge SPVCon and TxStore, they're basically the same
inTs.Param = p
s.TS = inTs // copy pointer of txstore into spvcon
// open header file
err := s.openHeaderFile(hfn)
if err != nil {
return s, err
}
// open TCP connection
s.con, err = net.Dial("tcp", remoteNode)
if err != nil {
return s, err
}
// assign version bits for local node
s.localVersion = VERSION
// transaction store for this SPV connection
err = inTs.OpenDB(dbfn)
if err != nil {
return s, err
}
myMsgVer, err := wire.NewMsgVersionFromConn(s.con, 0, 0)
if err != nil {
return s, err
}
err = myMsgVer.AddUserAgent("test", "zero")
if err != nil {
return s, err
}
// must set this to enable SPV stuff
myMsgVer.AddService(wire.SFNodeBloom)
// set this to enable segWit
myMsgVer.AddService(wire.SFNodeWitness)
// this actually sends
n, err := wire.WriteMessageN(s.con, myMsgVer, s.localVersion, s.TS.Param.Net)
if err != nil {
return s, err
}
s.WBytes += uint64(n)
log.Printf("wrote %d byte version message to %s\n",
n, s.con.RemoteAddr().String())
n, m, b, err := wire.ReadMessageN(s.con, s.localVersion, s.TS.Param.Net)
if err != nil {
return s, err
}
s.RBytes += uint64(n)
log.Printf("got %d byte response %x\ncommand: %s\n", n, b, m.Command())
mv, ok := m.(*wire.MsgVersion)
if ok { // only deref mv if the remote actually sent a version message
log.Printf("connected to %s", mv.UserAgent)
log.Printf("remote reports version %x (dec %d)\n",
mv.ProtocolVersion, mv.ProtocolVersion)
// set remote height
s.remoteHeight = mv.LastBlock
}
mva := wire.NewMsgVerAck()
n, err = wire.WriteMessageN(s.con, mva, s.localVersion, s.TS.Param.Net)
if err != nil {
return s, err
}
s.WBytes += uint64(n)
s.inMsgQueue = make(chan wire.Message)
go s.incomingMessageHandler()
s.outMsgQueue = make(chan wire.Message)
go s.outgoingMessageHandler()
s.blockQueue = make(chan HashAndHeight, 32) // queue depth 32 is a thing
s.fPositives = make(chan int32, 4000) // a block full, approx
s.inWaitState = make(chan bool, 1)
go s.fPositiveHandler()
if hard {
err = s.TS.Refilter()
if err != nil {
return s, err
}
}
return s, nil
}
func (s *SPVCon) openHeaderFile(hfn string) error {
_, err := os.Stat(hfn)
if err != nil {
if os.IsNotExist(err) {
var b bytes.Buffer
err = s.TS.Param.GenesisBlock.Header.Serialize(&b)
if err != nil {
return err
}
err = ioutil.WriteFile(hfn, b.Bytes(), 0600)
if err != nil {
return err
}
log.Printf("created hardcoded genesis header at %s\n",
hfn)
} else {
return err // Stat failed for a reason other than the file not existing
}
}
s.headerFile, err = os.OpenFile(hfn, os.O_RDWR, 0600)
if err != nil {
return err
}
log.Printf("opened header file %s\n", s.headerFile.Name())
return nil
}

@@ -1,202 +0,0 @@
package uspv
import (
"crypto/rand"
"encoding/hex"
"fmt"
"io/ioutil"
"os"
"strings"
"github.com/howeyc/gopass"
"github.com/roasbeef/btcd/chaincfg"
"github.com/roasbeef/btcutil/hdkeychain"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt"
)
// the key-file crypto below: scrypt for the KDF, nacl/secretbox for
// authenticated encryption
/* on-disk stored keys are 32bytes. This is good for ed25519 private keys,
for seeds for bip32, for individual secp256k1 priv keys, and so on.
32 bytes is enough for anyone.
If you want fewer bytes, put some zeroes at the end */
// LoadKeyFromFileInteractive opens the file 'filename' and presents a
// keyboard prompt for the passphrase to decrypt it. It returns the
// key if decryption works, or errors out.
func LoadKeyFromFileInteractive(filename string) (*[32]byte, error) {
a, err := os.Stat(filename)
if err != nil {
return nil, err
}
if a.Size() < 80 { // there can't be a password...
return LoadKeyFromFileArg(filename, nil)
}
fmt.Printf("passphrase: ")
pass, err := gopass.GetPasswd()
if err != nil {
return nil, err
}
fmt.Printf("\n")
return LoadKeyFromFileArg(filename, pass)
}
// LoadKeyFromFileArg opens the file and returns the key. If the key is
// unencrypted it will ignore the password argument.
func LoadKeyFromFileArg(filename string, pass []byte) (*[32]byte, error) {
priv32 := new([32]byte)
keyhex, err := ioutil.ReadFile(filename)
if err != nil {
return priv32, err
}
keyhex = []byte(strings.TrimSpace(string(keyhex)))
enckey, err := hex.DecodeString(string(keyhex))
if err != nil {
return priv32, err
}
if len(enckey) == 32 { // UNencrypted key, length 32
fmt.Printf("WARNING!! Key file not encrypted!!\n")
fmt.Printf("Anyone who can read the key file can take everything!\n")
fmt.Printf("You should start over and use a good passphrase!\n")
copy(priv32[:], enckey[:])
return priv32, nil
}
// enckey should be 72 bytes. 24 for scrypt salt/box nonce,
// 16 for box auth
if len(enckey) != 72 {
return priv32, fmt.Errorf("Key length error for %s ", filename)
}
// enckey is actually encrypted, get derived key from pass and salt
// first extract salt
salt := new([24]byte) // salt (also nonce for secretbox)
dk32 := new([32]byte) // derived key array
copy(salt[:], enckey[:24]) // first 24 bytes are scrypt salt/box nonce
dk, err := scrypt.Key(pass, salt[:], 16384, 8, 1, 32) // derive key
if err != nil {
return priv32, err
}
copy(dk32[:], dk[:]) // copy into fixed size array
// nonce for secretbox is the same as scrypt salt. Seems fine. Really.
priv, ok := secretbox.Open(nil, enckey[24:], salt, dk32)
if !ok {
return priv32, fmt.Errorf("Decryption failed for %s ", filename)
}
copy(priv32[:], priv[:]) //copy decrypted private key into array
priv = nil // this probably doesn't do anything but... eh why not
return priv32, nil
}
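The 72-byte encrypted key format above splits into a 24-byte scrypt salt (reused as the secretbox nonce) and a 48-byte secretbox output (the 32-byte key plus a 16-byte auth tag). A sketch of slicing that layout, with a zeroed blob standing in for a real key file:

```go
package main

import "fmt"

func main() {
	// layout used by LoadKeyFromFileArg / SaveKeyToFileArg:
	// [0:24]  scrypt salt, doubling as the secretbox nonce
	// [24:72] secretbox output = 32-byte key + 16-byte poly1305 tag
	enckey := make([]byte, 72) // zeroed stand-in for a decoded key file
	salt := enckey[:24]
	box := enckey[24:]
	fmt.Println(len(salt), len(box), len(box)-16) // salt, box, plaintext sizes
}
```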
// SaveKeyToFileInteractive saves a 32 byte key to file, prompting for a
// passphrase. If the user enters an empty passphrase (hits enter twice),
// the key is saved in the clear.
func SaveKeyToFileInteractive(filename string, priv32 *[32]byte) error {
var match bool
var err error
var pass1, pass2 []byte
for !match {
fmt.Printf("passphrase: ")
pass1, err = gopass.GetPasswd()
if err != nil {
return err
}
fmt.Printf("repeat passphrase: ")
pass2, err = gopass.GetPasswd()
if err != nil {
return err
}
if string(pass1) == string(pass2) {
match = true
} else {
fmt.Printf("passphrases do not match. Try again.\n")
}
}
fmt.Printf("\n")
return SaveKeyToFileArg(filename, priv32, pass1)
}
// SaveKeyToFileArg saves a 32 byte key to a file, encrypting with pass.
// If pass is nil or zero length, it doesn't encrypt and just saves in hex.
func SaveKeyToFileArg(filename string, priv32 *[32]byte, pass []byte) error {
if len(pass) == 0 { // zero-length pass, save unencrypted
keyhex := fmt.Sprintf("%x\n", priv32[:])
err := ioutil.WriteFile(filename, []byte(keyhex), 0600)
if err != nil {
return err
}
fmt.Printf("WARNING!! Key file not encrypted!!\n")
fmt.Printf("Anyone who can read the key file can take everything!\n")
fmt.Printf("You should start over and use a good passphrase!\n")
fmt.Printf("Saved unencrypted key at %s\n", filename)
return nil
}
salt := new([24]byte) // salt for scrypt / nonce for secretbox
dk32 := new([32]byte) // derived key from scrypt
//get 24 random bytes for scrypt salt (and secretbox nonce)
_, err := rand.Read(salt[:])
if err != nil {
return err
}
// next use the pass and salt to make a 32-byte derived key
dk, err := scrypt.Key(pass, salt[:], 16384, 8, 1, 32)
if err != nil {
return err
}
copy(dk32[:], dk[:])
enckey := append(salt[:], secretbox.Seal(nil, priv32[:], salt, dk32)...)
// enckey = append(salt, enckey...)
keyhex := fmt.Sprintf("%x\n", enckey)
err = ioutil.WriteFile(filename, []byte(keyhex), 0600)
if err != nil {
return err
}
fmt.Printf("Wrote encrypted key to %s\n", filename)
return nil
}
// ReadKeyFileToECPriv returns an extendedkey from a file.
// If there's no file there, it'll make one. If there's a password needed,
// it'll prompt for one. One stop function.
func ReadKeyFileToECPriv(
filename string, p *chaincfg.Params) (*hdkeychain.ExtendedKey, error) {
key32 := new([32]byte)
_, err := os.Stat(filename)
if err != nil {
if os.IsNotExist(err) {
// no key found, generate and save one
fmt.Printf("No file %s, generating.\n", filename)
rn, err := hdkeychain.GenerateSeed(32)
if err != nil {
return nil, err
}
copy(key32[:], rn[:])
err = SaveKeyToFileInteractive(filename, key32)
if err != nil {
return nil, err
}
} else {
// unknown stat error; give up
fmt.Printf("could not stat key file %s\n", filename)
return nil, err
}
}
key, err := LoadKeyFromFileInteractive(filename)
if err != nil {
return nil, err
}
rootpriv, err := hdkeychain.NewMaster(key[:], p)
if err != nil {
return nil, err
}
return rootpriv, nil
}

@ -1,167 +0,0 @@
package uspv
import (
"fmt"
"github.com/roasbeef/btcd/wire"
)
func MakeMerkleParent(left *wire.ShaHash, right *wire.ShaHash) *wire.ShaHash {
// dupes can screw things up; CVE-2012-2459. check for them
if left != nil && right != nil && left.IsEqual(right) {
fmt.Printf("DUP HASH CRASH")
return nil
}
// if left child is nil, output nil. Need this for hard mode.
if left == nil {
return nil
}
// if right is nil, hash left with itself
if right == nil {
right = left
}
// Concatenate the left and right nodes
var sha [64]byte
copy(sha[:32], left[:])
copy(sha[32:], right[:])
newSha := wire.DoubleSha256SH(sha[:])
return &newSha
}
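MakeMerkleParent is double-SHA256 over the concatenated children. A stdlib-only sketch of the same operation (makeParent here is a hypothetical stand-in for wire.DoubleSha256SH on the 64-byte concatenation):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// makeParent sketches MakeMerkleParent with only the stdlib:
// double-SHA256 over left||right.
func makeParent(left, right [32]byte) [32]byte {
	var cat [64]byte
	copy(cat[:32], left[:])
	copy(cat[32:], right[:])
	first := sha256.Sum256(cat[:])
	return sha256.Sum256(first[:])
}

func main() {
	var l, r [32]byte
	r[0] = 1
	p := makeParent(l, r)
	// hashing a duplicated left child (the right == nil case above) gives a
	// different parent than distinct children; conflating the two is what
	// CVE-2012-2459 abuses
	dup := makeParent(l, l)
	fmt.Println(len(p) == 32, p != dup)
}
```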
type merkleNode struct {
p uint32 // position in the binary tree
h *wire.ShaHash // hash
}
// given n merkle leaves, how deep is the tree?
// iterate shifting left until greater than n
func treeDepth(n uint32) (e uint8) {
for ; (1 << e) < n; e++ {
}
return
}
// smallest power of 2 that can contain n
func nextPowerOfTwo(n uint32) uint32 {
return 1 << treeDepth(n) // 2^exponent
}
// check if a node is populated based on node position and size of tree
func inDeadZone(pos, size uint32) bool {
msb := nextPowerOfTwo(size)
last := size - 1 // last valid position is 1 less than size
if pos > (msb<<1)-2 { // greater than root; not even in the tree
fmt.Printf(" ?? greater than root ")
return true
}
h := msb
for pos >= h {
h = h>>1 | msb
last = last>>1 | msb
}
return pos > last
}
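To see the dead-zone logic concretely, here is a stdlib-only copy of the three helpers applied to a 5-transaction block, which pads to 8 leaves:

```go
package main

import "fmt"

// stdlib-only copies of the tree helpers above
func treeDepth(n uint32) (e uint8) {
	for ; (1 << e) < n; e++ {
	}
	return
}

func nextPowerOfTwo(n uint32) uint32 { return 1 << treeDepth(n) }

func inDeadZone(pos, size uint32) bool {
	msb := nextPowerOfTwo(size)
	last := size - 1
	if pos > (msb<<1)-2 { // greater than root; not in the tree
		return true
	}
	h := msb
	for pos >= h {
		h = h>>1 | msb
		last = last>>1 | msb
	}
	return pos > last
}

func main() {
	// 5 leaves pad to 8 (depth 3): leaf slots 5-7 are dead, and on the
	// row above only positions 8-10 are populated
	fmt.Println(treeDepth(5), nextPowerOfTwo(5))      // depth and padded width
	fmt.Println(inDeadZone(4, 5), inDeadZone(5, 5))   // last leaf vs first dead slot
	fmt.Println(inDeadZone(10, 5), inDeadZone(11, 5)) // parent row boundary
	fmt.Println(inDeadZone(14, 5))                    // the root is live
}
```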
// take in a merkle block, parse through it, and return txids indicated
// If there's any problem return an error. Checks self-consistency only.
// Implemented with an explicit stack instead of recursion.
func checkMBlock(m *wire.MsgMerkleBlock) ([]*wire.ShaHash, error) {
if m.Transactions == 0 {
return nil, fmt.Errorf("No transactions in merkleblock")
}
if len(m.Flags) == 0 {
return nil, fmt.Errorf("No flag bits")
}
var s []merkleNode // the stack
var r []*wire.ShaHash // slice to return; txids we care about
// set initial position to root of merkle tree
msb := nextPowerOfTwo(m.Transactions) // most significant bit possible
pos := (msb << 1) - 2 // current position in tree
var i uint8 // position in the current flag byte
var tip int
// main loop
for {
tip = len(s) - 1 // slice position of stack tip
// First check if stack operations can be performed
// is stack one filled item? that's complete.
if tip == 0 && s[0].h != nil {
if s[0].h.IsEqual(&m.Header.MerkleRoot) {
return r, nil
}
return nil, fmt.Errorf("computed root %s but expected %s",
s[0].h.String(), m.Header.MerkleRoot.String())
}
// is current position in the tree's dead zone? partial parent
if inDeadZone(pos, m.Transactions) {
// create merkle parent from single side (left)
s[tip-1].h = MakeMerkleParent(s[tip].h, nil)
s = s[:tip] // remove 1 from stack
pos = s[tip-1].p | 1 // move position to parent's sibling
continue
}
// does stack have 3+ items? and are last 2 items filled?
if tip > 1 && s[tip-1].h != nil && s[tip].h != nil {
//fmt.Printf("nodes %d and %d combine into %d\n",
// s[tip-1].p, s[tip].p, s[tip-2].p)
// combine two filled nodes into parent node
s[tip-2].h = MakeMerkleParent(s[tip-1].h, s[tip].h)
// remove children
s = s[:tip-1]
// move position to parent's sibling
pos = s[tip-2].p | 1
continue
}
// no stack ops to perform, so make new node from message hashes
if len(m.Hashes) == 0 {
return nil, fmt.Errorf("Ran out of hashes at position %d.", pos)
}
if len(m.Flags) == 0 {
return nil, fmt.Errorf("Ran out of flag bits.")
}
var n merkleNode // make new node
n.p = pos // set current position for new node
if pos&msb != 0 { // upper non-txid hash
if m.Flags[0]&(1<<i) == 0 { // flag bit says fill node
n.h = m.Hashes[0] // copy hash from message
m.Hashes = m.Hashes[1:] // pop off message
if pos&1 != 0 { // right side; ascend
pos = pos>>1 | msb
} else { // left side, go to sibling
pos |= 1
}
} else { // flag bit says skip; put empty on stack and descend
pos = (pos ^ msb) << 1 // descend to left
}
s = append(s, n) // push new node on stack
} else { // bottom row txid; flag bit indicates tx of interest
if pos >= m.Transactions {
// this can't happen because we check deadzone above...
return nil, fmt.Errorf("got into an invalid txid node")
}
n.h = m.Hashes[0] // copy hash from message
m.Hashes = m.Hashes[1:] // pop off message
if m.Flags[0]&(1<<i) != 0 { //txid of interest
r = append(r, n.h)
}
if pos&1 == 0 { // left side, go to sibling
pos |= 1
} // if on right side we don't move; stack ops will move next
s = append(s, n) // push new node onto the stack
}
// done with pushing onto stack; advance flag bit
i++
if i == 8 { // move to next byte
i = 0
m.Flags = m.Flags[1:]
}
}
}

@ -1,245 +0,0 @@
package uspv
import (
"fmt"
"log"
"github.com/roasbeef/btcd/wire"
"github.com/roasbeef/btcutil"
)
func (s *SPVCon) incomingMessageHandler() {
for {
n, xm, _, err := wire.ReadMessageN(s.con, s.localVersion, s.TS.Param.Net)
if err != nil {
log.Printf("ReadMessageN error. Disconnecting: %s\n", err.Error())
return
}
s.RBytes += uint64(n)
// log.Printf("Got %d byte %s message\n", n, xm.Command())
switch m := xm.(type) {
case *wire.MsgVersion:
log.Printf("Got version message. Agent %s, version %d, at height %d\n",
m.UserAgent, m.ProtocolVersion, m.LastBlock)
s.remoteVersion = uint32(m.ProtocolVersion) // ProtocolVersion is int32 on the wire
case *wire.MsgVerAck:
log.Printf("Got verack. Whatever.\n")
case *wire.MsgAddr:
log.Printf("got %d addresses.\n", len(m.AddrList))
case *wire.MsgPing:
// log.Printf("Got a ping message. We should pong back or they will kick us off.")
go s.PongBack(m.Nonce)
case *wire.MsgPong:
log.Printf("Got a pong response. OK.\n")
case *wire.MsgBlock:
s.IngestBlock(m)
case *wire.MsgMerkleBlock:
s.IngestMerkleBlock(m)
case *wire.MsgHeaders: // concurrent because we keep asking for blocks
go s.HeaderHandler(m)
case *wire.MsgTx: // not concurrent! txs must be in order
s.TxHandler(m)
case *wire.MsgReject:
log.Printf("Rejected! cmd: %s code: %s tx: %s reason: %s",
m.Cmd, m.Code.String(), m.Hash.String(), m.Reason)
case *wire.MsgInv:
s.InvHandler(m)
case *wire.MsgNotFound:
log.Printf("Got not found response from remote:")
for i, thing := range m.InvList {
log.Printf("\t%d) %s: %s", i, thing.Type, thing.Hash)
}
case *wire.MsgGetData:
s.GetDataHandler(m)
default:
log.Printf("Got unknown message type %s\n", m.Command())
}
}
}
// outgoingMessageHandler serializes all writes to the peer. It could be
// removed by having callers use WriteMessageN directly.
func (s *SPVCon) outgoingMessageHandler() {
for {
msg := <-s.outMsgQueue
n, err := wire.WriteMessageN(s.con, msg, s.localVersion, s.TS.Param.Net)
if err != nil {
log.Printf("Write message error: %s", err.Error())
}
s.WBytes += uint64(n)
}
}
// fPositiveHandler monitors false positives and, once enough accumulate,
// rebuilds and resends the bloom filter.
func (s *SPVCon) fPositiveHandler() {
var fpAccumulator int32
for {
fpAccumulator += <-s.fPositives // blocks here
if fpAccumulator > 7 {
filt, err := s.TS.GimmeFilter()
if err != nil {
log.Printf("Filter creation error: %s\n", err.Error())
log.Printf("uhoh, crashing filter handler")
return
}
// send filter
s.SendFilter(filt)
fmt.Printf("sent filter %x\n", filt.MsgFilterLoad().Filter)
// clear the channel
finClear:
for {
select {
case x := <-s.fPositives:
fpAccumulator += x
default:
break finClear
}
}
fmt.Printf("reset %d false positives\n", fpAccumulator)
// reset accumulator
fpAccumulator = 0
}
}
}
func (s *SPVCon) HeaderHandler(m *wire.MsgHeaders) {
moar, err := s.IngestHeaders(m)
if err != nil {
log.Printf("Header error: %s\n", err.Error())
return
}
// more to get? if so, ask for them and return
if moar {
err = s.AskForHeaders()
if err != nil {
log.Printf("AskForHeaders error: %s", err.Error())
}
return
}
// no moar, done w/ headers, get blocks
err = s.AskForBlocks()
if err != nil {
log.Printf("AskForBlocks error: %s", err.Error())
return
}
}
// TxHandler takes in transaction messages that come in from either a request
// after an inv message or after a merkle block message.
func (s *SPVCon) TxHandler(m *wire.MsgTx) {
s.TS.OKMutex.Lock()
height, ok := s.TS.OKTxids[m.TxSha()]
s.TS.OKMutex.Unlock()
if !ok {
log.Printf("Tx %s unknown, will not ingest\n", m.TxSha().String())
return
}
// check for double spends
// allTxs, err := s.TS.GetAllTxs()
// if err != nil {
// log.Printf("Can't get txs from db: %s", err.Error())
// return
// }
// dubs, err := CheckDoubleSpends(m, allTxs)
// if err != nil {
// log.Printf("CheckDoubleSpends error: %s", err.Error())
// return
// }
// if len(dubs) > 0 {
// for i, dub := range dubs {
// fmt.Printf("dub %d known tx %s and new tx %s are exclusive!!!\n",
// i, dub.String(), m.TxSha().String())
// }
// }
utilTx := btcutil.NewTx(m)
if !s.HardMode || s.TS.localFilter.MatchTxAndUpdate(utilTx) {
hits, err := s.TS.Ingest(m, height)
if err != nil {
log.Printf("Incoming Tx error: %s\n", err.Error())
return
}
if hits == 0 && !s.HardMode {
log.Printf("tx %s had no hits, filter false positive.",
m.TxSha().String())
s.fPositives <- 1 // add one false positive to chan
return
}
log.Printf("tx %s ingested and matches %d utxo/adrs.",
m.TxSha().String(), hits)
}
}
// GetDataHandler responds to requests for tx data, which happen after
// advertising our txs via an inv message
func (s *SPVCon) GetDataHandler(m *wire.MsgGetData) {
log.Printf("got GetData. Contains:\n")
var sent int32
for i, thing := range m.InvList {
log.Printf("\t%d)%s : %s",
i, thing.Type.String(), thing.Hash.String())
// separate wittx and tx. needed / combine?
// does the same thing right now
if thing.Type == wire.InvTypeWitnessTx {
tx, err := s.TS.GetTx(&thing.Hash)
if err != nil {
log.Printf("error getting tx %s: %s",
thing.Hash.String(), err.Error())
}
s.outMsgQueue <- tx
sent++
continue
}
if thing.Type == wire.InvTypeTx {
tx, err := s.TS.GetTx(&thing.Hash)
if err != nil {
log.Printf("error getting tx %s: %s",
thing.Hash.String(), err.Error())
}
//tx.Flags = 0x00 // dewitnessify
s.outMsgQueue <- tx
sent++
continue
}
// didn't match, so it's not something we're responding to
log.Printf("We only respond to tx requests, ignoring")
}
log.Printf("sent %d of %d requested items", sent, len(m.InvList))
}
func (s *SPVCon) InvHandler(m *wire.MsgInv) {
log.Printf("got inv. Contains:\n")
for i, thing := range m.InvList {
log.Printf("\t%d)%s : %s",
i, thing.Type.String(), thing.Hash.String())
if thing.Type == wire.InvTypeTx {
if !s.Ironman { // ignore tx invs in ironman mode
// new tx, OK it at 0 and request
s.TS.AddTxid(&thing.Hash, 0) // unconfirmed
s.AskForTx(thing.Hash)
}
}
if thing.Type == wire.InvTypeBlock { // new block what to do?
select {
case <-s.inWaitState:
// start getting headers
fmt.Printf("asking for headers due to inv block\n")
err := s.AskForHeaders()
if err != nil {
log.Printf("AskForHeaders error: %s", err.Error())
}
default:
// drop it as if its component particles had high thermal energies
fmt.Printf("inv block but ignoring; not synched\n")
}
}
}
}

@ -1,461 +0,0 @@
package uspv
import (
"bytes"
"fmt"
"log"
"sort"
"github.com/roasbeef/btcd/blockchain"
"github.com/roasbeef/btcd/txscript"
"github.com/roasbeef/btcd/wire"
"github.com/roasbeef/btcutil"
"github.com/roasbeef/btcutil/bloom"
"github.com/roasbeef/btcutil/hdkeychain"
"github.com/roasbeef/btcutil/txsort"
)
func (s *SPVCon) PongBack(nonce uint64) {
mpong := wire.NewMsgPong(nonce)
s.outMsgQueue <- mpong
return
}
func (s *SPVCon) SendFilter(f *bloom.Filter) {
s.outMsgQueue <- f.MsgFilterLoad()
return
}
// Rebroadcast sends an inv message of all the unconfirmed txs the db is
// aware of. This is called after every sync. Only txids so hopefully not
// too annoying for nodes.
func (s *SPVCon) Rebroadcast() {
// get all unconfirmed txs
invMsg, err := s.TS.GetPendingInv()
if err != nil {
log.Printf("Rebroadcast error: %s", err.Error())
}
if len(invMsg.InvList) == 0 { // nothing to broadcast, so don't
return
}
s.outMsgQueue <- invMsg
return
}
// make utxo slices sortable -- same as txsort
type utxoSlice []Utxo
// Sort utxos just like txins -- Len, Less, Swap
func (s utxoSlice) Len() int { return len(s) }
func (s utxoSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// outpoint sort; First input hash (reversed / rpc-style), then index.
func (s utxoSlice) Less(i, j int) bool {
// Input hashes are the same, so compare the index.
ihash := s[i].Op.Hash
jhash := s[j].Op.Hash
if ihash == jhash {
return s[i].Op.Index < s[j].Op.Index
}
// At this point, the hashes are not equal, so reverse them to
// big-endian and return the result of the comparison.
const hashSize = wire.HashSize
for b := 0; b < hashSize/2; b++ {
ihash[b], ihash[hashSize-1-b] = ihash[hashSize-1-b], ihash[b]
jhash[b], jhash[hashSize-1-b] = jhash[hashSize-1-b], jhash[b]
}
return bytes.Compare(ihash[:], jhash[:]) == -1
}
type SortableUtxoSlice []Utxo
// SortableUtxoSlice sorts by utxo value, putting unconfirmed utxos last.
func (s SortableUtxoSlice) Len() int { return len(s) }
func (s SortableUtxoSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// height 0 (unconfirmed) sorts as lesser
func (s SortableUtxoSlice) Less(i, j int) bool {
if s[i].AtHeight == 0 && s[j].AtHeight > 0 {
return true
}
if s[j].AtHeight == 0 && s[i].AtHeight > 0 {
return false
}
return s[i].Value < s[j].Value
}
func (s *SPVCon) NewOutgoingTx(tx *wire.MsgTx) error {
txid := tx.TxSha()
// assign height of zero for txs we create
err := s.TS.AddTxid(&txid, 0)
if err != nil {
return err
}
_, err = s.TS.Ingest(tx, 0) // our own tx; don't keep track of false positives
if err != nil {
return err
}
// make an inv message instead of a tx message to be polite
iv1 := wire.NewInvVect(wire.InvTypeWitnessTx, &txid)
invMsg := wire.NewMsgInv()
err = invMsg.AddInvVect(iv1)
if err != nil {
return err
}
s.outMsgQueue <- invMsg
return nil
}
func (s *SPVCon) SendOne(u Utxo, adr btcutil.Address) error {
// fixed fee
fee := int64(5000)
sendAmt := u.Value - fee
tx := wire.NewMsgTx() // make new tx
// add single output
outAdrScript, err := txscript.PayToAddrScript(adr)
if err != nil {
return err
}
// make user specified txout and add to tx
txout := wire.NewTxOut(sendAmt, outAdrScript)
tx.AddTxOut(txout)
var prevPKs []byte
if u.IsWit {
//tx.Flags = 0x01
wa, err := btcutil.NewAddressWitnessPubKeyHash(
s.TS.Adrs[u.KeyIdx].PkhAdr.ScriptAddress(), s.TS.Param)
if err != nil {
return err
}
prevPKs, err = txscript.PayToAddrScript(wa)
if err != nil {
return err
}
} else { // otherwise generate directly
prevPKs, err = txscript.PayToAddrScript(
s.TS.Adrs[u.KeyIdx].PkhAdr)
if err != nil {
return err
}
}
tx.AddTxIn(wire.NewTxIn(&u.Op, prevPKs, nil))
var sig []byte
var wit [][]byte
hCache := txscript.NewTxSigHashes(tx)
child, err := s.TS.rootPrivKey.Child(u.KeyIdx + hdkeychain.HardenedKeyStart)
if err != nil {
return err
}
priv, err := child.ECPrivKey()
if err != nil {
return err
}
// This is where witness based sighash types need to happen
// sign into stash
if u.IsWit {
wit, err = txscript.WitnessScript(
tx, hCache, 0, u.Value, tx.TxIn[0].SignatureScript,
txscript.SigHashAll, priv, true)
if err != nil {
return err
}
} else {
sig, err = txscript.SignatureScript(
tx, 0, tx.TxIn[0].SignatureScript,
txscript.SigHashAll, priv, true)
if err != nil {
return err
}
}
// swap sigs into sigScripts in txins
if sig != nil {
tx.TxIn[0].SignatureScript = sig
}
if wit != nil {
tx.TxIn[0].Witness = wit
tx.TxIn[0].SignatureScript = nil
}
return s.NewOutgoingTx(tx)
}
// SendCoins sends coins, but is still rudimentary.
// Witness outputs become p2wpkh, which is not yet spendable.
func (s *SPVCon) SendCoins(adrs []btcutil.Address, sendAmts []int64) error {
if len(adrs) != len(sendAmts) {
return fmt.Errorf("%d addresses and %d amounts", len(adrs), len(sendAmts))
}
var err error
var score, totalSend, fee int64
dustCutoff := int64(20000) // below this amount, just give to miners
satPerByte := int64(80) // satoshis per byte fee; have as arg later
rawUtxos, err := s.TS.GetAllUtxos()
if err != nil {
return err
}
var allUtxos SortableUtxoSlice
// start with utxos sorted by value.
for _, utxo := range rawUtxos {
score += utxo.Value
allUtxos = append(allUtxos, *utxo)
}
// smallest and unconfirmed last (because it's reversed)
sort.Sort(sort.Reverse(allUtxos))
// sort.Reverse(allUtxos)
for _, amt := range sendAmts {
totalSend += amt
}
// important rule in bitcoin, output total > input total is invalid.
if totalSend > score {
return fmt.Errorf("trying to send %d but %d available.",
totalSend, score)
}
tx := wire.NewMsgTx() // make new tx
// add non-change (arg) outputs
for i, adr := range adrs {
// make address script 76a914...88ac or 0014...
outAdrScript, err := txscript.PayToAddrScript(adr)
if err != nil {
return err
}
// make user specified txout and add to tx
txout := wire.NewTxOut(sendAmts[i], outAdrScript)
tx.AddTxOut(txout)
}
// generate a utxo slice for your inputs
var ins utxoSlice
// add utxos until we've had enough
nokori := totalSend // nokori is how much is needed on input side
for _, utxo := range allUtxos {
// skip unconfirmed. Or de-prioritize?
// if utxo.AtHeight == 0 {
// continue
// }
// yeah, lets add this utxo!
ins = append(ins, utxo)
// as we add utxos, fill in sigscripts
// generate previous pkscripts (subscripts) for all utxos
// then make txins with the utxo and prevpk, and insert them into the tx
// these are all zeroed out during signing but it's an easy way to keep track
var prevPKs []byte
if utxo.IsWit {
//tx.Flags = 0x01
wa, err := btcutil.NewAddressWitnessPubKeyHash(
s.TS.Adrs[utxo.KeyIdx].PkhAdr.ScriptAddress(), s.TS.Param)
if err != nil {
return err
}
prevPKs, err = txscript.PayToAddrScript(wa)
if err != nil {
return err
}
} else { // otherwise generate directly
prevPKs, err = txscript.PayToAddrScript(
s.TS.Adrs[utxo.KeyIdx].PkhAdr)
if err != nil {
return err
}
}
tx.AddTxIn(wire.NewTxIn(&utxo.Op, prevPKs, nil))
nokori -= utxo.Value
// if nokori is positive, don't bother checking fee yet.
if nokori < 0 {
fee = EstFee(tx, satPerByte)
if nokori < -fee { // done adding utxos: nokori below negative est. fee
break
}
}
}
// see if there's enough left to also add a change output
changeOld, err := s.TS.NewAdr() // change is witnessy
if err != nil {
return err
}
changeAdr, err := btcutil.NewAddressWitnessPubKeyHash(
changeOld.ScriptAddress(), s.TS.Param)
if err != nil {
return err
}
changeScript, err := txscript.PayToAddrScript(changeAdr)
if err != nil {
return err
}
changeOut := wire.NewTxOut(0, changeScript)
tx.AddTxOut(changeOut)
fee = EstFee(tx, satPerByte)
changeOut.Value = -(nokori + fee)
if changeOut.Value < dustCutoff {
// remove last output (change) : not worth it
tx.TxOut = tx.TxOut[:len(tx.TxOut)-1]
}
// sort utxos on the input side. use this instead of txsort
// because we want to remember which keys are associated with which inputs
sort.Sort(ins)
// sort tx -- this only will change txouts since inputs are already sorted
txsort.InPlaceSort(tx)
// tx is ready for signing,
sigStash := make([][]byte, len(ins))
witStash := make([][][]byte, len(ins))
// generate tx-wide hashCache for segwit stuff
// middle index number doesn't matter for sighashAll.
hCache := txscript.NewTxSigHashes(tx)
for i, txin := range tx.TxIn {
// pick key
child, err := s.TS.rootPrivKey.Child(
ins[i].KeyIdx + hdkeychain.HardenedKeyStart)
if err != nil {
return err
}
priv, err := child.ECPrivKey()
if err != nil {
return err
}
// This is where witness based sighash types need to happen
// sign into stash
if ins[i].IsWit {
witStash[i], err = txscript.WitnessScript(
tx, hCache, i, ins[i].Value, txin.SignatureScript,
txscript.SigHashAll, priv, true)
if err != nil {
return err
}
} else {
sigStash[i], err = txscript.SignatureScript(
tx, i, txin.SignatureScript,
txscript.SigHashAll, priv, true)
if err != nil {
return err
}
}
}
// swap sigs into sigScripts in txins
for i, txin := range tx.TxIn {
if sigStash[i] != nil {
txin.SignatureScript = sigStash[i]
}
if witStash[i] != nil {
txin.Witness = witStash[i]
txin.SignatureScript = nil
}
}
// fmt.Printf("tx: %s", TxToString(tx))
// buf := bytes.NewBuffer(make([]byte, 0, tx.SerializeSize()))
// send it out on the wire. hope it gets there.
// we should deal with rejects. Don't yet.
err = s.NewOutgoingTx(tx)
if err != nil {
return err
}
return nil
}
// EstFee gives a fee estimate based on a tx and a sat/byte target.
// The tx should have all outputs, including the change output, already
// populated (potentially with a 0 amount). It should also have all inputs
// populated, but inputs don't need sigscripts or witnesses
// (sizes of missing sigs/wits are estimated).
func EstFee(otx *wire.MsgTx, spB int64) int64 {
mtsig := make([]byte, 72)
mtpub := make([]byte, 33)
tx := otx.Copy()
// iterate through txins, replacing subscript sigscripts with noise
// sigs or witnesses
for _, txin := range tx.TxIn {
// check wpkh
if len(txin.SignatureScript) == 22 &&
txin.SignatureScript[0] == 0x00 && txin.SignatureScript[1] == 0x14 {
txin.SignatureScript = nil
txin.Witness = make([][]byte, 2)
txin.Witness[0] = mtsig
txin.Witness[1] = mtpub
} else if len(txin.SignatureScript) == 34 &&
txin.SignatureScript[0] == 0x00 && txin.SignatureScript[1] == 0x20 {
// p2wsh -- sig length is a total guess!
txin.SignatureScript = nil
txin.Witness = make([][]byte, 3)
// 3 sigs? totally guessing here
txin.Witness[0] = mtsig
txin.Witness[1] = mtsig
txin.Witness[2] = mtsig
} else {
// assume everything else is p2pkh. Even though it's not
txin.Witness = nil
txin.SignatureScript = make([]byte, 105) // len of p2pkh sigscript
}
}
fmt.Printf(TxToString(tx))
size := int64(blockchain.GetTxVirtualSize(btcutil.NewTx(tx)))
fmt.Printf("%d spB, est vsize %d, fee %d\n", spB, size, size*spB)
return size * spB
}
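The resulting fee is just virtual size times the sat/byte rate. A trivial sketch using 141 vbytes (an assumed figure for a typical 1-input, 2-output p2wpkh tx, not something EstFee hardcodes) and the satPerByte value used in SendCoins:

```go
package main

import "fmt"

func main() {
	// fee math used by EstFee: fee = virtual size * sat/byte rate
	size := int64(141) // assumed vsize of a 1-in 2-out p2wpkh tx
	spB := int64(80)   // the satPerByte value hardcoded in SendCoins above
	fmt.Printf("%d spB, est vsize %d, fee %d\n", spB, size, size*spB)
}
```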
// SignThis isn't used anymore...
func (t *TxStore) SignThis(tx *wire.MsgTx) error {
fmt.Printf("-= SignThis =-\n")
// sort tx before signing.
txsort.InPlaceSort(tx)
sigs := make([][]byte, len(tx.TxIn))
// first iterate over each input
for j, in := range tx.TxIn {
for k := uint32(0); k < uint32(len(t.Adrs)); k++ {
child, err := t.rootPrivKey.Child(k + hdkeychain.HardenedKeyStart)
if err != nil {
return err
}
myadr, err := child.Address(t.Param)
if err != nil {
return err
}
adrScript, err := txscript.PayToAddrScript(myadr)
if err != nil {
return err
}
if bytes.Equal(adrScript, in.SignatureScript) {
fmt.Printf("Hit; key %d matches input %d. Signing.\n", k, j)
priv, err := child.ECPrivKey()
if err != nil {
return err
}
sigs[j], err = txscript.SignatureScript(
tx, j, in.SignatureScript, txscript.SigHashAll, priv, true)
if err != nil {
return err
}
break
}
}
}
for i, s := range sigs {
if s != nil {
tx.TxIn[i].SignatureScript = s
}
}
return nil
}

@ -1,363 +0,0 @@
package uspv
import (
"bytes"
"encoding/binary"
"fmt"
"log"
"sync"
"github.com/roasbeef/btcd/blockchain"
"github.com/roasbeef/btcd/chaincfg"
"github.com/boltdb/bolt"
"github.com/roasbeef/btcd/wire"
"github.com/roasbeef/btcutil"
"github.com/roasbeef/btcutil/bloom"
"github.com/roasbeef/btcutil/hdkeychain"
)
type TxStore struct {
OKTxids map[wire.ShaHash]int32 // known good txids and their heights
OKMutex sync.Mutex
Adrs []MyAdr // endeavouring to acquire capital
StateDB *bolt.DB // place to write all this down
localFilter *bloom.Filter // local bloom filter for hard mode
// Params live here, not SCon
Param *chaincfg.Params // network parameters (testnet3, testnetL)
// From here, comes everything. It's a secret to everybody.
rootPrivKey *hdkeychain.ExtendedKey
}
type Utxo struct { // cash money.
Op wire.OutPoint // where
// all the info needed to spend
AtHeight int32 // block height where this tx was confirmed, 0 for unconf
KeyIdx uint32 // index for private key needed to sign / spend
Value int64 // higher is better
// IsCoinbase bool // can't spend for a while
IsWit bool // true if p2wpkh output
}
// Stxo is a utxo that has moved on.
type Stxo struct {
Utxo // when it used to be a utxo
SpendHeight int32 // height at which it met its demise
SpendTxid wire.ShaHash // the tx that consumed it
}
type MyAdr struct { // an address I have the private key for
PkhAdr btcutil.Address
KeyIdx uint32 // index for private key needed to sign / spend
// ^^ this is kindof redundant because it'll just be their position
// inside the Adrs slice, right? leave for now
}
func NewTxStore(rootkey *hdkeychain.ExtendedKey, p *chaincfg.Params) TxStore {
var txs TxStore
txs.rootPrivKey = rootkey
txs.Param = p
txs.OKTxids = make(map[wire.ShaHash]int32)
return txs
}
// add txid of interest
func (t *TxStore) AddTxid(txid *wire.ShaHash, height int32) error {
if txid == nil {
return fmt.Errorf("tried to add nil txid")
}
log.Printf("added %s to OKTxids at height %d\n", txid.String(), height)
t.OKMutex.Lock()
t.OKTxids[*txid] = height
t.OKMutex.Unlock()
return nil
}
// GimmeFilter builds a bloom filter from all known addresses and utxos.
func (t *TxStore) GimmeFilter() (*bloom.Filter, error) {
if len(t.Adrs) == 0 {
return nil, fmt.Errorf("no address to filter for")
}
// get all utxos to add outpoints to filter
allUtxos, err := t.GetAllUtxos()
if err != nil {
return nil, err
}
elem := uint32(len(t.Adrs) + len(allUtxos))
f := bloom.NewFilter(elem, 0, 0.000001, wire.BloomUpdateAll)
// note there could be false positives since we're just looking
// for the 20 byte PKH without the opcodes.
for _, a := range t.Adrs { // add 20-byte pubkeyhash
f.Add(a.PkhAdr.ScriptAddress())
}
for _, u := range allUtxos {
f.AddOutPoint(&u.Op)
}
return f, nil
}
// CheckDoubleSpends takes a transaction and compares it with
// all transactions in the db. It returns a slice of all txids in the db
// which are double spent by the received tx.
func CheckDoubleSpends(
argTx *wire.MsgTx, txs []*wire.MsgTx) ([]*wire.ShaHash, error) {
var dubs []*wire.ShaHash // slice of all double-spent txs
argTxid := argTx.TxSha()
for _, compTx := range txs {
compTxid := compTx.TxSha()
// check if entire tx is dup
if argTxid.IsEqual(&compTxid) {
return nil, fmt.Errorf("tx %s is dup", argTxid.String())
}
// not dup, iterate through inputs of argTx
for _, argIn := range argTx.TxIn {
// iterate through inputs of compTx
for _, compIn := range compTx.TxIn {
if OutPointsEqual(
argIn.PreviousOutPoint, compIn.PreviousOutPoint) {
// found double spend
dubs = append(dubs, &compTxid)
break // back to argIn loop
}
}
}
}
return dubs, nil
}
// TxToString prints out some info about a transaction. for testing / debugging
func TxToString(tx *wire.MsgTx) string {
str := fmt.Sprintf("size %d vsize %d wsize %d locktime %d txid %s\n",
tx.SerializeSize(), blockchain.GetTxVirtualSize(btcutil.NewTx(tx)),
tx.SerializeSize(), tx.LockTime, tx.TxSha().String())
for i, in := range tx.TxIn {
str += fmt.Sprintf("Input %d spends %s\n", i, in.PreviousOutPoint.String())
str += fmt.Sprintf("\tSigScript: %x\n", in.SignatureScript)
for j, wit := range in.Witness {
str += fmt.Sprintf("\twitness %d: %x\n", j, wit)
}
}
for i, out := range tx.TxOut {
if out != nil {
str += fmt.Sprintf("output %d script: %x amt: %d\n",
i, out.PkScript, out.Value)
} else {
str += fmt.Sprintf("output %d nil (WARNING)\n", i)
}
}
return str
}
// OutPointsEqual compares two outpoints by value. Comparing pointers
// instead can give a false negative when the same outpoint is stored
// in two places.
func OutPointsEqual(a, b wire.OutPoint) bool {
if !a.Hash.IsEqual(&b.Hash) {
return false
}
return a.Index == b.Index
}
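Why value comparison matters here can be shown with a minimal stand-in type (assumed, not the real wire.OutPoint): two distinct copies of the same outpoint compare equal by value but not by address.

```go
package main

import "fmt"

// outPoint is a simplified stand-in for wire.OutPoint.
type outPoint struct {
	Hash  [32]byte
	Index uint32
}

// outPointsEqual compares field by field, like OutPointsEqual above.
func outPointsEqual(a, b outPoint) bool {
	return a.Hash == b.Hash && a.Index == b.Index
}

func main() {
	a := outPoint{Index: 3}
	b := a // a distinct copy with the same value
	// value comparison says equal; pointer comparison says not
	fmt.Println(outPointsEqual(a, b), &a == &b) // true false
}
```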
/*----- serialization for tx outputs ------- */
// outPointToBytes turns an outpoint into 36 bytes.
func outPointToBytes(op *wire.OutPoint) ([]byte, error) {
var buf bytes.Buffer
_, err := buf.Write(op.Hash.Bytes())
if err != nil {
return nil, err
}
// write 4 byte outpoint index within the tx to spend
err = binary.Write(&buf, binary.BigEndian, op.Index)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
/*----- serialization for utxos ------- */
/* Utxos serialization:
byte length desc at offset
32 txid 0
4 idx 32
4 height 36
4 keyidx 40
8 amt 44
1 flag 52
end len 53
*/
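The 53-byte layout above can be sketched with a self-contained round trip (the struct and method names here are assumed mirrors of the real Utxo, and error returns from writes into a bytes.Buffer are omitted since they never fail):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// utxo mirrors the 53-byte layout documented above (illustrative names).
type utxo struct {
	Txid     [32]byte // 32 bytes at offset 0
	Index    uint32   // 4 bytes at offset 32
	AtHeight int32    // 4 bytes at offset 36
	KeyIdx   uint32   // 4 bytes at offset 40
	Value    int64    // 8 bytes at offset 44
	IsWit    bool     // 1 flag byte at offset 52
}

// toBytes serializes in the documented field order, big-endian.
func (u *utxo) toBytes() []byte {
	var buf bytes.Buffer
	buf.Write(u.Txid[:])
	binary.Write(&buf, binary.BigEndian, u.Index)
	binary.Write(&buf, binary.BigEndian, u.AtHeight)
	binary.Write(&buf, binary.BigEndian, u.KeyIdx)
	binary.Write(&buf, binary.BigEndian, u.Value)
	wit := byte(0x00)
	if u.IsWit {
		wit = 0x01
	}
	buf.WriteByte(wit)
	return buf.Bytes()
}

func main() {
	u := utxo{Index: 1, AtHeight: 500000, KeyIdx: 7, Value: 12345, IsWit: true}
	fmt.Println(len(u.toBytes())) // 53, matching the table: end len 53
}
```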
// ToBytes turns a Utxo into some bytes.
// note that the outpoint (txid + index) is the first 36 bytes; in our use case
// it's stripped off and used as the db key, but is left here for other applications
func (u *Utxo) ToBytes() ([]byte, error) {
var buf bytes.Buffer
// write 32 byte txid of the utxo
_, err := buf.Write(u.Op.Hash.Bytes())
if err != nil {
return nil, err
}
// write 4 byte outpoint index within the tx to spend
err = binary.Write(&buf, binary.BigEndian, u.Op.Index)
if err != nil {
return nil, err
}
// write 4 byte height of utxo
err = binary.Write(&buf, binary.BigEndian, u.AtHeight)
if err != nil {
return nil, err
}
// write 4 byte key index of utxo
err = binary.Write(&buf, binary.BigEndian, u.KeyIdx)
if err != nil {
return nil, err
}
// write 8 byte amount of money at the utxo
err = binary.Write(&buf, binary.BigEndian, u.Value)
if err != nil {
return nil, err
}
// last byte indicates tx witness flags ( tx[5] from serialized tx)
// write a 1 at the end for p2wpkh (same as flags byte)
witByte := byte(0x00)
if u.IsWit {
witByte = 0x01
}
err = buf.WriteByte(witByte)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// UtxoFromBytes turns bytes into a Utxo. Note it wants the txid and outindex
// in the first 36 bytes, which isn't stored that way in the boltDB,
// but can be easily appended.
func UtxoFromBytes(b []byte) (Utxo, error) {
var u Utxo
if b == nil {
return u, fmt.Errorf("nil input slice")
}
buf := bytes.NewBuffer(b)
if buf.Len() < 53 { // utxos are 53 bytes
return u, fmt.Errorf("got %d bytes for utxo, expect 53", buf.Len())
}
// read 32 byte txid
err := u.Op.Hash.SetBytes(buf.Next(32))
if err != nil {
return u, err
}
// read 4 byte outpoint index within the tx to spend
err = binary.Read(buf, binary.BigEndian, &u.Op.Index)
if err != nil {
return u, err
}
// read 4 byte height of utxo
err = binary.Read(buf, binary.BigEndian, &u.AtHeight)
if err != nil {
return u, err
}
// read 4 byte key index of utxo
err = binary.Read(buf, binary.BigEndian, &u.KeyIdx)
if err != nil {
return u, err
}
// read 8 byte amount of money at the utxo
err = binary.Read(buf, binary.BigEndian, &u.Value)
if err != nil {
return u, err
}
// read 1 byte witness flags
witByte, err := buf.ReadByte()
if err != nil {
return u, err
}
if witByte != 0x00 {
u.IsWit = true
}
return u, nil
}
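The key/value split mentioned above (outpoint as the 36-byte bolt key, remaining 17 bytes as the value, re-appended before deserializing) can be sketched like this; the helper names are hypothetical:

```go
package main

import "fmt"

// splitUtxo mimics how a 53-byte serialized utxo is stored in the
// DuffelBag bucket: the 36-byte outpoint is the key, the rest the value.
func splitUtxo(full []byte) (key, val []byte) {
	return full[:36], full[36:]
}

// joinUtxo re-appends key and value into a fresh slice, the same copy
// dance GetAllUtxos does before calling UtxoFromBytes.
func joinUtxo(key, val []byte) []byte {
	x := make([]byte, len(key)+len(val))
	copy(x, key)
	copy(x[len(key):], val)
	return x
}

func main() {
	full := make([]byte, 53)
	for i := range full {
		full[i] = byte(i)
	}
	k, v := splitUtxo(full)
	fmt.Println(len(k), len(v), len(joinUtxo(k, v))) // 36 17 53
}
```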
/*----- serialization for stxos ------- */
/* Stxo serialization:
byte length desc at offset
53 utxo 0
4 sheight 53
32 stxid 57
end len 89
*/
// ToBytes turns an Stxo into some bytes.
// prevUtxo serialization, then spendheight [4], spendtxid [32]
func (s *Stxo) ToBytes() ([]byte, error) {
var buf bytes.Buffer
// first serialize the utxo part
uBytes, err := s.Utxo.ToBytes()
if err != nil {
return nil, err
}
// write that into the buffer first
_, err = buf.Write(uBytes)
if err != nil {
return nil, err
}
// write 4 byte height where the txo was spent
err = binary.Write(&buf, binary.BigEndian, s.SpendHeight)
if err != nil {
return nil, err
}
// write 32 byte txid of the spending transaction
_, err = buf.Write(s.SpendTxid.Bytes())
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// StxoFromBytes turns bytes into a Stxo.
// first take the first 53 bytes as a utxo, then the next 36 for how it's spent.
func StxoFromBytes(b []byte) (Stxo, error) {
var s Stxo
if len(b) < 89 {
return s, fmt.Errorf("got %d bytes for stxo, expect 89", len(b))
}
u, err := UtxoFromBytes(b[:53])
if err != nil {
return s, err
}
s.Utxo = u // assign the utxo
buf := bytes.NewBuffer(b[53:]) // make buffer for spend data
// read 4 byte spend height
err = binary.Read(buf, binary.BigEndian, &s.SpendHeight)
if err != nil {
return s, err
}
// read 32 byte txid
err = s.SpendTxid.SetBytes(buf.Next(32))
if err != nil {
return s, err
}
return s, nil
}

@ -1,498 +0,0 @@
package uspv
import (
"bytes"
"encoding/binary"
"fmt"
"github.com/roasbeef/btcd/blockchain"
"github.com/roasbeef/btcd/txscript"
"github.com/roasbeef/btcd/wire"
"github.com/roasbeef/btcutil"
"github.com/roasbeef/btcutil/hdkeychain"
"github.com/boltdb/bolt"
)
var (
BKTUtxos = []byte("DuffelBag") // leave the rest to collect interest
BKTStxos = []byte("SpentTxs") // for bookkeeping
BKTTxns = []byte("Txns") // all txs we care about, for replays
BKTState = []byte("MiscState") // last state of DB
// these are in the state bucket
KEYNumKeys = []byte("NumKeys") // number of p2pkh keys used
KEYTipHeight = []byte("TipHeight") // height synced to
)
func (ts *TxStore) OpenDB(filename string) error {
var err error
var numKeys uint32
ts.StateDB, err = bolt.Open(filename, 0644, nil)
if err != nil {
return err
}
// create buckets if they're not already there
err = ts.StateDB.Update(func(btx *bolt.Tx) error {
_, err = btx.CreateBucketIfNotExists(BKTUtxos)
if err != nil {
return err
}
_, err = btx.CreateBucketIfNotExists(BKTStxos)
if err != nil {
return err
}
_, err = btx.CreateBucketIfNotExists(BKTTxns)
if err != nil {
return err
}
sta, err := btx.CreateBucketIfNotExists(BKTState)
if err != nil {
return err
}
numKeysBytes := sta.Get(KEYNumKeys)
if numKeysBytes != nil { // NumKeys exists, read into uint32
buf := bytes.NewBuffer(numKeysBytes)
err := binary.Read(buf, binary.BigEndian, &numKeys)
if err != nil {
return err
}
fmt.Printf("db says %d keys\n", numKeys)
} else { // no keys in db yet; start at 1 so the wallet always has one address
numKeys = 1
var buf bytes.Buffer
err = binary.Write(&buf, binary.BigEndian, numKeys)
if err != nil {
return err
}
err = sta.Put(KEYNumKeys, buf.Bytes())
if err != nil {
return err
}
}
return nil
})
if err != nil {
return err
}
return ts.PopulateAdrs(numKeys)
}
// NewAdr creates a new, never-before-seen address, increments the key
// counter in the DB, adds the address to the in-RAM Adrs store, and returns it.
func (ts *TxStore) NewAdr() (btcutil.Address, error) {
if ts.Param == nil {
return nil, fmt.Errorf("NewAdr error: nil param")
}
priv := new(hdkeychain.ExtendedKey)
var err error
var n uint32
var nAdr btcutil.Address
n = uint32(len(ts.Adrs))
priv, err = ts.rootPrivKey.Child(n + hdkeychain.HardenedKeyStart)
if err != nil {
return nil, err
}
nAdr, err = priv.Address(ts.Param)
if err != nil {
return nil, err
}
// total number of keys (now +1) into 4 bytes
var buf bytes.Buffer
err = binary.Write(&buf, binary.BigEndian, n+1)
if err != nil {
return nil, err
}
// write to db file
err = ts.StateDB.Update(func(btx *bolt.Tx) error {
sta := btx.Bucket(BKTState)
return sta.Put(KEYNumKeys, buf.Bytes())
})
if err != nil {
return nil, err
}
// add in to ram.
var ma MyAdr
ma.PkhAdr = nAdr
ma.KeyIdx = n
ts.Adrs = append(ts.Adrs, ma)
ts.localFilter.Add(ma.PkhAdr.ScriptAddress())
return nAdr, nil
}
// SetDBSyncHeight sets sync height of the db, indicating the latest block
// of which it has ingested all the transactions.
func (ts *TxStore) SetDBSyncHeight(n int32) error {
var buf bytes.Buffer
_ = binary.Write(&buf, binary.BigEndian, n)
return ts.StateDB.Update(func(btx *bolt.Tx) error {
sta := btx.Bucket(BKTState)
return sta.Put(KEYTipHeight, buf.Bytes())
})
}
// GetDBSyncHeight returns the chain height to which the db has synced
func (ts *TxStore) GetDBSyncHeight() (int32, error) {
var n int32
err := ts.StateDB.View(func(btx *bolt.Tx) error {
sta := btx.Bucket(BKTState)
if sta == nil {
return fmt.Errorf("no state")
}
t := sta.Get(KEYTipHeight)
if t == nil { // no height written, so 0
return nil
}
// read 4 byte tip height to n
err := binary.Read(bytes.NewBuffer(t), binary.BigEndian, &n)
if err != nil {
return err
}
return nil
})
if err != nil {
return 0, err
}
return n, nil
}
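The 4-byte big-endian encoding used for the TipHeight state entry round-trips like this (helper names assumed; writes into a bytes.Buffer never fail, so that error is ignored):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// heightToBytes encodes a height the way SetDBSyncHeight does.
func heightToBytes(n int32) []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.BigEndian, n) // bytes.Buffer writes can't fail
	return buf.Bytes()
}

// heightFromBytes decodes the 4 bytes back, as GetDBSyncHeight does.
func heightFromBytes(b []byte) (int32, error) {
	var n int32
	err := binary.Read(bytes.NewBuffer(b), binary.BigEndian, &n)
	return n, err
}

func main() {
	b := heightToBytes(481824)
	n, _ := heightFromBytes(b)
	fmt.Println(len(b), n) // 4 481824
}
```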
// GetAllUtxos returns a slice of all utxos known to the db. empty slice is OK.
func (ts *TxStore) GetAllUtxos() ([]*Utxo, error) {
var utxos []*Utxo
err := ts.StateDB.View(func(btx *bolt.Tx) error {
duf := btx.Bucket(BKTUtxos)
if duf == nil {
return fmt.Errorf("no duffel bag")
}
return duf.ForEach(func(k, v []byte) error {
// k and v are only valid for the life of the bolt transaction,
// so copy them into a fresh slice before deserializing.
x := make([]byte, len(k)+len(v))
copy(x, k)
copy(x[len(k):], v)
newU, err := UtxoFromBytes(x)
if err != nil {
return err
}
// and add it to ram
utxos = append(utxos, &newU)
return nil
})
})
if err != nil {
return nil, err
}
return utxos, nil
}
// GetAllStxos returns a slice of all stxos known to the db. empty slice is OK.
func (ts *TxStore) GetAllStxos() ([]*Stxo, error) {
// this is almost the same as GetAllUtxos; kept separate because
// factoring the two together isn't worth the added complexity.
var stxos []*Stxo
err := ts.StateDB.View(func(btx *bolt.Tx) error {
old := btx.Bucket(BKTStxos)
if old == nil {
return fmt.Errorf("no old txos")
}
return old.ForEach(func(k, v []byte) error {
// k and v are only valid for the life of the bolt transaction,
// so copy them into a fresh slice before deserializing.
x := make([]byte, len(k)+len(v))
copy(x, k)
copy(x[len(k):], v)
newS, err := StxoFromBytes(x)
if err != nil {
return err
}
// and add it to ram
stxos = append(stxos, &newS)
return nil
})
})
if err != nil {
return nil, err
}
return stxos, nil
}
// GetTx takes a txid and returns the transaction, if we have it.
func (ts *TxStore) GetTx(txid *wire.ShaHash) (*wire.MsgTx, error) {
rtx := wire.NewMsgTx()
err := ts.StateDB.View(func(btx *bolt.Tx) error {
txns := btx.Bucket(BKTTxns)
if txns == nil {
return fmt.Errorf("no transactions in db")
}
txbytes := txns.Get(txid.Bytes())
if txbytes == nil {
return fmt.Errorf("tx %s not in db", txid.String())
}
buf := bytes.NewBuffer(txbytes)
return rtx.Deserialize(buf)
})
if err != nil {
return nil, err
}
return rtx, nil
}
// GetAllTxs returns all transactions stored in the db.
func (ts *TxStore) GetAllTxs() ([]*wire.MsgTx, error) {
var rtxs []*wire.MsgTx
err := ts.StateDB.View(func(btx *bolt.Tx) error {
txns := btx.Bucket(BKTTxns)
if txns == nil {
return fmt.Errorf("no transactions in db")
}
return txns.ForEach(func(k, v []byte) error {
tx := wire.NewMsgTx()
buf := bytes.NewBuffer(v)
err := tx.Deserialize(buf)
if err != nil {
return err
}
rtxs = append(rtxs, tx)
return nil
})
})
if err != nil {
return nil, err
}
return rtxs, nil
}
// GetPendingInv returns an inv message containing all txs known to the
// db which are at height 0 (not known to be confirmed).
// This can be useful on startup or to rebroadcast unconfirmed txs.
func (ts *TxStore) GetPendingInv() (*wire.MsgInv, error) {
// use a map (really a set) to avoid dupes
txidMap := make(map[wire.ShaHash]struct{})
utxos, err := ts.GetAllUtxos() // get utxos from db
if err != nil {
return nil, err
}
stxos, err := ts.GetAllStxos() // get stxos from db
if err != nil {
return nil, err
}
// iterate through utxos, adding txids of anything with height 0
for _, utxo := range utxos {
if utxo.AtHeight == 0 {
txidMap[utxo.Op.Hash] = struct{}{} // adds to map
}
}
// do the same with stxos based on height at which spent
for _, stxo := range stxos {
if stxo.SpendHeight == 0 {
txidMap[stxo.SpendTxid] = struct{}{}
}
}
invMsg := wire.NewMsgInv()
for txid := range txidMap {
item := wire.NewInvVect(wire.InvTypeTx, &txid)
err = invMsg.AddInvVect(item)
if err != nil {
return nil, err
}
}
// return inv message with all txids (maybe none)
return invMsg, nil
}
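The map-as-set deduplication used by GetPendingInv can be sketched on its own; `dedupe` is a hypothetical helper over plain strings rather than wire.ShaHash:

```go
package main

import "fmt"

// dedupe collapses duplicate txids using a map as a set, the same
// trick GetPendingInv uses before building the inv message.
func dedupe(txids []string) []string {
	seen := make(map[string]struct{})
	var out []string
	for _, id := range txids {
		if _, ok := seen[id]; ok {
			continue // already queued
		}
		seen[id] = struct{}{}
		out = append(out, id)
	}
	return out
}

func main() {
	// a tx can appear via both a 0-height utxo and a 0-height stxo
	fmt.Println(len(dedupe([]string{"a", "b", "a"}))) // 2
}
```

Note that GetPendingInv itself ranges over the map, so its inv ordering is randomized; this sketch preserves first-seen order instead, which is fine for illustration.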
// PopulateAdrs just puts a bunch of adrs in ram; it doesn't touch the DB
func (ts *TxStore) PopulateAdrs(lastKey uint32) error {
for k := uint32(0); k < lastKey; k++ {
priv, err := ts.rootPrivKey.Child(k + hdkeychain.HardenedKeyStart)
if err != nil {
return err
}
newAdr, err := priv.Address(ts.Param)
if err != nil {
return err
}
var ma MyAdr
ma.PkhAdr = newAdr
ma.KeyIdx = k
ts.Adrs = append(ts.Adrs, ma)
}
return nil
}
// Ingest puts a tx into the DB atomically. This can result in a
// gain, a loss, or no result. The number of relevant txos (hits) is returned.
func (ts *TxStore) Ingest(tx *wire.MsgTx, height int32) (uint32, error) {
var hits uint32
var err error
var nUtxoBytes [][]byte
// tx has been OK'd by SPV; check tx sanity
utilTx := btcutil.NewTx(tx) // convert for validation
// checks basic stuff like there are inputs and outputs
err = blockchain.CheckTransactionSanity(utilTx)
if err != nil {
return hits, err
}
// note that you can't check signatures; this is SPV.
// 0 conf SPV means pretty much nothing. Anyone can say anything.
spentOPs := make([][]byte, len(tx.TxIn))
// before entering into db, serialize all inputs of the ingested tx
for i, txin := range tx.TxIn {
spentOPs[i], err = outPointToBytes(&txin.PreviousOutPoint)
if err != nil {
return hits, err
}
}
// go through txouts, and then go through addresses to match
// generate PKscripts for all addresses
wPKscripts := make([][]byte, len(ts.Adrs))
aPKscripts := make([][]byte, len(ts.Adrs))
for i := range ts.Adrs {
// iterate through all our addresses
// convert regular address to witness address. (split adrs later)
wa, err := btcutil.NewAddressWitnessPubKeyHash(
ts.Adrs[i].PkhAdr.ScriptAddress(), ts.Param)
if err != nil {
return hits, err
}
wPKscripts[i], err = txscript.PayToAddrScript(wa)
if err != nil {
return hits, err
}
aPKscripts[i], err = txscript.PayToAddrScript(ts.Adrs[i].PkhAdr)
if err != nil {
return hits, err
}
}
cachedSha := tx.TxSha()
// iterate through all outputs of this tx, see if we gain
for i, out := range tx.TxOut {
for j, ascr := range aPKscripts {
// detect p2wpkh
witBool := false
if bytes.Equal(out.PkScript, wPKscripts[j]) {
witBool = true
}
if bytes.Equal(out.PkScript, ascr) || witBool { // new utxo found
var newu Utxo // create new utxo and copy into it
newu.AtHeight = height
newu.KeyIdx = ts.Adrs[j].KeyIdx
newu.Value = out.Value
newu.IsWit = witBool // copy witness version from pkscript
var newop wire.OutPoint
newop.Hash = cachedSha
newop.Index = uint32(i)
newu.Op = newop
b, err := newu.ToBytes()
if err != nil {
return hits, err
}
nUtxoBytes = append(nUtxoBytes, b)
hits++
break // txos can match only 1 script
}
}
}
err = ts.StateDB.Update(func(btx *bolt.Tx) error {
// get the 3 buckets we need
duf := btx.Bucket(BKTUtxos)
old := btx.Bucket(BKTStxos)
txns := btx.Bucket(BKTTxns)
if duf == nil || old == nil || txns == nil {
return fmt.Errorf("error: db not initialized")
}
// iterate through duffel bag and look for matches
// this makes us lose money, which is regrettable, but we need to know.
for _, nOP := range spentOPs {
v := duf.Get(nOP)
if v != nil {
hits++
// do all this just to figure out value we lost
x := make([]byte, len(nOP)+len(v))
copy(x, nOP)
copy(x[len(nOP):], v)
lostTxo, err := UtxoFromBytes(x)
if err != nil {
return err
}
// after marking for deletion, save stxo to old bucket
var st Stxo // generate spent txo
st.Utxo = lostTxo // assign outpoint
st.SpendHeight = height // spent at height
st.SpendTxid = cachedSha // spent by txid
stxb, err := st.ToBytes() // serialize
if err != nil {
return err
}
err = old.Put(nOP, stxb) // write nOP:v outpoint:stxo bytes
if err != nil {
return err
}
err = duf.Delete(nOP)
if err != nil {
return err
}
}
}
// done losing utxos, next gain utxos
// next add all new utxos to db, this is quick as the work is above
for _, ub := range nUtxoBytes {
err = duf.Put(ub[:36], ub[36:])
if err != nil {
return err
}
}
// store the full serialized tx so it can be replayed after a re-org
var buf bytes.Buffer
err = tx.Serialize(&buf)
if err != nil {
return err
}
err = txns.Put(cachedSha.Bytes(), buf.Bytes())
if err != nil {
return err
}
return nil
})
return hits, err
}
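The gain-detection loop in Ingest (each output script checked against every watched pkScript, with at most one match per txo) reduces to a byte-slice matching sketch; the types and names here are illustrative stand-ins:

```go
package main

import (
	"bytes"
	"fmt"
)

// matchOutputs returns the indices of output scripts that equal any
// watched pkScript, breaking after the first match per output just as
// the loop in Ingest does ("txos can match only 1 script").
func matchOutputs(outScripts, watched [][]byte) []int {
	var hits []int
	for i, out := range outScripts {
		for _, w := range watched {
			if bytes.Equal(out, w) {
				hits = append(hits, i)
				break
			}
		}
	}
	return hits
}

func main() {
	watched := [][]byte{{0x76, 0xa9}, {0x00, 0x14}} // p2pkh / p2wpkh prefixes
	outs := [][]byte{{0x00, 0x14}, {0x51}}
	fmt.Println(matchOutputs(outs, watched)) // only output 0 is ours
}
```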