Oh my Satoshi
"Bitcoin is not a list of cryptographic features, it’s a very complex system of interacting mathematics and protocols in pursuit of what was a very unpopular goal. While the security technology is very far from trivial, the
why was by far the biggest stumbling block – nearly everybody who heard the general idea thought it was a very bad idea. Myself, Wei Dai, and Hal Finney were the only people I know of who liked the idea (or in Dai’s case his related idea) enough to pursue it to any significant extent until Nakamoto (assuming Nakamoto is not really Finney or Dai). Only Finney (RPOW) and Nakamoto were motivated enough to actually implement such a scheme."
Re: Bitcoin P2P e-cash paper
Mon, 17 Nov 2008
I believe I’ve worked through all those little details over the last year and a half while coding it, and there were a lot of them.
The functional details are not covered in the paper, but the sourcecode is coming soon.
I sent you the main files.
(available by request at the moment, full release soon)
Re: Bitcoin P2P e-cash paper
Fri, 14 Nov 2008 14:29:22 -0800
Hal Finney wrote:
> I think it is necessary that nodes keep a separate
> pending-transaction list associated with each candidate chain.
> ... One might also ask ... how many candidate chains must
> a given node keep track of at one time, on average?
Fortunately, it's only necessary to keep a pending-transaction pool for the current best branch. When a new block arrives for the best branch, ConnectBlock removes the block's transactions from the pending-tx pool. If a different branch becomes longer, it calls DisconnectBlock on the main branch down to the fork, returning the block transactions to the pending-tx pool, and calls ConnectBlock on the new branch, sopping back up any transactions that were in both branches. It's expected that reorgs like this would be rare and shallow.
With this optimisation, candidate branches are not really any burden. They just sit on the disk and don't require attention unless they ever become the main chain.
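The ConnectBlock/DisconnectBlock bookkeeping Satoshi describes can be sketched roughly as follows. This is a minimal illustration, not the actual C++ implementation: the block and pool structures are assumptions, and only the pending-pool logic mirrors the email.

```python
# Sketch of reorg handling: a block entering the best chain removes its
# transactions from the pending pool; a block leaving it returns them.
# `old_branch` and `new_branch` are the block lists above the fork point,
# ordered fork-to-tip (an assumed representation for illustration).

def connect_block(block, pending_pool):
    # Transactions confirmed by this block leave the pending pool.
    for tx in block["txs"]:
        pending_pool.discard(tx)

def disconnect_block(block, pending_pool):
    # Transactions in a disconnected block go back to the pending pool.
    for tx in block["txs"]:
        pending_pool.add(tx)

def reorganize(old_branch, new_branch, pending_pool):
    # Walk the old best branch back down to the fork...
    for block in reversed(old_branch):
        disconnect_block(block, pending_pool)
    # ...then connect the new branch, soaking back up any transactions
    # that appeared in both branches.
    for block in new_branch:
        connect_block(block, pending_pool)
```

A transaction present only in the abandoned branch ends up back in the pool awaiting a later block, while one present in both branches is removed again when the new branch connects.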
> Or as James raised earlier, if the network broadcast
> is reliable but depends on a potentially slow flooding
> algorithm, how does that impact performance?
Broadcasts will probably be almost completely reliable. TCP transmissions are rarely ever dropped these days, and the broadcast protocol has a retry mechanism to get the data from other nodes after a while. If broadcasts turn out to be slower in practice than expected, the target time between blocks may have to be increased to avoid wasting resources. We want blocks to usually propagate in much less time than it takes to generate them, otherwise nodes would spend too much time working on obsolete blocks.
I'm planning to run an automated test with computers randomly sending payments to each other and randomly dropping packets.
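The trade-off in that answer can be put in rough numbers. The following back-of-envelope model is an assumption for illustration, not something from the email: if blocks propagate in `prop` seconds and are found every `interval` seconds on average, roughly `prop / interval` of the network's hashing targets an obsolete block.

```python
# Back-of-envelope model (assumed for illustration): nodes work on an
# obsolete tip for roughly prop/interval of the time.

def stale_work_fraction(prop_seconds: float, interval_seconds: float) -> float:
    return prop_seconds / interval_seconds

# With 10-minute blocks and ~30 s propagation, only ~5% of work is
# wasted; shrink the interval to 60 s and the same delay wastes ~50%,
# which is why slow broadcasts would force a longer target block time.
print(stale_work_fraction(30, 600))  # 0.05
print(stale_work_fraction(30, 60))   # 0.5
```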
> 3. The bitcoin system turns out to be socially useful and valuable, so
> that node operators feel that they are making a beneficial contribution
> to the world by their efforts (similar to the various "@Home" compute
> projects where people volunteer their compute resources for good causes).
>
> In this case it seems to me that simple altruism can suffice to keep the
> network running properly.
It's very attractive to the libertarian viewpoint if we can explain it properly. I'm better with code than with words though.
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Boltzmann Brains, consciousness and the arrow of time
published by Hal Finney 31/12/2008
Sometimes we consider here the nature of consciousness, whether observer moments need to be linked to one another, the role of causality in consciousness, etc. I thought of an interesting puzzle about Boltzmann Brains which offers a new twist to these questions. As most readers are aware, Boltzmann Brains relate to an idea of Boltzmann on how to explain the arrow of time. The laws of physics seem to be time symmetric, yet the universe is grossly asymmetric in time. Boltzmann proposed that if you had a universe in a maximum entropy state, say a uniform gas, then given enough time, the gas would undergo fluctuations to regions of lower entropy. Sometimes, purely at random, clumps of molecules would happen to form. Even more rarely, these clumps might be large and ordered. Given infinite time, one could even have an entire visible-universe worth of matter clump together in an ordered fashion, from which state it would then decay into higher entropy conditions. Life could evolve during this decay, observe the universe around it, and find itself in conditions much like our own.
The Boltzmann Brain is a counter-argument, suggesting that the universe and everything else is redundant; all you need is a brain to form via a spontaneous random fluctuation, and to hold together long enough to engage in a few moments of conscious thought. Such a Boltzmann Brain is far more likely to form than an entire universe, hence the vast majority of conscious thoughts in such a model will be in Boltzmann Brains and not in brains in large universes. If we were tempted to explain the arrow of time in this way, we must accept that the universe is an illusion and that we are actually Boltzmann Brains, a conclusion which most people don't like.
Now this scenario can be criticized in many ways, but I want to emphasize a couple of points which aren't always appreciated. The first is that the Boltzmann scenario, whether a whole universe or just a Brain is forming, is basically time symmetric. That means that if you saw a movie of a Boltzmann universe forming and then decaying back to random entropy, you would not be able to tell which way the movie was running, if it were to be reversed. (This is an unavoidable consequence of the time symmetry of the underlying physics.) It follows that while the universe is moving into the low-entropy state, it must be evolving backwards. That is, an observer from outside would see time appearing to run backwards. Eggs would un-scramble themselves, objects would fall upwards from the ground, ripples would converge on spots in lakes from which rocks would then leap from the water, and so on.
At some point this time reversal effect would stop, and the universe would then proceed to evolve back into a high entropy state, now with time going "forwards". Now, the forward phase will not in general be an exact mirror image of the reverse, because of slight random fluctuations and the like, but it will be an alternate path that essentially starts with the same initial conditions. So we will see one path backwards into the minimum-entropy state, and another path forwards from that state. Both paths are fully plausible histories and neither is distinguishable from the other as far as which was reversed and which was forward, if you ran a recording of the whole process backwards.
One might ask, what causes time to run backwards during the first half of the Boltzmann scenario? The answer is, nothing but very, very odd luck. Time is no more likely to continue to run backwards, or to run backwards the same everywhere in the local fluctuation-area, than it is to start running backwards right now in the universe around you. Nothing stops eggs from unscrambling themselves except the unlikelihood, and the same principle is at work during the Boltzmann time-reversal phase. It is merely that we select, out of the infinity of time, those rare occasions where time does in fact "happen to happen" like this, that allows us to discuss it.
I want to emphasize that this picture of how Boltzmann fluctuations would work is a consequence of the laws of thermodynamics and time symmetry. Sometimes people imagine that the fluctuation into the Boltzmann low-entropy state is fundamentally different from the fluctuation out of it. They accept that the fluctuation out will be similar to our own existence, with complex events happening. But they imagine that the fluctuation into low entropy might be much simpler, molecules simply aggregating together into some convenient state from which the complex fluctuation out and back to chaos can begin. While this is not impossible and hence will happen occasionally among the infinity of fluctuations in the Boltzmann universe, it will be rare. It will be no more common for a "simple" fluctuation-in process to occur than for a simple fluctuation-out process. In our universe, knowing it will evolve to a chaotic heat death, we might imagine that molecules would just fly apart into chaos, but we know that is highly unlikely. Instead, by far the most likely path is a complex one, full of turbulence and reactions and similar activity. By time symmetry, exactly the same arguments apply during the fluctuation-in phase. The vast majority of Boltzmann fluctuations that achieve a particular degree of low entropy will do so via complex, turbulent paths which, if viewed in reverse, will appear to be perfectly plausible sequences of events for a universe which is decaying from order to disorder, like our own.
Following on to this, let us consider the nature of consciousness during these Boltzmann excursions. Again let us focus on larger scale ones than just Boltzmann Brains, although the same principles apply there. During the time reversal phase, if conscious entities are present, their brains are running backwards. They are talking backwards, walking backwards, doing everything in reverse. They remember things that are coming in the future, and forget everything as soon as it has happened.
The question is, is there any difference in consciousness during the reverse and forward phases? Consider that during the forward phase, we started with a low entropy state, and now the laws of physics are playing out just as they do in our own universe. Everything is happening for a reason, depending on what has happened before. Events cause memories to appear in brains by virtue of the same causal effects which give rise to our own memories. Hence I imagine that most would agree that brains during the forward phase are conscious.
However, during the reverse phase, things are quite different. Brains have memories of things that haven't happened yet. Again, one might ask how this can be. The reason is because we stop paying attention to fluctuations where this doesn't happen. We only focus on Boltzmann fluctuations which take the universe into a plausible and consistent low-entropy state, one from which things can evolve in a way that is similar to what we see. When a brain remembers something, if that doesn't happen, the fluctuation is inconsistent. We skip over that one and look for one that is consistent.
In the consistent fluctuations, brain memories turn out to be correct, purely by luck. Similarly, every internal function of the brain which we might attribute to macroscopic-type causality, like neuron A firing because neuron B fired, will happen instead by luck, with neuron A firing as though neuron B is going to fire, and then neuron B just happening to fire in precisely the anticipated way.
The point is that during the time-reversal phase, causality as we normally think of it is absent. Subjectively-past events do not cause subjectively-future ones; rather, subjectively-future events take place before subjectively-past events, and it is merely through luck that things happen in a consistent pattern. Again, if we hadn't gotten lucky so that things work out, we wouldn't have called this a Boltzmann fluctuation of the kind we are interested in (Boltzmann Brain or Boltzmann Universe). By paying selective attention to only those fluctuations where things work, we will only observe cases where luck, rather than causality, makes things happen.
But things do happen, in the same pattern they would if causality were active. So the question is, are brains conscious during this time? Do the thoughts that occur during the time reversal (which recall is not exactly the same as what happens during the forward-time phase) have the same level of subjective reality as thoughts which occur when time runs forward?
We can argue it either way. In favor of consciousness, the main argument is that time is fundamentally symmetric (we assume). Hence there is no fundamental or inherent difference between the forward and reverse phases. The only differences are relative, with the arrow of time pointing in opposite directions in the two phases. But within each phase, we see events which can both be equally well described as leading to consciousness, and therefore conscious experiences will occur in both phases.
On the other side, many people see a role for causality in the creation or manifestation of consciousness. And arguably, causality is different in the two phases. In the forward phase (the part where we are returning from a low-entropy excursion to the high-entropy static state), events follow one another for the usual reasons, and it is correct to attribute a role to causality just as we do in our own experience. But in the reverse phase, it is purely by luck that things happen in a consistent way, and it is only because we have an infinity of time to work with that we are able to find sequences of events that look consistent even though they arose by simple happenstance. There is no true causality in this phase, just a random sequence of events where we have selected a sequence that mimics causality. And to the extent that consciousness depends on causality, we should not say that brains during this reverse phase are conscious.
I lean towards the first interpretation, for the following reason. If consciousness really was able to somehow distinguish the forward from reverse phases in a Boltzmann fluctuation, it would be quite remarkable. Given that the fundamental laws of physics are time symmetric, nothing should be able to do that, to deduce a true "implicit" arrow of time that goes beyond the superficial arrow of time caused by entropy differences. The whole point of time symmetry, the very definition, is that there should be no such implicit arrow of time. This suggestion would seem to give consciousness a power that it should not have, allow it to do something that is impossible.
And if the first interpretation is correct, it seems to call into question the very nature of causality, and its possible role in consciousness. If we are forced to attribute consciousness to sequences of events which occur purely by luck, then causality can't play a significant role. This is the rather surprising conclusion which I reached from these musings on Boltzmann Brains.
Oct. 31, 2008
Someone using the name Satoshi Nakamoto makes an announcement on The Cryptography Mailing List at metzdowd.com: "I've been working on a new electronic cash system that's fully peer-to-peer, with no trusted third party. The paper is available at http://www.bitcoin.org/bitcoin.pdf"
This link leads to the now-famous white paper published on bitcoin.org, entitled "Bitcoin: A Peer-to-Peer Electronic Cash System." The paper describes how a peer-to-peer network can be used to build what it calls "a system for electronic transactions without relying on trust", and it would become the Magna Carta for how Bitcoin operates today.
Bitcoin P2P e-cash paper
Satoshi Nakamoto satoshi at vistomail.com
Fri Oct 31 14:10:00 EDT 2008
I've been working on a new electronic cash system that's fully
peer-to-peer, with no trusted third party.
The paper is available at:
The main properties:
- Double-spending is prevented with a peer-to-peer network.
- No mint or other trusted parties.
- Participants can be anonymous.
- New coins are made from Hashcash style proof-of-work.
- The proof-of-work for new coin generation also powers the network to prevent double-spending.
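The Hashcash-style proof-of-work in the list above can be sketched minimally. This is an illustrative toy under stated assumptions: the function names are invented here, and Bitcoin's actual scheme double-hashes a structured block header against a compact-encoded difficulty target rather than bare data.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce so that SHA-256(data || nonce) falls below a
    target, i.e. has roughly `difficulty_bits` leading zero bits
    (hashcash-style). Toy sketch, not Bitcoin's real header format."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a proof takes one hash, while producing it takes
    ~2**difficulty_bits attempts on average: the asymmetry that lets
    the network cheaply validate expensive work."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Raising `difficulty_bits` by one doubles the expected search time without changing the verification cost, which is how a proof-of-work scheme ties "new coin generation" to measurable expended CPU effort.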
Bitcoin: A Peer-to-Peer Electronic Cash System
Abstract. A purely peer-to-peer version of electronic cash would
allow online payments to be sent directly from one party to another
without the burdens of going through a financial institution.
Digital signatures provide part of the solution, but the main
benefits are lost if a trusted party is still required to prevent
double-spending. We propose a solution to the double-spending
problem using a peer-to-peer network. The network timestamps
transactions by hashing them into an ongoing chain of hash-based
proof-of-work, forming a record that cannot be changed without
redoing the proof-of-work. The longest chain not only serves as
proof of the sequence of events witnessed, but proof that it came
from the largest pool of CPU power. As long as honest nodes control
the most CPU power on the network, they can generate the longest
chain and outpace any attackers. The network itself requires
minimal structure. Messages are broadcasted on a best effort basis,
and nodes can leave and rejoin the network at will, accepting the
longest proof-of-work chain as proof of what happened while they were gone.
Full paper at:
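The "ongoing chain of hash-based proof-of-work" in the abstract depends on each block committing to the hash of the previous one, so a change anywhere ripples forward through every later hash. A toy sketch of that linking (illustrative only; the function name and byte layout are assumptions, and real blocks carry far more structure):

```python
import hashlib

def chain_hashes(blocks):
    """Link each block to its predecessor by hashing it together with
    the previous hash. Altering any earlier block changes its hash and,
    transitively, every hash after it, which is why rewriting history
    requires redoing all the subsequent proof-of-work."""
    prev = b"\x00" * 32  # assumed genesis placeholder
    out = []
    for block in blocks:
        prev = hashlib.sha256(prev + block).digest()
        out.append(prev)
    return out
```

Tampering with the first block of a three-block chain changes not just its own hash but the second and third hashes too, so honest nodes can detect the edit from the tip alone.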
On 18 August 2008, the domain name bitcoin.org is registered. Today, at least, this domain is "WhoisGuard Protected," meaning the identity of the person who registered it is not public information.
"An electronic signature, in the form of a digital signature, may satisfy the functional requirements of the law of contracts. It must be noted that the signature itself does not afford sufficient proof of the signatory’s identity. Further evidence is required which links the public key (or other method) used by the party. The adducing of additional extrinsic evidence such as is commonly employed when seeking to determine the identity associated with a signature on a manuscript may be used to provide proof"
Craig Wright, 2008
The IT Regulatory & Standards Compliance Handbook: How to Survive an Information Systems Audit & Assessments
Chapter 21, page 620
Craig has personally conducted in excess of 1,200 IT security related engagements for more than 120 Australian and international organizations in the private and government sectors and now works for BDO Kendall's in Australia. These engagements have comprised security systems design (including the design of critical network infrastructure), IT audit, systems implementation, staff training and mentoring, cross-functional team development, policy and procedural development, business process analysis and digital forensics. In addition to his consulting engagements, Craig has authored numerous IT security related articles and is a co-author of "The Official CHFI Study Guide (Exam 312-49)".
Craig has been involved with designing the architecture for the world's first online casino (Lasseter's Online) in the Northern Territory, designed and managed the implementation of many of the systems that protect the Australian Stock Exchange and also developed and implemented the security policies and procedural practices within Mahindra and Mahindra, India's largest vehicle manufacturer. Craig holds (amongst others) the following industry certifications, CISSP (ISSAP & ISSMP), CISA, CISM, CCE, MCSE, GIAC GNSA, G7799, GWAS, GCFA, GLEG, GSEC, GREM, GPCI and GSPA and has completed numerous degrees in a variety of fields. He is currently completing both a Masters degree in Statistics (at Newcastle) and a Masters Degree in Law (LLM) specialising in International Commercial Law (E-commerce Law).
Craig is planning to start his second doctorate, a PhD in Economics and Law in the digital age in early 2008.
My Latest Plan
Posted by Craig Wright at Monday, August 25, 2008
Unlike most people, I have realised the value of time since I was a youth. My latest addition to my goals is to listen to the 90,000 most influential pieces of music throughout history (as judged by myself).
- Tonight I have Hildegard Von Bingen playing. In this case Canticles Of Ecstasy. This consists of the following works:
- Vis Aeternitatis
- Nunc Aperuit Nobis
- Quia Ergo Femina Mortem Instruxit
- Cum Processit Factura Digiti Dei
- Alma Redemptoris Mater
- Ave Maria, O Auctrix Vite
- Spiritus Sanctus Vivificans Vite
- Ignis Spiritus Paracliti
- Caritas Habundat In Omnia
- Virga Mediatrix
- Viridissima Virga, Ave
- Instrumentalstück (Instrumental Piece)
- Pastor Aminarum
- Tu Suavissima Virga
- Choruscans Stellarum
- Nobilissima Viriditas
This is a 20 year plan.
Hildegard of Bingen was born in 1098 and at 14 entered a Benedictine nunnery outside of Worms (the Rhineland). She became the Abbess in 1136 and subsequently moved her order to Rupertsberg, outside of Bingen.
She composed 77 vocal works (including 43 Antiphons) collectively known as the Symphonia armonie celestium revelationum.
This is a truly mystic collection of vocal works. A great reflective collection.
On top of this I also listened to Symphony No. 8 by Dmitri Shostakovich. This was the 1988 performance conducted by Yevgeny Mravinsky. It is reflective, bitterly powerful and emotionally transcendent: a dark and brooding work reflecting a true depth of emotion and experiences I cannot begin to comprehend.
Yet in it lies hope.
George Frideric Handel - Messiah (1742)
CSW 09:43 Feb 6, 2020
BITCOIN WAS NOT EVER LAUNCHED INTO THE CYPHERPUNK COMMUNITY!
There is a Cypherpunk mailing list - Bitcoin was NEVER announced there. I was on that list. - And I did not use it for Bitcoin
The list used was a common general list used originally by MANY in the field - even those in the NSA and DHS.
The Toad Cypherpunk list is not the Cryptography list http://ftp.arnes.si/packages/crypto-tools/cypherpunks/mailing_list/
These are separate.
Bitcoin is a new design for a fully peer-to-peer electronic cash system. A C++ implementation is under development for release as an open source project.
The C++ implementation is released as an Open Source Project - NOT the protocol!
It DOES matter where we are now...
As - YOU ARE USING MY IP - and I maintain those rights.
And, I believe and respect LAW
What is not relevant - Kraken and the drive to be outside the law and profit from Money laundering.
You want 1984 level Cherry picking - That thread https://twitter.com/danheld/status/...
Bitcoin was NOT created for the financial crisis. It was started well before that - AND that shit occurred right when I launched Bitcoin.
Again - BITCOIN had NOTHING to do with CypherPunks
Hyena (CryptoGraffiti.info) 13:57
As a software developer I always found it strange that Satoshi managed to deliver Bitcoin right on time for the financial crisis :smile: but I didn't pay much attention to this anomaly back then (so I believed the popular narrative).
I always figured it must have been a fairly long process to think out all the relevant parts of Bitcoin. Years. Not months or weeks. So I never swallowed that BS about the financial crisis being the trigger for Bitcoin. Historical anomaly. Circumstance. An idea whose time had come. (edited)
It was years
And the international banking crisis really began with the collapse of the investment bank Lehman Brothers. This was on September 15, 2008.
I had already sent copies of the draft paper to multiple people by August 2008 and registered the domain etc well before Sept 2008.
I am NOT omniscient- I did not foresee Lehman Bro's
The system was close to ready in Nov 2008 and was known BEFORE the Sept 2008 collapse.
Hence - BitCoin had NOTHING at all to do with the GFC. It was not about banking. YES, I did not like what Chancellor Darling was saying...He wanted to reverse the reforms Thatcher introduced and nationalise the banks.
I LIKE banks - I just do not like the shit they have been doing - as a result of leftist BS about all people DESERVING a home loan.
Metanet.ico Slack on Feb 6, 2020
WEDNESDAY, 6 AUGUST 2008
What is NON-REPUDIATION
Non-repudiation is the process of ensuring that the parties to a transaction cannot deny (that is, repudiate) that the transaction occurred.
Repudiation is an assertion refuting a claim, or the refusal to acknowledge an action or deed. Anticipatory repudiation (or anticipatory breach) describes a declaration by the promising party to a contract that they intend to fail to meet their contractual obligations.
Posted by Craig Wright at Wednesday, August 06, 2008
An August 2008 post on Wright's blog, months before the November 2008 introduction of the Bitcoin whitepaper on a cryptography mailing list. It mentions his intention to release a "cryptocurrency paper," and references "triple entry accounting," the title of a 2005 paper by financial cryptographer Ian Grigg that outlines several Bitcoin-like ideas.
A post on the same blog from November 2008. It includes a request that readers who want to get in touch encrypt their messages to him using a PGP public key apparently linked to Satoshi Nakamoto. A PGP key is a unique string of characters that allows a user of that encryption software to receive encrypted messages. This one, when checked against the database of the MIT server where it was stored, is associated with the email address satoshin@vistomail.com, an address very similar to the satoshi@vistomail.com address Nakamoto used to send the whitepaper introducing Bitcoin to a cryptography mailing list.
by Phillip James Wilson aka Scronty
Table of Contents
(self.Bitcoin) submitted April? 2017 * by Scronty
Afternoon, All. Today marks the eighth anniversary of the publication of the Bitcoin white paper. As a special tribute, I will provide you with a short story on the origins of the Bitcoin tech. I've been out of the game for many years, however now I find myself drawn back - in part due to the energy that's being added by the incumbents, in part due to information that's become public over the past year. I haven't followed the Bitcoin and alt coin tech for the past five or six years. I left about six months before (2). My last communication with (2) was five years ago which ended in my obliteration of all development emails and long-term exile. Every mention of Bitcoin made me turn the page, change the channel, click away - due to a painful knot of fear in my belly at the very mention of the tech. As my old memories come back I'm jotting them down so that a roughly decent book on the original Bitcoin development may be created. The following are a few of these notes. This is still in early draft form so expect the layout and flow to be cleaned up over time. Also be aware that the initial release of the Bitcoin white paper and code was what we had cut down to from earlier ideas.
From Wikipedia, the free encyclopedia
Tominaga Nakamoto (富永 仲基 Tominaga Nakamoto, 1715–1746) was a Japanese philosopher. He was educated at the Kaitokudō academy founded by members of the mercantile class of Osaka, but was ostracised shortly after the age of 15. Tominaga belonged to a Japanese rationalist school of thought and advocated a Japanese variation of atheism, mukishinron (no gods or demons). He was also a merchant in Osaka. Only a few of his works survive; his Setsuhei ("Discussions on Error") has been lost and may have been the reason for his separation from the Kaitokudō, and around nine other works' titles are known. The surviving works are his Okina no Fumi ("The Writings of an Old Man"), Shutsujō Kōgo ("Words after Enlightenment"; on textual criticism of Buddhist sutras), and three other works on ancient musical scales, ancient measurements, and poetry.
He took a deep critical stance against normative systems of thought, partially based on the Kaitokudō's emphasis on objectivity, but was clearly heterodox in eschewing the dominant philosophies of the institution. He was critical of Buddhism, Confucianism and Shintoism. Whereas each of these traditions drew on history as a source of authority, Tominaga saw appeals to history as a pseudo-justification for innovations that try to outdo other sects vying for power. For example, he cited the various Confucian Masters who saw human nature as partially good, neither good nor bad, all good, and inherently bad, and analysed later interpreters who tried to incorporate and reconcile all the Masters. He criticised Shintoism as obscurantist, especially in its habit of secret instruction. As he always said, "hiding is the beginning of lying and stealing". In his study of Buddhist scriptures, he asserted that the Hinayana school of scriptures preceded the Mahayana scriptures, but also that the vast majority of Hinayana scriptures were composed much later than the life of Gautama Buddha, a position later supported by modern scriptural studies.
"Even though (2) and (3) weren't as high in the crypto world and as knowledgable as the folks I wanted to interact with, they had factors which placed them far above any of these others.
They were driven.
March 12th 2008
I need your help editing a paper I am going to release later this year. I have been working on a new form of electronic money. Bit cash, Bitcoin...
You are always there for me Dave. I want you to be part of it all.
I cannot release it as me. GMX, Vistomail and Tor. I need your help and I need a version of me to make this work that is better than me.
W. Dai, "b-money"
From: "Satoshi Nakamoto" <firstname.lastname@example.org>
Sent: Friday, August 22, 2008 4:38 PM
To: "Wei Dai" <email@example.com>
Cc: "Satoshi Nakamoto" <firstname.lastname@example.org>
Subject: Citation of your b-money page
I was very interested to read your b-money page. I'm getting ready to release a paper that expands on your ideas into a complete working system. Adam Back (hashcash.org) noticed the similarities and pointed me to your site.
I need to find out the year of publication of your b-money page for the citation in my paper. It'll look like:
 W. Dai, "b-money," http://www.weidai.com/bmoney.txt, (2006?).
You can download a pre-release draft at
http://www.upload.ae/file/6157/ecash-pdf.html Feel free to forward it to anyone else you think would be interested.
Title: Electronic Cash Without a Trusted Third Party
Abstract: A purely peer-to-peer version of electronic cash would allow
online payments to be sent directly from one party to another without the
burdens of going through a financial institution. Digital signatures
offer part of the solution, but the main benefits are lost if a trusted
party is still required to prevent double-spending. We propose a solution
to the double-spending problem using a peer-to-peer network. The network
timestamps transactions by hashing them into an ongoing chain of
hash-based proof-of-work, forming a record that cannot be changed without
redoing the proof-of-work. The longest chain not only serves as proof of
the sequence of events witnessed, but proof that it came from the largest
pool of CPU power. As long as honest nodes control the most CPU power on
the network, they can generate the longest chain and outpace any
attackers. The network itself requires minimal structure. Messages are
broadcasted on a best effort basis, and nodes can leave and rejoin the
network at will, accepting the longest proof-of-work chain as proof of
what happened while they were gone.
RE: Defamation and the difficulties of law on the Internet.
From: "dave kleiman" <dave () davekleiman com>
Date: Wed, 12 Mar 2008 03:25:13 -0400
Hats off to you Craig,
Sometimes you amaze me.... I literally just took on a case today
dealing exactly with this, you are making my life easy as I am gathering
(with your permission) this information you have provided for my client's
When this becomes public record, I will post-up the results.
I will take any more information on this subject with great enthusiasm and
appreciation, as always!!!
By the way, for those of you who have never asked for Craig's help, you do
not know what you are missing.
I have asked for his research assistance more than once. One particular time
it was dealing with the abilities of cookies on the server side; when I awoke
the next morning, I had hundreds of pages and links of information on that
subject, and variations and ideas I had not even considered or had forgotten
to consider (e.g. web bugs). Why did he help? For no other reason than that
he just likes to research information, and possibly considers me a friend
from afar. He probably had as much fun reading up on the subject as I did.
And along with all the technical details he included this:
Cookie Recipe Ingredients:
125 grams butter
50 grams caster sugar
60 grams brown sugar
1 large egg
1 teaspoon vanilla essence/extract
125 grams of plain flour
1/2 teaspoon salt
1/2 teaspoon bicarbonate of soda
250 grams of chocolate (Dark is best for this)
1/2 cup coarsely chopped almonds
Method: Turn your oven on to preheat at 180 degrees Celsius (about 350
degrees Fahrenheit, gas mark 4). Remember to take your grill pan out first.
Now get some baking trays ready (if they're not non-stick then you better
line them or grease them)... Hey Presto! The world's BEST
cookies, in the comfort of your own home.
In the midst of this data exchange I casually mentioned that one day, when I
was in the position to not have to work so much, I would return to school
and my dream degrees in Cosmology and Astrophysics. Of course, the next day
I had links to every online study available for those degrees, with a "why
Further, it amazes me how Craig has a Blog helping to understand the rights
of US based Digital Forensic Examiners:
And he is based in AU. He simply cares enough about the cause and the
industry to help; it has no direct effect on him if US DFEs are required to
have PI licenses!!
People of the past considered "Loons":
(Feynman, Hawking, Sagan, da Vinci, Einstein, Columbus, everyone associated
with Monty Python and the Holy Grail:
Black Knight: Right, I'll do you for that!
King Arthur: You'll what?
Black Knight: Come here!
King Arthur: What are you gonna do, bleed on me?
Black Knight: I'm invincible!
King Arthur: ...You're a loony.
.......you get the picture)
Yep Craig is a Junkie; a Knowledge Junkie!!!!
For those of you who have nothing good to say; why say anything?
Dave Kleiman - http://www.davekleiman.com
4371 Northlake Blvd #314
Palm Beach Gardens, FL 33410
From: listbounce () securityfocus com
[mailto:listbounce () securityfocus com] On Behalf Of Craig Wright
Sent: Tuesday, March 11, 2008 17:15
To: 'Simphiwe Mngadi'; security-basics () securityfocus com
Subject: RE: Defamation and the difficulties of law on the Internet.
SANS had "Police Decline to Intervene in Libellous Bebo Page Case
(March 7 & 8, 2008)" in Newsbytes Vol 10.20.
This refers to:
<a href="http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/ar" rel="nofollow">http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/ar</a>
<a href="http://www.dailyrecord.co.uk/news/newsfeed/2008/03/07/web-of-lies-" rel="nofollow">http://www.dailyrecord.co.uk/news/newsfeed/2008/03/07/web-of-lies-</a>
<a href="http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/03/07/nbeb" rel="nofollow">http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/03/07/nbeb</a>
Actually, content control IS an aspect of security and compliance. I
may have been a little angry when writing, but I am far from
I have taken and updated a little something for the list based on
responses I have received over the years. Liability against an
Intermediary, whether in the traditional view of ISP and ICP as well
as that of employers and other parties remains a risk.
Extrusion filters seem to be something that is not considered: not
by most organisations and, unfortunately, not by many on the list.
There is more to this than filtering for attacks. This is surprising, as many
standards and regulations require that specific information is
filtered. PCI-DSS, HIPAA and a raft of legislation specify that
organisations set up the capability to monitor both incoming and
outgoing traffic. This is not port based, but rather a capability to
monitor and filter (or at the least act on) content.
I oversee the information gathering for many more companies than I
actually audit myself (being an audit manager for an external audit
firm). In 1,412 firms I have been to or reviewed information for, I
have collected a number of statistics over the years.
231 (or 16.4%) have some content management
184 (13.0%) have NO egress filters - nothing at all
734 (52.0%) have a disclaimer on email that is barely
210 (14.8%) have a legally valid privacy policy/disclaimer on their web sites
15 (1.06%) check Google or other places for information
on their references
In Scheff v Bock (Susan Scheff and Parents Universal Experts, Inc.
v. Carey Bock - Florida USA, 2006, Case No. CACE03022837) a Florida
jury awarded Sue Scheff US$11.3 million in costs and damages over
recurrent blog postings. A former acquaintance had accused her of being
a crook, a con artist and a fraudster (as a side note, the same laws
apply in AU).
See <a href="http://www.citmedialaw.org/threats/scheff-v-bock" rel="nofollow">http://www.citmedialaw.org/threats/scheff-v-bock</a>
In principle, defamation consists of a false and unprivileged
statement of fact that is harmful to the reputation of another
person and which is published "with fault"; that means that it is
published as a result of negligence or malice. Different laws define
defamation in specific ways that differ slightly, but the gist of
the matter is the same. Libel is a written defamation; slander is a spoken one.
Libellous (when false):
Charging someone with being a communist (in 1959)
Calling an attorney a "crook"
Describing a woman as a call girl
Accusing a minister of unethical conduct
Accusing a father of violating the confidence of son
Not libellous:
Calling a political foe a "thief" and "liar" in a chance
encounter (because hyperbole in context)
Calling a TV show participant a "local loser," "chicken
butt" and "big skank"
Calling someone a "bitch" or a "son of a bitch"
Changing product code name from "Carl Sagan" to "Butt Head
See <a href="http://w2.eff.org/bloggers/lg/faq-defamation.php" rel="nofollow">http://w2.eff.org/bloggers/lg/faq-defamation.php</a> for details.
So let us do the math. Let us take a case of 0.1% (or 1 in a
thousand) of employees (and the number is in reality higher than this)
posting a defamatory post from their place of work. 83.6% of
companies (based on the figures above) will not detect or stop anything.
Fewer still check at all.
Let us take an average US litigation cost for defamation of $182,500
(taking cases won from '96 to the present in AU, UK and US). Also see
"Rethinking Defamation" by David A. Anderson of the University of
Texas at Austin School of Law.
(<a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=976116#PaperDown" rel="nofollow">http://papers.ssrn.com/sol3/papers.cfm?abstract_id=976116#PaperDown</a>
So if we take a decent sized company of 5,000 employees, we have an
expectation of 4 incidents per annum that in coming years would be
expected to make it to court. Employers are vicariously liable for
many of these actions. In the past, employers and ICPs have not
been targeted, but this is changing. The person doing the act is
generally not the one with the funds to pay out the losses. The employer
is. Thus the ability to co-join employers will increase these types
of claims. Facebook, blogs and other accesses will only make this worse in
coming years.
So what does this mean? Well, in the case of our hypothetical
employer, there is an expected annualised loss of US$788,400 in
coming years. The maximum expected payout would be US$50,000,000.
It is unlikely that the individual making the claim will be able to
pay the cost of losing, so the employer will more and more be added
to the suit.
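The arithmetic above can be reproduced directly from the post's own inputs. Note a small discrepancy: the quoted $788,400 implies roughly 4.32 incidents (788,400 / 182,500), whereas 5 potential incidents at an 83.6% non-detection rate gives about 4.18, so the exact rounding the author used is unclear.

```python
employees = 5_000
incident_rate = 0.001          # 0.1%: one in a thousand employees per annum
undetected = 1 - 0.164         # 83.6% of surveyed firms have no content management
avg_litigation_cost = 182_500  # average US defamation litigation cost quoted above

incidents = employees * incident_rate        # potential incidents per annum
expected_incidents = incidents * undetected  # incidents nobody detects or stops
annualised_loss = expected_incidents * avg_litigation_cost

print(round(expected_incidents, 2))  # 4.18
print(round(annualised_loss))        # ~762,850; the post's $788,400 implies ~4.32 incidents
```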
Now, I am in no way affiliated with ANY content management software,
but I see this as a necessary evil. This could act as an effective
corporate governance strategy, lowering the potential liability of
the organisation. In my experience, the costs of the software and its
management are going to add up to less than the potential loss. With the
recent win in Scheff v Bock, this is only going to increase.
The conduct of both agents and employees can result in situations
where liability is imposed vicariously on an organisation through
both the common law[i] and by statute.[ii] The benchmark used to
test for vicarious liability for an employee requires that the deed
of the employee must have been committed during the course and
capacity of their employment under the doctrine respondeat superior.
Principals' liability will transpire when a `principal-agent'
relationship exists. Dal Pont[iii] recognises three possible
categories of agents:
(a) those that can create legal relations on behalf of a principal
with a third party;
(b) those that can affect legal relations on behalf of a principal
with a third party; and
(c) a person who has authority to act on behalf of a principal.
Despite the fact that a party is in an agency relationship, the
principal may be liable directly as principal as contrasted with
vicariously; "this distinction has been treated as of little
practical significance by the case law, being evident from judges'
reference to principals as vicariously liable for their agents'
acts"[iv]. The consequence is that an agency arrangement will
leave the principal directly liable rather than liable vicariously.
The requirement for employees of "within the scope of employment" is
a broad term without a definitive definition in the law, but whose
principles have been set through case law and include:
where an employer authorises an act but it is performed using an
inappropriate or unauthorised approach, the employer shall remain liable[v];
the fact that an employee is not permitted to execute an action is
not applicable or a defence[vi]; and the mere reality that a deed is
illegal does not exclude it from the scope of employment[vii].
Unauthorised access violations or computer fraud by an employee or
agent might be deemed remote from the employee's scope of employment
or the agent's duty, but this alone does not absolve the
employer or principal from the effects of vicarious liability[viii].
Similarly, it remains unnecessary to respond to a claim against an
employer through asserting that the wrong committed by the employee
was for their own benefit. This matter was authoritatively settled
in the Lloyd v Grace, Smith and Co.[ix], in which a solicitor was
held liable for the fraud of his clerk, albeit the fraud was
exclusively for the clerk's individual advantage. It was declared
that "the loss occasioned by the fault of a third person in such
circumstances ought to fall upon the one of the two parties who
clothed that third person as agent with the authority by which he
was enabled to commit the fraud"[x]. Lloyd v Grace, Smith and
Co.[xi] was also referred to by Dixon J in the leading Australian
High Court case, Deatons Pty Ltd v Flew[xii]. The case concerned an
assault by the appellant's barmaid who hurled a beer glass at a
patron. Dixon J stated that a servant's deliberate unlawful act may
invite liability for their master in situations where "they are acts
to which the ostensible performance of his master's work gives
occasion or which are committed under cover of the authority the
servant is held out as possessing or of the position in which he is
placed as a representative of his master"[xiii].
Through this authority, it is generally accepted that if an employee
commits fraud or misuses a computer system to conduct an illicit
action that results in damage being caused to a third party, the
employer may be held liable for their conduct. In the case of
the principal's agent, the principal is deemed to be directly liable.
In the context of the Internet, the scope in which a party may be
liable is wide indeed. A staff member or even a consultant (as an
agent) who publishes prohibited or proscribed material on websites
and blogs, changes systems or even data and attacks the site of
another party and many other actions could leave an organisation
liable. Stevenson Jordan Harrison v McDonnell Evans (1952)[xiv]
provides an example of this type of action. This case hinged on
whether the defendant (the employer) was able to be held liable
under the principles of vicarious liability for the publication of
assorted "trade secrets" by one of its employees which was an
infringement of copyright. The employee did not work solely for the
employer. Consequently, the question arose as to the sufficiency of the
"master-servant" affiliation between the parties for the conditions
of vicarious liability to be met. The issue in the conventional
"control test" as to whether the employee was engaged under a
"contract for services", against a "contract of service" was
substituted in these circumstances with a test of whether the tort-
feasor was executing functions that were an "integral part of the
business" or "merely ancillary to the business". In the former
circumstances, vicarious liability would extend to the employer.
Similarly, a contract worker acting as web master for an
organisation who loads trade protected material onto their own blog
without authority is likely to leave the organisation they work for
liable for their actions.
In Meridian Global Funds Management Asia Limited v Securities
Commission[xv], a pair of employees of MGFMA acted without the
knowledge of the company directors but within the extent of their
authority and purchased shares with company funds. The issue lay on
the qualification of whether the company knew, or should have known
that it had purchased the shares. The Privy Council held that
whether by virtue of the employees' tangible or professed authority
as an agent performing within their authority[xvi] or alternatively
as employees performing in the course of their employment[xvii],
both the actions, oversight and knowledge of the employees may well
be ascribed to the company. Consequently, this can introduce the
possibility of liability as joint tort-feasors in the instance where
directors have, on their own behalf, also accepted a level of
responsibility[xviii] meaning that if a director or officer is
explicitly authorised to issue particular classes of representations
for their company, and deceptively issues a representation of that
class to another resulting in a loss, the company will be liable
even if the particular representation was done in an inappropriate
manner to achieve what was in effect authorised.
The degree of authority is an issue of fact and relies appreciably
on more than the fact of employment providing the occasion for the
employee to accomplish the fraud. Panorama Developments (Guildford)
Limited v Fidelis Furnishing Fabrics Limited[xix] involved a company
secretary deceitfully hiring vehicles for personal use without the
managing director's knowledge. As the company secretary will
customarily authorise contracts for the company and would seem to
have the perceptible authority to hire a vehicle, the company was
held to be liable for the employee's actions.
Employers can be held to be either directly or vicariously liable
for the criminal behaviour of their employees.
Direct liability for organisations or companies refers to the class
of liability that occurs when it permits the employee's action. Lord
Reid in Tesco Supermarkets Limited v Nattrass[xx] formulated that
this transpires when someone is "not acting as a servant,
representative, agent or delegate" of the company, but as "an
embodiment of the company"[xxi]. When a company is involved in an
action, this principle usually relates to the conduct of directors
and company officers when those individuals are acting for or "as
the company". Being that directors can assign their
responsibilities, direct liability may encompass those employees who
act under that delegated authority. The employer may be directly
liable for the crime in cases where it may be demonstrated that a
direct act or oversight of the company caused or accepted the
employee's perpetration of the crime.
Where the prosecution of the crime involves substantiation of mens
rea[xxii], the company cannot be found to be vicariously liable for
the act of an employee. The company may still be found vicariously
liable for an offence committed by an employee if the offence does
not need mens rea[xxiii] for its prosecution, or where either
express or implied vicarious liability is produced as a consequence
of statute. Strict liability offences are one such case. In strict
liability offences and those that are established through statute to
apply to companies, the conduct or mental state of an employee is
ascribed to the company while it remains that the employee is
performing within their authority.
The readiness on the part of courts to attribute criminal liability
to a company for the actions of its employees seems to be
escalating. This is demonstrated by the Privy Council decision of
Meridian Global Funds Management Asia Ltd v Securities
Commission[xxiv] mentioned above. This type of fraudulent activity
is only expected to become simpler through the implementation of new
technologies by companies. Further, the attribution of criminal
liability to an organisation in this manner may broaden to include
those actions of employees concerning the abuse of new technologies.
It is worth noting that both the Data Protection Act 1998[xxv] and
the Telecommunications (Lawful Business Practice)(Interception of
Communications) Regulations 2000[xxvi] make it illegal to use
equipment connected to a telecommunications network for the
commission of an offence. The Protection of Children Act 1978[xxvii]
and Criminal Justice Act 1988[xxviii] make it a criminal offence to
distribute or possess scanned, digital or computer-generated
facsimile photographs of a child under 16 that are indecent.
Further, the Obscene Publications Act 1959[xxix] subjects all
computer material making it a criminal offence to publish an article
whose effect, taken as a whole, would tend to deprave and corrupt
those likely to read, see or hear it. While these Acts do not of
themselves create liability, they increase the penalties that a
company can be exposed to if liable for the acts of an employee
committing offences using the Internet.
[i] Broom v Morgan [1953] 1 QB 597.
[ii] Employees Liability Act 1991 (NSW).
[iii] G E Dal Pont, Law of Agency (Butterworths, 2001) [1.2].
[iv] Ibid [22.4].
[v] Singapore Broadcasting Association, SBA's Approach to the
Internet; see Century Insurance Co Limited v Northern Ireland Road
Transport Board [1942] 1 All ER 491; and Tiger Nominees Pty Limited
v State Pollution Control Commission (1992) 25 NSWLR 715, at 721.
[vi] Tiger Nominees Pty Limited v State Pollution Control Commission
(1992) 25 NSWLR 715.
[vii] Bugge v Brown (1919) 26 CLR 110, at 117 per Isaacs J.
[viii] Unreported decision in Warne and Others v Genex Corporation
Pty Ltd and Others, BC9603040, 4 July 1996.
[ix] Lloyd v Grace, Smith & Co. [1912] AC 716.
[x] [1912] AC 716, Lord Shaw of Dunfermline at 739.
[xi] [1912] AC 716.
[xii] Deatons Pty Ltd v Flew (1949) 79 CLR 370 at 381.
[xiii] Ibid.
[xiv] [1952] 1 TLR 101 (CA).
[xv] [1995] 2 AC 500.
[xvi] See Lloyd v Grace, Smith & Co. [1912] AC 716.
[xvii] See Armagas Limited v Mundogas S.A. [1986] 1 AC 717.
[xviii] DeMott, Deborah A. (2003) "When is a Principal Charged with an Agent's
Knowledge?" 13 Duke Journal of Comparative & International Law 291.
[xix] [1971] 2 QB 711.
[xx] [1972] AC 153.
[xxi] Ibid, at 170 per Lord Reid.
[xxii] See Pearks, Gunston & Tee Limited v Ward [1902] 2 KB 1,
at 11 per Channell J, and Mousell Bros Limited v London and North-Western
Railway Company [1917] 2 KB 836, at 843 per Viscount Reading.
[xxiii] See Mousell Bros Limited v London and North-Western Railway
Company [1917] 2 KB 836, at 845 per Atkin J.
[xxiv] [1995] 2 AC 500.
[xxv] Data Protection Act 1998 [UK].
[xxvi] Telecommunications (Lawful Business Practice)(Interception of
Communications) Regulations 2000 [UK].
[xxvii] Protection of Children Act 1978 [UK].
[xxviii] Protection of Children Act 1978 and Criminal Justice Act 1988 [UK].
[xxix] Obscene Publications Act 1959 [UK].
Craig Wright (GSE-Compliance)
Manager of Information Systems
Direct : +61 2 9286 5497
Craig.Wright () bdo com au
+61 417 683 914
BDO Kendalls (NSW)
Level 19, 2 Market Street Sydney NSW 2000
GPO BOX 2551 Sydney NSW 2001
Fax +61 2 9993 9497
<a href="http://www.bdo.com.au/" rel="nofollow">http://www.bdo.com.au/</a>
Craig Wright, Security Hero
April 4th, 2008
By Stephen Northcutt
Craig Wright certainly qualifies as a security hero! He has written articles and books on security and has nearly every SANS and GIAC certificate available (including platinum). He is a GIAC Technical Director, and jack-of-all-trades, master of a few, and all of us at the security laboratory thank him for his time!
Craig, I see that you are qualified in a number of disciplines including having just completed a master’s degree in law. So why did you choose information security?
When I was young, information security didn’t really exist as a career. I started doing some simple programming tasks and moved into a role as a SunOS 4.1 administrator. We had a custom developed database on the system and, at that point, security was generally the least of anyone's concerns. I had been tasked with ensuring that the data on the system remained secure and that the system was available, but there was no budget for security. Back in the days before the Web, Gopher proved a great tool for finding information. What I started learning about back then was just how many vulnerabilities exist.
I got into a little bit of trouble from time to time when I would demonstrate some of the vulnerabilities. This led to a reputation as the guy who could "break into stuff" - something that was both good and bad. When systems needed to be configured I would be consulted, but I also found that I would be blamed when anything went wrong.
So, of course, when firewalls came about in the mid-90s, I was the one that they were handed to. I stayed in security as it is something that I do well and it allows me to give back to the community.
So, how did you learn about firewalls back then?
Back then it was even more of the "wild wild web" than now. I cringe at some of the things we did. I started by putting together bits and pieces that I'd dug up and basically cobbled together a halfway decent firewall using the Firewall Toolkit. Back then code was available, and a lot simpler, so much of the learning process was really playing. This followed when I started working for an ISP. I was basically given a copy of Checkpoint Firewall-1 version 2 and expected to know it by the end of the week.
This wasn't as bad as it seemed since, having worked with the Firewall Toolkit and Gauntlet, I found Checkpoint to be easy.
What about "security cowboys" in the 90s? Back then it seemed that the security methodology was to download some security tool that compiled on a Sun 3, how were those times for you?
In the 90s most of us were, basically, cowboys. Back then, methodologies didn't exist; and if you wanted some level of functionality, you had to make it yourself. Mistakes were a common occurrence, but what really mattered was if you learned from those mistakes. The biggest change for me was taking a role in the Australian Stock Exchange where I managed the firewalls and other security devices. Working in an environment with a six 9's requirement for uptime was a real eye-opener.
More than anything else, the ASX taught me the benefits of a well planned project. I also learnt VMS. The ASX beat the cowboy out of me.
How do you build your skills?
Practice, practice, practice. And, add to that a lot of reading.
Also, since I have to commute, I have used text to audio conversion software and changed papers to MP3 files, so I listen to these as I drive. This takes care of the theory, leaving time to practice the various tools and techniques at home. Add to that a huge amount of training from SANS and others, and an inability to get out of Uni, and that about covers it.
These days, it has become even simpler. I act as an editor for a technical publisher and also author my own papers and books. Getting paid to conduct technical reviews is a great way to stay on top of things.
I noticed that you have an eclectic collection of qualifications. Has this helped your security career, or is it just out of general interest?
I have found that knowledge in a wide range of topics makes it easier to understand the viewpoint other people are coming from. Having studied finance and law has made my role as an auditor easier. I’m sure that many of my clients do not see it this way since I have a habit of pointing out obscure points of law they may not be complying with, but my role as an auditor is to point out risk to management.
I stay sane as I’ve learnt that it is not my problem on how they act to what I’ve pointed out to them. As long as I can ensure that they have an accurate understanding of risk they face, I’ve done my job.
Statistics and data mining skills have helped this. I get told all the time that there are not enough sources of data to be able to create adequate quantitative risk models. This is where I find that a mathematical foundation would help many in the industry. Methods such as longitudinal data analysis provide the means to scientifically model an organisation's risk. The difficulty is that these methodologies do not lend themselves to simple tools and require an analysis focused on the particular organisation.
Do you see security as an art form or science?
In practice it should be progressing towards more of a science than an art. However, very few people treat it this way. Unfortunately, marketing and hype obscures much of what is really important. Many of the simple practices that make a site secure don’t lead to an opportunity to sell services. As such, many of these are ignored.
On top of this, many people have the idea that the only way to test a system is by using a black box format in an attempt to simulate (falsely) what a “hacker” would do. I mean, I am happy to take an organisation's money and spend a day or two doing basic preliminary investigations that any script kiddie can do if they require it of me, but I'm much happier just getting the information from them and saving both of us time and them money. I see far too much hype around the skills related to attacking and breaking into a system and, by far, not enough effort into securing systems. After all, it takes far more skill to properly secure a system than it does to break into one.
So what do you see as the major problem with auditing and compliance?
I have to say the major problem is that people attempt to tick a box rather than fix a problem. Often, more effort is put into avoiding fixing a vulnerability or other issue than would be taken to correct it.
Another problem is that the industry is really geared away from fixing the problems. We seem to do our best to avoid confronting clients with the risk that they actually face. Many people say that compliance regimes such as SOX do little to secure a system. The truth is that this is not related at all to the compliance regime but to the general avoidance of them. As a case in point, we have been engaged to re-perform tests for SOX clients that are unrelated to the security of the system. On instructing the client that the controls they have implemented will not make them compliant with the requirements of SOX, we have been instructed to simply rerun the test of the controls in place.
So, it is not to say that SOX does not lead to a secure system, but rather people do their best to avoid it. In other cases, I have seen companies create their own stored procedures on a database to obscure data fields so that they can pass a PCI audit. The auditor is never given enough time to test all the systems, so hiding what is actually occurring is an easy way to become “compliant”. The silly thing is that, in many instances, the amount of effort to hide non-compliance is far greater than what would be required to make the system compliant.
So why do organizations try to avoid securing systems in your view? It certainly seems like there are two basic keys to information assurance, configuring systems correctly and detecting when the configuration fails. Yet, proper configuration does not seem to get much emphasis.
There seems to be a lack of knowledge and understanding about security that has not disappeared over the years and, if anything, has gotten worse. As security professionals, we have to take a lot of the blame. Many of us spend our time bickering over obscure issues and things that don’t really matter. We really need to step back and take a risk-based approach. Some training in economics and finance would be a great benefit to many people in the industry.
I certainly hear you about the bickering over obscure issues; I love Schneier's point in Beyond Fear, we tend to love Security Theater. What benefit do you see that studying economics and finance would offer the average Security professional?
We might be able to start having a risk-based approach. At the moment, too many of the issues in security come down to personal preferences. We really need to stand back and look at the true cost. Rather than installing that nice new toy with its six-figure price tag, maybe a little bit of time looking through and testing a few configuration standards (such as those from SANS and CISecurity.org) would benefit.
So, where do you see yourself in the future?
Ideally, I want to move into a technical research role. In my ideal position I would be either CTO and security evangelist or lab director. At the moment, I conduct research in my own time. The ideal would be having someone pay me for doing what is essentially my hobby.
WEDNESDAY, 30 JANUARY 2008
Trusting electronically signed documents.
Both electronic and paper documents are subject to tampering. The discovery of hash collisions has demonstrated that the process of signing a hash of a document is not without its own vulnerabilities. In fact, a collision allows two versions of a document to be created with the same hash and thus the same electronic signature.
It was stated in a response to an earlier post that “Electronic contracts do not have to be re-read when they are returned because there's generally no mechanism (unless it's built into the electronic process) to alter the contract terms, scratch out a line, insert text, etc. What you send is what is being signed.”
Unfortunately this is not true.
An attacker could generate two documents. One states:
Sell at $500,000.00 (Order 1)
The second document states:
Sell at $1,000,000.00 (Order 2)
Our attacker wants to have the second document as the one that is signed. By doing this they have increased the sale contract by $500,000.
Confoo is a tool that has been used to demonstrate two web pages that look different, but have the same MD5 hash (and there are also issues with other hash algorithms as well).
Digital signatures typically work using public key crypto. The document is signed using the private key of the signer. The public key is used for verification of the signature. The issue is that public key crypto is slow, so rather than signing the entire document, a hash of the document is signed. As long as the hash is trusted, the document is trusted. The concern is that collisions exist.
So back to the issue. Our attacker takes order 1 and order 2 and uses the Confoo techniques (also have a look at Stripwire).
The client is sent a document that reads as “Order 1” and they agree to buy a product for $500,000. As such, they sign the order using an MD5 hash that is encrypted with the buyer's private key. Our attacker (using Confoo-style techniques) has set up a document with a collision: Order 1 and Order 2 both have the same hash.
Our attacker can substitute the orders and the signed document (that is a verified hash) will still verify as being signed.
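The substitution attack can be sketched with a deliberately weakened toy digest. Everything here is an assumption of the sketch: `weak_hash` is a 16-bit truncated SHA-256 standing in for a broken algorithm (real MD5 collisions come from cryptanalysis, not brute force), and "signing" is reduced to recording the digest, since verification in a hash-then-sign scheme ultimately reduces to comparing digests either way.

```python
import hashlib
from itertools import count

def weak_hash(data: bytes) -> str:
    # Toy 16-bit digest (truncated SHA-256), weak enough to brute-force.
    return hashlib.sha256(data).hexdigest()[:4]

order_1 = b"Sell at $500,000.00 (Order 1)"
signature = weak_hash(order_1)  # a real scheme encrypts this digest with the
                                # signer's private key; the point is that the
                                # signature commits only to the digest

# Forge a different document with the same digest (~65,000 tries on average).
for n in count():
    order_2 = b"Sell at $1,000,000.00 (Order 2) #" + str(n).encode()
    if weak_hash(order_2) == signature:
        break

def verify(document: bytes, sig: str) -> bool:
    # Verification only compares digests, so it cannot tell the two apart.
    return weak_hash(document) == sig

assert verify(order_1, signature)  # the order the buyer read and signed
assert verify(order_2, signature)  # the substituted order also verifies
```

Because the signature binds to the digest rather than to the bytes of the document, any second document with the same digest inherits the signature.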
The ability of Microsoft Word to run macros and code makes it a relatively simple attack to create a collision in this manner.
So, electronic documents do need to be re-read, but it is simpler in that there are tools to verify these. Ensure that the hash used is trusted, and even use multiple hashes together.
This attack works due to the nature of hashing algorithms. If you have two documents, x and y, that have the same hash (i.e. a collision), then appending an additional block of information q to each document will also result in a collision. That is, (x+q) will have the same hash as (y+q).
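The appending property comes from the Merkle-Damgard construction that MD5 uses: once two equal-length inputs produce the same internal chaining state, any common suffix keeps the states identical. A toy Merkle-Damgard hash makes this visible; the 16-bit state and the `compress` function below are hypothetical stand-ins for MD5's 128-bit state and real compression function.

```python
from itertools import count

def compress(state: int, block: bytes) -> int:
    # Hypothetical 16-bit compression function (stand-in for MD5's).
    for byte in block:
        state = ((state << 5) ^ (state >> 3) ^ byte) & 0xFFFF
    return state

def toy_md(message: bytes, block_size: int = 8) -> int:
    # Merkle-Damgard chaining: pad to a block boundary, then fold each
    # block into the running internal state; the final state is the hash.
    message += b"\x00" * (-len(message) % block_size)
    state = 0x1234  # fixed initialisation vector
    for i in range(0, len(message), block_size):
        state = compress(state, message[i:i + block_size])
    return state

# With only 2**16 states, the pigeonhole principle guarantees a collision
# among 2**16 + 1 distinct single-block messages.
seen = {}
for n in count():
    m = n.to_bytes(8, "big")  # exactly one block long
    h = toy_md(m)
    if h in seen:
        x, y = seen[h], m
        break
    seen[h] = m

q = b" plus any appended contract terms"
assert x != y and toy_md(x) == toy_md(y)  # a genuine collision
assert toy_md(x + q) == toy_md(y + q)     # the collision survives the suffix
```

Since x and y are the same length, the chaining state after processing them is identical, and processing the shared suffix q from identical states can only produce identical hashes.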
Posted by Craig Wright at Wednesday, January 30, 2008
Labels: digital certificates, Digital Forensics, signitures
Tuesday, 29 January 2008
What is an “Electronic Contract”
by Craig S Wright
When contrasting contractual principles, it is clear that where a contract is not required to be in writing (Columbia Law Review, Apr. 1929, pp. 497-504; Columbia Law Review, Jun. 1907, pp. 446-449; McKendrick, E, 2005, p 184), little additional uncertainty is created where the contract is completed electronically. In fact, it is clear that electronic evidence must hold greater weight than verbal evidence (Lord Justice Auld, Sept 2001, Cpt 11). What is not clear is the extent of the weight attached to the various forms of electronic evidence. The strength of a digital signature algorithm and the security surrounding the mechanisms used to sign an electronic document will respectively influence the weight associated with any piece of electronic evidence.
It has been argued that the digital contract may appear on the computer screen to consist of words in a written form but merely consist of a virtual representation (Allison et al, 2003). The ECA has removed the uncertainty and doubt surrounding the question as to the nature of electronic form used in the construction of a contract. In this, the ECA specifies that the electronic form of a contract is to be accepted as equivalent to a contract in writing.
An electronic contract has a twofold structure. Thought of electronically, the contract is a sequence of numbers and code saved to some electronic or magnetic medium. Alternatively, the contract becomes perceptible through a transformation of the numeric code when broadcast to a computer output device such as a printer or screen (Bainbridge, 2000; Reed, 2004; Brownsword, 2000). Prior to the passing of the ECA, this dichotomy exacerbated the uncertainty as to whether an electronic contract could be regarded as a contract in writing.
The English legal doctrines of offer, acceptance and consideration when coupled with an intention to create legally binding relations define the necessary conditions for the creation of a contract. There is no necessity for the most part [Excluding contracts such as for the transfer of real property, which are covered by a variety of specific acts] that any contract be concluded in writing.
The question as to whether contracts performed electronically are legally equivalent to writing comes down to a question of evidential weight and the application of the parol evidence rule (Durtschi, 2002; Lim, 2002). By stating that electronic contracts are equivalent to writing, the ECA has, in effect, forbidden the introduction of extrinsic evidence that could change the terms of the electronic contract.
The question would remain as to whether the electronic communications contain the final agreement between the parties. Where some, though not all, of the terms are agreed in the electronic communication, a partial integration will result, allowing extrinsic evidence (Treitel, 2003).
The ECA did little to suppress the disputes surrounding the evidential weight attached to an electronic signature, due to a number of objections [E.g., London Borough of Newham for the National Smart Card Project (2003)] received prior to the passing of the bill. Accordingly, when the Act was passed on 25 May 2000, its provisions as to the weight of electronic signatures did not meet the objectives of the EC Directive on Electronic Signatures and were less detailed. Section 7(1) provides:
'In any legal proceedings-
(a) an electronic signature incorporated into or logically associated with a particular electronic communication or particular electronic data, and
(b) the certification by any person of such a signature, shall each be admissible in evidence in relation to any question as to the authenticity of the communication or data or as to the integrity of the communication or data.'
Posted by Craig Wright at Tuesday, January 29, 2008
Wall Street Clowns and Their Models
Recently I cited an Economist article in my post Economist on the Peril of Models. While walking through the airport, this BusinessWeek cover story, Not So Smart, caught my eye. I found the following excerpts interesting.
The titans of home loans announced they had perfected software that could spit out interest rates and fee structures for even the least reliable of borrowers. The algorithms, they claimed, couldn't fail...
It was the assumptions and guidelines that lenders used in deploying the technology that frequently led to trouble, notes industry veteran Jones. "It's garbage in, garbage out," he says. Mortgage companies argued their algorithms provided near-perfect precision. "We have a wealth of information we didn't have before," Joe Anderson, then a senior Countrywide executive, said in a 2005 interview with BusinessWeek. "We understand the data and can price that risk."
But in fact, says Jones, "there wasn't enough historical performance" related to exotic adjustable-rate loans to allow for reasonable predictions. Lenders "are seeing the results of not having that info now..."
At this point it probably sounds like I am seriously anti-model. That isn't really the case. The points I cited from BusinessWeek involve inserting arbitrary values into models. Non-arbitrary data is based on some reality, such as "historical performance" for an appropriate past period, looking forward into an appropriate future period.
Incidentally, one of the articles I read cited the Intangible Asset Finance Society, which is "dedicated to capturing maximum value from intellectual properties and other intangible assets such as quality, safety, security; and brand equity." That sounds like something to review.
Labels: clowns, favorites, philosophy, threat model
Posted by Richard Bejtlich at 00:19 on Saturday, September 01, 2007
Craig S Wright
The Charity I Support
Uniting Care Burnside
A just and safe society for all children, young people and families - because children matter.
What I do
Each year I donate a Christmas party for the families in the NSW Hastings region supported by Burnside.
We have been doing this for a while now.
On top of this I recycle computers. To do this I take 1.5 to 2 year old corporate lease computers and refurbish them so that they can run the most current programs.
The question is - what do you do to help?
If you do not have the time, have you thought about a donation?
This blog has been monetised. This is where the money goes. By clicking and purchasing on this site, you help Burnside and Hackers for Charity. All monies earned here are split 50/50 between these two charities.
"Absolute security does not exist and nor can it be achieved. The statement that a computer is either secure or not is logically falsifiable (Peisert & Bishop, 2007), all systems exhibit a level of insecurity."
Craig Wright Is GAY!!
Craig Is Gay!!! I wrote this song for him, as he is a GAY!!
Published by azzawazza1992 on 9 Aug 2007
Subject: FW: Why Easy To Use Software Is Putting You At Risk
From: "Craig Wright" <cwright () bdosyd ! com ! au>
Date: 2006-02-23 23:10:40
Strange that you should pick on architecture. We have the fall of a piece of major architecture today which has killed a large number of people; the 2nd by the same person. The twin towers failed due to structural deficiencies more than the planes. Do you wish for me to quote the statistics on architectural failure? They are greater than you may think.
You seem to make the simplification that all code can be written correctly and tested; that no matter how long and complex it is, there is a way of determining the error rate. This is wrong, and I shall get to it in this post. I will even help you develop an argument that you may use to dispute me.
The majority of libraries used in development (excluding open source, e.g. Linux) are compiled object code. Are you expecting that the world stop using all code unless they have the source? That all source be checked?
Dijkstra developed the method "correct by construction". He also did extensive work on the mathematical proof of algorithms. Please read the works below.
Kurt Gödel, Alan Turing and Alonzo Church (GTC) did work which resulted in "Computability Theory". They discovered that certain basic problems cannot be solved by computers. Cohen, Hollingworth and Dijkstra all developed this theory further.
Now, I stated I would get to error determination. GTC demonstrated in computational theory that it is not possible to create a machine that can determine whether a mathematical statement is true or false. All code and programming is a mathematical statement or algorithm. The determination of the code's function is a mathematical proof (see Cohen and Dijkstra).
As it is not possible for either an automaton or a Turing machine to determine the correctness of the program, it is not possible to determine the effects of code.
Dijkstra started work on formal verification (what you are calling for) in the 1970s. Formal verification was the prevailing opinion at the time: that one should first write a program and then provide a mathematical proof of its correctness.
"The Cruelty of Really Teaching Computer Science" (Dijkstra, 1988) saw Dijkstra trying to push computable correctness. This missed the need for engineers to compromise on the one hand with the physical world and on the other with cost control.
This is the issue. To move ahead and develop code that people want, we cannot complete mathematical software verifications. No machine (at least yet known) can verify code. The term machine refers to the computer science idea of a machine - not a physical item.
To state that all code should be verified would be great for myself. I am a mathematician. Computers cannot verify code (see the theory of computation). This would make my mathematical skills in greater demand and help next time I go for a raise.
I seem to be adding facts to the discussion. Dijkstra, Turing et al. are the people who created the foundations of computer science.
Please feel free to add comment on the use of finite state machines, labelled transition systems, Petri nets, timed automata, hybrid automata, process algebra, formal semantics of programming languages such as operational semantics, denotational semantics, Hoare logic or any other existing method of computational verification.
I have attached a paper of Dijkstra's. This paper could act as a foundation for your argument. Dijkstra argues for formal verification against software engineering. Please feel free to build on the argument - if you manage to come up with something that is verifiably valid, not only will you get to have one up on me, you may be remembered in years to come in the computer science discipline.
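The undecidability claim in the email traces back to Turing's diagonalization argument, which can be sketched in a few lines. The oracle below is a deliberately naive stand-in; the point of the construction is that no correct total halting oracle can exist, because any candidate is wrong about its own "contrarian".

```python
def make_contrarian(halts_oracle):
    # Diagonalization: build a program that does the opposite of
    # whatever the oracle predicts about it.
    def g():
        if halts_oracle(g):
            while True:      # oracle said "g halts" -> loop forever
                pass
        return "halted"      # oracle said "g loops" -> halt immediately
    return g

def pessimist(func):
    # A candidate "verifier" that claims every program loops forever.
    return False

g = make_contrarian(pessimist)
result = g()   # g halts, so the pessimist oracle was wrong about g
```

Running the contrarian of an oracle that answers True would loop forever, which is why only the False branch is exercised here; either way, the oracle is refuted.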
DEF CON 13 - Dan Kaminsky,
Black Ops of TCP 2005
Published on Feb 7, 2014
Another year, another batch of packet related stunts. A preview:
A Temporal Attack against IP
It is commonly said that IP is a stateless protocol. This is not entirely true. We will discuss a mechanism by which IP's limited stateful mechanisms can be exploited to fingerprint operating systems and to evade most intrusion detection systems.
Application-layer attacks against MD5
We will show how web pages and other executable environments can be manipulated to emit arbitrarily different content with identical MD5 hashes.
Real time visualizations of large network scans
Building on Cheswick's work, I will demonstrate tools for enhancing our comprehension of the torrential floods of data received during large scale network scans. By leveraging the 3D infrastructure made widely available for gaming purposes, we can display and animate tremendous amounts of data for administrator evaluation.
A High Speed Arbitrary Tunneling Stack
Expanding on last year's talk demonstrating live streaming audio over DNS, I will now demonstrate a reliable communication protocol capable of scaling up to streaming video over multiple, arbitrary, potentially asymmetric transports.
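Tunneling arbitrary data over DNS rests on packing payload bytes into DNS-safe query names. A minimal codec sketch (not Kaminsky's actual tool; the domain t.example.com is a placeholder) might look like:

```python
import base64

MAX_LABEL = 63  # RFC 1035 limit on a single DNS label

def encode_query(payload: bytes, domain: str = "t.example.com") -> str:
    # Base32 is case-insensitive, matching DNS name handling.
    text = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [text[i:i + MAX_LABEL] for i in range(0, len(text), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_query(qname: str, domain: str = "t.example.com") -> bytes:
    text = qname[: -len(domain) - 1].replace(".", "").upper()
    text += "=" * (-len(text) % 8)          # restore stripped base32 padding
    return base64.b32decode(text)

chunk = b"streaming data over DNS"
qname = encode_query(chunk)                 # looks like an ordinary hostname
assert decode_query(qname) == chunk
assert all(len(label) <= MAX_LABEL for label in qname.split("."))
```

A real tunnel would also cap the total name at 253 bytes, sequence the chunks, and carry the return path in DNS responses (e.g. TXT records); this sketch covers only the per-query encoding.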
Dan Kaminsky, also known as Effugas, is a Senior Security Consultant for Avaya's Enterprise Security Practice, where he works on large-scale security infrastructure. Dan's experience includes two years at Cisco Systems designing security infrastructure for large-scale network monitoring systems.
He is best known for his work on the ultra-fast port scanner scanrand, part of the "Paketto Keiretsu", a collection of tools that use new and unusual strategies for manipulating TCP/IP networks. He authored the Spoofing and Tunneling chapters for "Hack Proofing Your Network: Second Edition", was a co-author of "Stealing The Network: How To Own The Box", and has delivered presentations at several major industry conferences, including Linuxworld, DefCon, and past Black Hat Briefings.
Dan was responsible for the Dynamic Forwarding patch to OpenSSH, integrating the majority of VPN-style functionality into the widely deployed cryptographic toolkit. Finally, he founded the cross-disciplinary DoxPara Research in 1997, seeking to integrate psychological and technological theory to create more effective systems for non-ideal but very real environments in the field. Dan is based in Silicon Valley.
In 2005, Bernard NotHaus attended a hacker convention in Amsterdam. At that conference he met a younger man who called himself "Satoshi Nakamoto". The man later spoke to Joseph Vaughn Perling, and in that conversation described what was later to become Bitcoin. Bernard disclosed this in an interview in February 2014.
Work-in-Progress
Triple Entry Accounting
Date: 2005/12/25 23:04:21
First Known Satoshi Nakamoto Sighting
[...] In between speeches at the 2005 "What the Hack" event, during a rare period without much rain, I went out to grab one of the free toasties which were given out with a free anonymous domain registration at a path-side cart at this hacker festival in Liempde. I had been putting my video camera to use, grabbing some random shots of the tents, the environment, the people ....
Dr. Craig S Wright's dissertation:
Doctor of Theology in Comparitive Religous and Classical Studies (sic!)
Doctor of Theology in Comparative Religious and Classical Studies
"Gnarled roots of a creation theory"
Sourced from Dr. Craig S Wright's LinkedIn profile:
Respected executive and technology leader delivering proven ability to capitalize on enterprise-level technologies and pioneering strategies. A sought-after internationally recognized author and public speaker, delivering solutions to government and corporate departments in SCADA security, Cyber Security and Cyber Defense, as well as leading the uptake of IPv6 and Cloud technologies. Drives innovative strategies that result in the strategic redevelopment and invigoration of both startups and established firms. Futurist, thought leader and expert with proven innovation in program leadership, execution design and strategic redevelopment.
1998 - 2003
Doctor of Theology in Comparitive Religous and Classical Studies - Guess (I am an ex-chatholic who is now involved in the UC) Ask me and I may share. I act as a lay pastor and I do not always desire to argue with people who have no concept of religion. I was a catholic, became an atheist, and moved towards the uniting church as I learnt more in science and mathematics. If you need to ever need to know of Dionysus, Vesta, Menrva, Ceres (Roman Goddess of the Corn, Earth, Harvest) or other Mythological characters - I am your man. I could even hold a conversation on Eileithyia, the Greek Goddess of Childbirth and her roman rebirth as Lucina. I bet you did not know that Asklepios Aesculapius is the Greek God of Health and Medicine or that Lucifer is the name of the Roman Light-bearer, the God and Star that brings in the day. Extra-activity: A comparitive study of Greko-Roman foundations to the Judeo-Christian origins of the Eve belief and myth structure. If you are really lucky (or unlucky as the case my be) I may let you read my dissertation: "Gnarled roots of a creation theory".
PipeNet 1.1 and b-money
I've discovered some attacks against the original PipeNet design. The new
protocol, PipeNet 1.1, should fix the weaknesses. PipeNet 1.1 uses layered
sequence numbers and MACs. This prevents a collusion between a receiver
and a subset of switches from tracing the caller by modifying or swapping
packets and then watching for garbage.
A description of PipeNet 1.1 is available at
Also available there is a description of b-money, a new protocol for
monetary exchange and contract enforcement for pseudonyms.
Please direct all follow-up discussion of these protocols to cypherpunks.
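The layered-MAC idea can be sketched as follows: assuming each switch shares a key with the sender, every hop verifies and strips one MAC layer, so a modified, swapped, or replayed packet fails verification immediately. This is a simplified illustration, not Dai's actual PipeNet 1.1 protocol.

```python
import hashlib
import hmac

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def wrap(hop_keys, seq: int, payload: bytes) -> bytes:
    # Sender adds one MAC layer per switch, innermost layer added
    # first, so the first switch verifies and strips the outermost MAC.
    msg = payload
    for key in reversed(hop_keys):
        msg = mac(key, seq.to_bytes(4, "big") + msg) + msg
    return msg

def unwrap(key: bytes, seq: int, msg: bytes) -> bytes:
    # Each switch checks its layer against the sequence number before
    # forwarding; any tampering breaks the MAC chain here.
    tag, rest = msg[:32], msg[32:]
    expect = mac(key, seq.to_bytes(4, "big") + rest)
    if not hmac.compare_digest(tag, expect):
        raise ValueError("modified, swapped, or replayed packet")
    return rest

keys = [b"switch-1", b"switch-2", b"switch-3"]
msg = wrap(keys, seq=7, payload=b"hello")
for key in keys:                 # each switch strips its own layer in order
    msg = unwrap(key, 7, msg)
assert msg == b"hello"
```

Binding the sequence number into every layer is what stops a colluding receiver from replaying or reordering packets to watch for garbage downstream.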
The Unintended Consequences of E-Cash
A position paper by Michael Froomkin for the
Panel on 'Governmental and Social Implications of Digital Money'
Computers, Freedom & Privacy Conference (CFP'97)
Burlingame, California, USA
Wed. March 12, 1997, at 2pm.
Panelists: Roger Clarke, David Chaum, Michael Froomkin, and Tim May.
This position paper resides at http://www.law.miami.edu/~froomkin/articles/cfp97.htm
Ver 1.2 Mar 12, 1997. © 1997 A. Michael Froomkin. All Rights Reserved. Permission granted to copy for non-profit uses.
The Unintended Consequences of E-Cash
Electronic cash, broadly defined, includes both smart-card based tokens of value and digital coins or other digital tokens of value (e.g. "digital checks"). [If these distinctions are not meaningful to you, please see Digital Cash: A Technical Menu, at URL http://www.law.miami.edu/~froomkin/articles/oceanno.htm#xtocid583121.]
For the foreseeable future I predict these new payment media will have major economic effects only in the part of the economy that uses small and micro-transactions. Unless legal rules change significantly, consumers who live in jurisdictions that provide legal protections for debit or credit card transactions will tend to use them for larger payments. After all, why use e-cash when you can use a credit card and cancel the payment if the seller fails to deliver what was promised? Thus, e-cash will have relatively modest overall effects on commerce, the money supply, and the economy. And e-cash will have equally modest effects on taxes and social mores.
The (Mostly) Bad News
I predict that most of the social effects of e-cash will be negative, in part due to the (over) reaction from regulatory authorities, in part due to the ways in which corporations will be able to amass more data on consumers. Government action will (largely unintentionally) exacerbate the negative effects of e-cash on privacy:
1996-09-17 - Re: Risk v. Charity (was: RE: Workers Paradise. /Political rant
To: Black Unicorn <email@example.com>
Message Hash: dfc75b84234ea728c6c34cb022dde758d44d5f9fb266d9a00092f593a9c93c6b
Message ID: <199609170703.RAA21552@mac.ce.com.au>
Reply To: N/A
UTC Datetime: 1996-09-17 11:38:06 UTC
Raw Date: Tue, 17 Sep 1996 19:38:06 +0800
Date: Tue, 17 Sep 1996 19:38:06 +0800
To: Black Unicorn <firstname.lastname@example.org>
Subject: Re: Risk v. Charity (was: RE: Workers Paradise. /Political rant
Personally, I paid my way through uni...full fees. I took out a loan
when I developed cancer to pay for it (as the health insurance was
not finalised for aproval - so they got out of paying). The few
months I was unemployed after I left the military because of a
confict of interests I earned money by doing whatever I could get
(even though I am an engineer I have worked in a petrol station). So
why and for what reason sould I have to pay several 10's of thousands
each year to support others. I have never taken help from the
govenment, I do not feel I should have to pay as well.
And what am I paying for...to protect the status quo. I believe that
there is more than enough help for ppl available. They just need to
get off their butts and work.
> > email@example.com (Timothy C. May) wrote:
> > >"Saving for a rainy day," whether saving, investing, getting an education
> > (while others are out partying), preparing, etc., all takes effort and
> > commitment. If those who save and prepare are then told they have to pay
> > high taxes to support those who partied....well, the predictable effect
> > [...] is _more_ people in agony. When you tell people that a compassionate
> > society will meet their basic needs, a predictable fraction of them will choose
> > not to work hard and prepare themselves.
> > Two questions, two observations:
> > Do you have health insurance?
> > Do you have life insurance?
> Yes, so?
> Yes, so?
Myself also yes,yes
> > I have commented on your line of reasoning before and and it still
> > seems to me that an important part of the discussion is missed.
> > Specifically, that anyone can "save for a rainy day" and still not be
> > able to provide for events that can always happen: Heart attack, stroke,
> > car accident, pinched nerve that leaves you in excruciating pain and
> > unable to work for several years.
> Understand what it is you are saying.
,'~``. \|/ ,'``~.
(-o=o-) (@ @) ,(-o=o-),
| Soon, we may all be staring at our computers, wondering |
| whether they're staring back. |
| [Network Admin For WPA Business Products. aka doshai >;-) ] |
| .oooO http://pip.com.au/~doshai/ Oooo. |
| ( ) Oooo. .oooO ( ) |
+-----\ (----( )-------oooO-Oooo--------( )--- ) /---------+
\_) ) / \ ( (_/
Key fingerprint = 2D F4 54 BB B4 EA F1 E7 B6 DE 48 92 FC 8D FF 49
Send a message with the subject "send pgp-key" for a copy of my key.
(if I want to give it to you)
E-Money (That's What I Want)
Author: Steven Levy
The killer application for electronic networks isn't video-on-demand. It's going to hit you where it really matters - in your wallet. It's not only going to revolutionize the Net, it will change the global economy.
Clouds gather over Amsterdam as I ride into the city center after a day at the headquarters of DigiCash, a company whose mission is to change the world through the introduction of anonymous digital money technology. I have been inundated with talk of smart cards and automated toll takers and tamper-proof observer chips and virtual coinage for anonymous network ftps. I have made photocopies using a digital wallet and would have bought a soda from a DigiCash vending machine, but it was out of order.
My fellow passenger and tour guide is David Chaum, the bearded and ponytailed founder of DigiCash, and the inventor of cryptographic protocols that could catapult our currency system into the 21st century. They may, in the process, shatter the Orwellian predictions of a Big Brother dystopia, replacing them with a world in which the ease of electronic transactions is combined with the elegant anonymity of paying in cash.
He points out the plaza where the Nazis rounded up the Jews for deportation to concentration camps.
This is not idle conversation, but a topic rooted in the Chaum Weltanschauung - state repression extended to the maximum. David Chaum has devoted his life, or at least his life's work, to creating cryptographic technology that liberates individuals from the spooky shadows of those who gather digital profiles. In the process, he has become the central figure in the evolution of electronic money, advocating a form of it that fits neatly into a privacy paradigm, whereby the details of people's lives are shielded from the prying eyes of the state, the corporation, and various unsavory elements.
WIRED Magazine 1994
- Adam Back - 90s Cypherpunk, Inventor of HashCash, CEO of Blockstream
- David D. Friedman - Son of Milton, Anarcho-Capitalist theorist, Not a Cypherpunk but "Crypto-Anarchy" draws a lot from his work
- Eric Hughes - Founding member of the Cypherpunk Mailing List, Author of "A Cypherpunk's Manifesto"
- Gregory Maxwell - Bitcoin Core Developer, Blockstream CTO, Controversial figure to the Big Block political faction
- Hal Finney - 90s Cypherpunk, Received first Bitcoin transaction (from Satoshi), Strong candidate for Satoshi
- Ian Grigg - 90s Cypherpunk, Inventor of Ricardian Contracts
- Jim Bell - 90s Cypherpunk, Crypto-anarchist, Author of Assassination Politics
- John Gilmore - Co-founder of the Cypherpunk Mailing List and the Electronic Frontier Foundation
- John Perry Barlow - 90s Cypherpunk, Author of "A Declaration of the Independence of Cyberspace"
- Julian Assange - Founder of Wikileaks, Member of Cypherpunk Mailing List in its heyday
- Nick Szabo - 90s Cypherpunk, Creator of Bitcoin Precursor BitGold, Prolific writer of many important papers, Strong candidate for Satoshi
- Paul Calder Le Roux - Author of E4M disk-encryption software, Suspected author of TrueCrypt, Former criminal empire boss (in a very Crypto-Anarchist sense), DEA informant, Currently in US Custody
- Phil Zimmermann - Author of the Pretty Good Privacy (PGP) Public key encryption software
- Pieter Wuille - Bitcoin Core Developer, Holds the number 2 spot on the bitcoin/bitcoin contributors list (2018-05-28), Blockstream co-founder
- Satoshi Nakamoto - Pseudonymous Founder of Bitcoin, May be a person or group, Many people put forward as possible candidates, Very probably the alias of a 90s Cypherpunk
- Timothy C. May - Founding member of Cypherpunk Mailing List, writer of "The Crypto-Anarchist Manifesto" and the "Cyphernomicon" (mailing list FAQ)
- Vinay Gupta - 90s Cypherpunk, Inventor of the Hexayurt, Resilience Guru, Involved with Ethereum
- Wei Dai - 90s Cypherpunk, Cryptographer, Creator of Bitcoin-precursor B-Money
“I don’t believe we shall ever have a good money again before we take the thing out of the hands of government. That is, we can’t take them violently out of the hands of government. All we can do is by some sly roundabout way introduce something that they can’t stop.” – Friedrich Hayek, 1976