The security of the Bitcoin block chain fundamentally depends on its
proof-of-work, based on calculating SHA256 hashes. The network of miners uses
its SHA256 computational power to vote on which block chain to trust in order
to confirm transactions. Miners are incentivized to do this by being rewarded
with transaction fees as well as newly mined coins. Because computing power
cannot be faked, voting cannot be cheated. This is, in essence, what makes
Bitcoin's block chain trustworthy.
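The hash-based voting described above can be illustrated with a toy proof-of-work loop. This is only a sketch: real Bitcoin hashes an 80-byte binary block header against a full 256-bit difficulty target, whereas this toy version uses an arbitrary byte string and a leading-zero-bytes criterion.

```python
import hashlib

# Toy proof-of-work: vary a nonce until the double-SHA256 of the
# (simplified) header starts with the required number of zero bytes.
# Bitcoin itself compares the hash against a 256-bit target; counting
# leading zero bytes is a simplification with the same flavor.
def mine(header: bytes, zero_bytes: int = 2):
    target = b"\x00" * zero_bytes
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"toy-block-header")
print(nonce, digest.hex())
```

Finding a valid nonce is expensive (on average 65,536 attempts for two zero bytes) while verifying it takes a single hash; that asymmetry is why computational voting cannot be faked.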
However, some say this SHA256 proof-of-work used in mining consumes too much
energy and is a "huge waste" or "unsustainable".
I strongly disagree. Here is why:
When I recently visited China for the first time, as an InfoSec professional
I was very curious to finally be able to poke at the Great Firewall of China
with my own hands, to see how it works and how easy it is to evade.
In short, I was surprised by:
- Its high level of sophistication, such as its ability to exploit side-channel
leaks in TLS (I have evidence it can detect the "TLS within TLS" characteristic
of secure web proxies)
- How poorly simple Unix computer security tools fared at evading it
- The fact that 2 of the top 3 commercial VPN providers in China use RSA keys
so short (1024 bits!) that the Chinese government could factor them
I lost an hour solving a problem about embedding a video file in PowerPoint for my wife, due to a subtle compatibility issue between avconv (Libav) and PowerPoint. If any brave soul wants to investigate and narrow down the bug, I provide enough details here to do so; I do not have the time to do it myself.
Friends of mine asked what the probability is of being a "miracle baby", for example a baby born on 9/14 and weighing 9lb14oz. In other words, what is the probability that a baby's birthday expressed as MM/DD corresponds to his birthweight expressed as MM pounds and DD ounces?
(Finally, I found a use for imperial units: they make theoretical number problems more interesting ;)
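The probability can be estimated by simulation. The modeling assumptions below are mine, not from the original question: birthdays uniform over a non-leap year, and birthweights roughly normal with mean 7.3 lb and standard deviation 1.15 lb (a common rough figure for newborns).

```python
import random

# Monte Carlo estimate of the "miracle baby" probability, under the
# assumptions stated above (uniform birthdays, Normal(7.3, 1.15) lb
# birthweights). A hit is: pounds == birth month AND ounces == birth day.
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def simulate(trials=1_000_000, seed=42):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        month = rng.randrange(1, 13)
        day = rng.randrange(1, DAYS_IN_MONTH[month - 1] + 1)
        weight_lb = rng.gauss(7.3, 1.15)
        pounds = int(weight_lb)                    # whole pounds
        ounces = round((weight_lb - pounds) * 16)  # remaining ounces
        if ounces == 16:                           # rounding carried over
            pounds, ounces = pounds + 1, 0
        if pounds == month and ounces == day:
            hits += 1
    return hits / trials

print(simulate())
```

Under these assumptions the estimate comes out at a fraction of a percent; days above 15 can never match an ounce count, which is one reason the event is rare.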
Wanting to know how the ongoing North American Drought affects California,
I was looking for a very simple chart: one showing the evolution of the storage
level of the major water reservoirs as a percentage of their capacity.
The website of the California Department of Water Resources has a
map showing the status of the 12 major reservoirs, but you will not
find an aggregate graph there.
So I made one.
I scraped the monthly storage level of 11 of the 12 major reservoirs
over the last 40 years, in order to include data from the
California Drought of 1976
and 1977. For each month, I calculated the aggregate storage level
(the sum of the storage of all reservoirs, divided by the sum of their
capacities). The result is below:
We can clearly see the yearly wet/dry seasons.
As of September 2014, the reservoirs are at 27.6% of capacity in aggregate.
By this metric, this is the worst level in 37 years, since the 1976-1977
drought, when their aggregate storage level dropped to 14.6% in October 1977
(California had much worse water management policies at the time; you can
read a report about it).
From looking at the graph, one can guess that if the upcoming winter does
not bring abundant rainfall or snowfall, then the drought is going to
be as bad as, or worse than, the 1976-77 drought by the end of the summer
of 2015. This is rather scary.
For some reason the Exchequer reservoir (EXC) does not have a monthly
data feed, only daily/hourly ones, and I was too lazy to adjust my scripts
for this peculiarity.
Proof-of-concept for a super-distributed CDN storing data in DNS records
I wrote a Chrome extension that uses DNS instead of HTTP to fetch web content.
It implements the fake TLD .cdn53: when visiting http://zorinaq.com.cdn53,
the extension intercepts the request, sends a DNS query for the TXT record for
"_cdn53.zorinaq.com", and the response contains the HTML content, as simple as
that.
It also works for URLs containing a path (http://zorinaq.com.cdn53/foo/bar). It
works with relatively large resources (each TXT record can contain up to ~65
KB).
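The URL-to-record mapping just described can be sketched as a pure function. The .cdn53 TLD and the "_cdn53." label prefix come from the post; the helper itself is illustrative, not the extension's actual code.

```python
from urllib.parse import urlparse

def cdn53_query_name(url):
    """Map an http://<domain>.cdn53/<path> URL to the TXT record to query."""
    parts = urlparse(url)
    host = parts.hostname
    if not host or not host.endswith(".cdn53"):
        raise ValueError("not a .cdn53 URL")
    real_domain = host[: -len(".cdn53")]
    # The path could be encoded as extra DNS labels; this sketch keeps it
    # simple and returns the base record name plus the path separately.
    return "_cdn53." + real_domain, parts.path or "/"

print(cdn53_query_name("http://zorinaq.com.cdn53/foo/bar"))
# → ('_cdn53.zorinaq.com', '/foo/bar')
```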
A system like CDN53 has quite a few advantages over HTTP.
"This blew my mind. Why the f*ck isn't this being done yet?" —Reddit comment
An idea hit me: email is such a pervasive tool, used by so many people and
supported by so many software stacks, that it is the ultimate vendor-neutral
platform for building programmatic services between applications and persons:
automatic exchange of contact information, PGP keys, Bitcoin addresses,
automation of two-factor authentication, and much more!
Here is an example: imagine you sign up on a web forum as
email@example.com. The site could send you a "special hidden email" requesting
your preferred avatar image, nickname, language, and timezone. Your mailbox
automatically replies (like a vacation autoresponder) with the information
formatted in a specific way, so that there is no need to re-enter the same
information over and over on every web forum! (Like
Gravatar, but decentralized.)
This would happen completely transparently, behind the scenes, without you
even seeing the automatic email exchange.
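The request/auto-reply exchange above can be sketched in a few lines. The header name (X-Programmable-Email) and the key/value body format are hypothetical; the post does not define a wire format.

```python
import email
from email.message import EmailMessage

# Hypothetical profile store and wire format for the "special hidden
# email" exchange; both are illustrative assumptions, not a standard.
PROFILE = {"nickname": "mrb", "language": "en", "timezone": "UTC-8"}

def auto_reply(raw_request: bytes):
    """Return an automatic reply if the incoming mail is a profile request."""
    msg = email.message_from_bytes(raw_request)
    if msg.get("X-Programmable-Email") != "profile-request":
        return None  # a normal human email: no automatic reply
    fields = [f.strip() for f in msg.get_payload().split(",")]
    reply = EmailMessage()
    reply["To"] = msg["From"]
    reply["X-Programmable-Email"] = "profile-response"
    reply.set_content(
        "\n".join(f"{f}: {PROFILE[f]}" for f in fields if f in PROFILE)
    )
    return reply

# A forum requests two profile fields; the mailbox answers by itself.
req = EmailMessage()
req["From"] = "forum@example.com"
req["X-Programmable-Email"] = "profile-request"
req.set_content("nickname, timezone")
reply = auto_reply(req.as_bytes())
print(reply.get_content())
```

Keying the exchange on a dedicated header is one way to keep these messages invisible to the user: a mail client could filter them out of the inbox entirely.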
Another example: encrypted email has failed to see wide adoption. Why? Because
none of the setup steps are automated, so its use is cumbersome. First
you have to ask the recipient if he even knows what PGP is, then whether he
wants to use encrypted email at all, and where his key can be obtained (key
servers help, but not always). Instead, imagine if the first time you composed
an email to a new friend and hit "send", the email application first sent a
"special hidden email" to your friend asking for his PGP key. Your friend's
mailbox automatically replies "here is my key" (or "no key was configured").
And your mailbox receives the attached key, uses it to encrypt the email you
composed, and saves it for future use! With email encryption negotiated
automatically like that, its use would be far more widespread.
This concept is what I call programmable email: email requests sent and replied to, automatically, in order to exchange personal information in a well-defined format without relying on a central database. In other words, this is a decentralized application programming interface (API) to personal information.
In the last hour, around November 19 00:30 UTC, the value of a single Bitcoin on the world's largest Bitcoin exchange, BTCChina, rose to 1,000 US dollars (USD), or 6092.00 Chinese yuan (CNY).
Let me repeat this: 1 bitcoin is worth $1,000.
As I write these words, the exchange rate is about ¥6900 or $1100 and continues to increase. Other exchanges are a bit behind ($900 on MtGox and $750 on Coinbase), but arbitrage is taking place, so they should reach that level within the next few hours, assuming no crash.
I have been telling people since 2010 that Bitcoin is a revolutionary technology: the world's first decentralized, censorship-resistant, inflation-resistant digital currency. With the overwhelmingly positive tone of today's first US Senate hearing on virtual currencies and Bitcoin in particular, Bitcoin's future has never been so promising.
I personally value a system like Bitcoin at no less than the size of the remittance market (Western Union, MoneyGram, etc.), or the size of the gold market, and possibly a lot more than that. This lower bound sets the worth of Bitcoin at very roughly $100 billion to $1,000 billion. With a maximum theoretical limit of 21 million bitcoins, this sets the worth of 1 bitcoin at $5,000 to $50,000. Bitcoin will remain very volatile in the near future. Sure, a crash that would bring it back down below $1,000 is possible or even likely, but it would not change its long-term prospect, a few years from now, of being valued between $5,000 and $50,000, possibly more.
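The back-of-envelope division behind those per-coin figures checks out:

```python
# Check of the valuation arithmetic above: total market worth divided
# by the 21 million coin supply cap gives the implied price per coin.
MAX_COINS = 21_000_000

for total_worth in (100e9, 1_000e9):  # $100 billion and $1,000 billion
    print(f"${total_worth:,.0f} total -> ${total_worth / MAX_COINS:,.0f}/BTC")
```

$100 billion implies about $4,762 per coin and $1,000 billion about $47,619, hence the rounded $5,000 to $50,000 range.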