A Blog from Gerbsman Partners Board of Intellectual Capital on “Maximizing Enterprise Value” for technology, life science, medical device and cleantech companies and their Intellectual Property
Andy Rubin at the Wired Business Conference with the new Essential Phone. Getty Images for Wired
Andy Rubin is best known as the guy who created Android, sold it to Google, and nurtured it into the most popular smartphone operating system on the planet.
But Rubin left Google back in 2014, and now he’s on his own.
His latest gig is Essential, a startup he runs as CEO that’s trying to become a new kind of gadgets company. It starts with a phone, called the Essential PH-1, and the plan is to expand into smart appliances and cars from there.
Rubin spoke Wednesday at the Wired Business Conference in New York and shed a bit more light on Essential’s plans. After the Essential Phone launches this summer, the company plans to release Home, a voice-controlled hub for all the connected appliances in your house. Rubin claims Home will be compatible with a wide variety of smart home platforms, ranging from Apple’s HomeKit to Samsung’s SmartThings, even though it would take a wild level of technical wizardry to pull that off, and many are skeptical he can. He calls this new platform Ambient OS.
Beyond that, Rubin teased that he’d like Essential to tackle the car, which is increasingly coming into focus as an area of growth for tech companies.
And questions remain about how the Essential Phone, which costs $699, can find success in a market dominated by Apple and Samsung.
Business Insider and some other members of the press spoke with Rubin following his Wired talk. Below is a transcript of that conversation, which has been edited for length and clarity. (I’ve labeled each journalist’s question as just “Question” since so many people were in the room asking questions. I put my name on the questions I asked.)
Q&A with Andy Rubin, CEO of Essential
Steve Kovach: I want to talk more about Ambient OS. You were talking a lot about how you’re really confident you’re going to be able to stitch all these various platforms together.
Andy Rubin: I didn’t say I was confident. I’m definitely going for it.
Kovach: If I’m understanding it correctly, especially with Apple, it’s actually impossible. What they allow people to build into now doesn’t allow what you want to do. Does this thing fall apart if they say no to you?
Rubin: You have to understand this approach. There’s a client and a server. And what Apple has with HomeKit is a bunch of individual consumer electronics companies enabling HomeKit with their products. I don’t know what the percentages are, but they don’t all only speak HomeKit. They speak a whole lot of other stuff as well. And what Apple is trying to do is trying to be the screen that drives these things. And that’s excluding anybody in “Android Land” or Windows from driving those things. So the natural effect will be for those companies to support other products as well, and they’re the ones that are plugging into Apple’s APIs. So the trick that I talked about on stage is: I can produce the same APIs. And I can call it Essential Kit. And those same exact APIs that someone has already developed for their Sonos thing or whatever this point product is, I’m compatible with.
Kovach: But isn’t that just another product like Samsung’s SmartThings?
Rubin: No, no, no, no. You know what this is? This is [like] Windows emulation [on a Mac]. This is Windows emulation for IoT. APIs for all these people who are building these islands. And if I emulate eight things and turn it on, I control 100,000 devices.
Kovach: And have you been able to do that yet in testing?
Rubin: I haven’t launched a product. I’m teasing a product, but it’s going to be awesome. These are all forward-looking statements.
Question: How far in advance are you teasing?
A rendering of what Essential Home will look like. Essential
Rubin: We have round LCDs — big ones. What’s after that is basically everything that’s in a smartphone. Right? There’s a bunch of cool things about starting a company today. I have a system in my lobby where I can print badges for people. There’s some startup company whose job it is to do lobby registration now. And when I used to start companies, those guys didn’t exist. But the other thing that happened, obviously, is smartphones have driven the supply base, based on the volume of the component tree of smartphones. And you’ll find those things going into a lot of products like these home assistant products. So it’s kind of a new era as far as leveraging the economies of scale of smartphones into these other products.
Question: So Essential Home is a touch interface. Is it also a microphone?
Rubin: Yeah it has far-field speech recognition. It has an array of microphones.
Question: Is there any plan to add video chat to something like that?
Rubin: Really good question. So once you do this job of bridging these islands, you kind of rise above all these other UIs, and you become a kind of holistic UI for every other product that might be in your life. So if you think of it purely from a UI perspective: Who is your UI developer? I actually think developing for smartphones is too difficult. It’s almost like you have to go to school to learn how to be an iOS developer or learn how to be an Android developer. The good ones have four or five years of experience, and the industry is not that old. So the reason we created a new OS is to basically solve the UI problem and redefine the definition of who a developer is. I want the guy who owns the home to be a developer, in some regard. I can tell you today, there’s a $13 billion industry of Crestron or AMX or Control 4, and they’re drilling holes in your wall and installing screens in your home. That’s an outdated approach. But the guys that are doing the UIs for those are the same guys that are drilling the holes in your wall. There’s this whole installer thing with these high-end homes which is not a mass-market consumer value proposition. So I need to change who the installer is. And I think we’ve built enough technology for a consumer to kind of do a drag and drop.
Question: There’s this argument that’s been out there that innovation in smartphones has peaked, that they’ve already gotten so good and can do so many things. Where do you see things going? Where does it go from here?
The Essential Phone. Essential
Rubin: When there’s this duopoly with these two guys owning 40% of the market, this complacency sets in where people are like, “Oh what they’re building is good enough. I’ll just go to them.” And that’s the perfect time to start a company like this, when people are complacent and it needs to be disrupted. And the real answer is you guys and the consumers need to tell me if there’s enough new innovation in [the Essential Phone].
I think the 360 camera and the magnetic accessory bus is a pretty good example of the innovation we’re thinking about. And there’s gonna be a string of those things. Let me broadly position this: In the era where smartphones were new and everyone was upgrading from their feature phone to their smartphone for the first time, the product cycle was every six months. There’d be some new thing coming out, everyone’s excited, there was a bubble kind of feeling that you were involved in something completely new and exciting. And then once everybody who wanted a smartphone got one, we’re in a saturated market — at least in the first world. In saturated markets, the upgrade cycle is every 24 months. And the problem with the 24-month cycle, which happens to snap to the carriers’ [ownership] of the consumer, is the consumer doesn’t get to see the innovation. It’s still happening in the background, but it happens every 24 months in these very lumpy onstage announcements. I think there’s a way (and it’s the reason we built this magnetic connector) to continuously produce innovation and show it to the consumer in real time. It’s almost like software updates for hardware.
Rubin and his Essential Phone. Getty Images for Wired
Question: Explain how the Essential Phone is different from a modular phone. From the consumer point of view, it’s, “I’m getting this phone and I can snap a camera on and I can snap on a better battery.” Does it matter if it’s magnetic or not?
Rubin: That’s a good question. It’s two things. It’s kind of the core to the way we designed this from a product design perspective. The first one is what’s “modular.” [Google] Ara was the definition of modular, which is you can remove a core component of the phone, like its processor, and replace it with a faster one. We’re not doing that. You buy a phone, the phone works great as a phone. We’re adding stuff onto it. So that’s why I prefer accessory bus as an example. So that’s modular versus accessory.
Now, connectors, in my view, are dumb because they get outdated. So a wireless connector is the holy grail. We’re close to that. We transmit power between two pins and everything else is wireless. Actually, the technology we’re using is wireless USB 3.0. So it’s 10 gigabits a second of USB, and we’ve built these transceivers that do that. The benefit of having a connectorless connector is I don’t suffer from what Moto Mod suffered from, which is every phone they come out with in the future has to have that 33-pin connector in exactly the same location so all the accessories you’ve invested in as a consumer still work. So they’ve painted themselves into a corner. They can never change the industrial design of their next phone because it has to match all these accessories. Or they have to trick the consumer into throwing away all their accessories and getting the new one that fits this new thing.
A completely wireless thing means I can come out with a phone that’s invisible. And as long as it has this magnetic area on it I can use this legacy of accessories that I’ve purchased. Again, this is a pro-consumer brand. It’s not easy to articulate. We’re trying to do right by the consumer where they don’t have to throw away their stuff every time there’s a connector change. Or get some weird dongle. True story: I went and bought one of those beautiful new MacBooks with the OLED Touch Bar. And that’s when they changed to the USB-C thing. And in the IT department in my company I needed to plug in to Ethernet to get the certificate for the new laptop, and I went to the Apple Store and I said, “Do you have a USB-C to Ethernet dongle?” And they said, “Oh no, we don’t have that yet.” So I had to buy a USB-C to Thunderbolt dongle, and a Thunderbolt to Ethernet dongle. So I had two dongles plugged into each other. And that’s the point where I’m just not feeling too good about being a consumer of those products.
Question: Based on the conversations you had today and at the Code Conference, Essential is much more than just a phone company. How do you find that your brand is going to track these consumers?
Rubin: It’s anti-walled garden. We chose Android because that’s a big component of that. We have a team of engineers, a lot of them, doing the job of other people to make our products work with theirs. These other companies, especially the walled gardens, they’re sitting here with their ecosystem and they expect people to come to them. And they get to be the toll gate guys and say “yes” or “no.” So we’re actively going out and making our products work with other people’s products because we know that’s how our consumers want to live.
Kovach: You spoke a lot on stage today about home and the car as major new platforms, but you didn’t mention AR.
Rubin: There’s baby steps into AR, and then there’s all-in. Scoble’s all-in is the shower picture… so the glasses might come later. Cellphones have had augmented reality for a long time…
The real question is: ‘What is the end product?’ What is the developer going to build with augmented reality? And so far I’ve seen interactive media… movies and game-like movies, where you’re both a participant and a viewer, which I think is a little too mixed reality for me. There’s lean-back, where you’re a consumer of this stuff and it just happens, or you’re a participant, like a game. The mixed part of it hasn’t been proven yet.
I think when consumers are ready to wear things, whether it’s a motorcycle helmet that overlays a map… or if it’s some goggles that they’ll use for a board game… in the end for these big things I think…
One of the problems is the price. It’s just crazy. It’s not ready for prime time. There will be a day where you might have a head mounted display and it costs $199, and you just plug it into your cellphone. And it won’t be ‘I’m wearing this 24 hours a day.’ It’ll be, ‘It’s time to sit down and play Monopoly with the family or something.’ It actually might be more social than what you would do with VR.
Question: Is that why you started the 360 camera? Because it’s a taste of that?
Rubin: This is all speculation, but I’m hoping there’s going to be a format change in the future. I think I can kind of move the needle a little bit in that format change by taking the world’s largest mass-market product and adding something onto it, rather than trying to create something completely new. So it’s more of a slipstream approach.
Please see below: a fashion show on Saturday, June 17, at the Marriott Marquis in Washington, DC, for the benefit of “the Central Mission” and to aid in its work for the homeless in the area.
Programmable blockchains in context: Ethereum’s future, by Vinay Gupta
By the end of this article, you’re going to understand blockchains in general (and Ethereum, a next-generation blockchain platform, in particular) well enough to decide what they mean to your life.
Ethereum brings up strong emotions. Some have compared it to SkyNet, the distributed artificial intelligence of the Terminator movies. Others once suggested the entire thing is a pipe dream. The network has been up for a few months now, and is showing no signs of hostile self-awareness — or total collapse.
But if you’re not terribly technical, or technical in a different field, it’s easy to stare at all this stuff and think “I’ll get around to this later,” or to ignore it until the Guardian does a nice feature (e.g., its “Imogen Heap: saviour of the music industry?” article).
But, in truth, it’s not that difficult to understand Ethereum, blockchains, Bitcoin and all the rest — at least the implications for people just going about their daily business, living their lives. Even a programmer who wants a clear picture can get a good enough model of how it all fits together fairly easily. Blockchain explainers usually focus on some very clever low-level details like mining, but that stuff really doesn’t help people (other than implementers) understand what is going on. Rather, let’s look at how the blockchains fit into the more general story about how computers impact society.
As is so often the case, to understand the present, we have to start in the past: blockchains are the third act of the play, and we are just at the beginning of that third act. So we must recap.
SQL: Yesterday’s best idea
The actual blockchain story starts in the 1970s when the database as we currently know it was created: the relational model, SQL, big racks of spinning tape drives, all of that stuff. If you’re imagining big white rooms with expensive beige monoliths watched over by men in ties, you’re in the right corner of history. In the age of Big Iron, big organizations paid big bucks to IBM and the rest for big databases and put all their most precious data assets in these systems: their institutional memory and customer relationships. The SQL language which powers the vast majority of the content management systems which run the web was originally a command language for tape drives. Fixed field lengths — a bit like the 140 character limit on tweets — originally served to let impatient programs fast forward tapes a known distance at super high speed to put the tape head exactly where the next record would begin. This was all going on round about the time I was born — it’s history, but it’s not yet ancient history.
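To make that tape-seek arithmetic concrete, here is a minimal sketch in Python (field widths are invented for illustration) of how fixed-width records let a program jump straight to record N: the offset is just N times the record length, no scanning required.

```python
import io

# Invented field widths, purely for illustration.
NAME_LEN, BALANCE_LEN = 20, 10
RECORD_LEN = NAME_LEN + BALANCE_LEN

def write_record(f, name, balance):
    # Pad each field to a fixed width so every record is the same length.
    f.write(name.ljust(NAME_LEN).encode() + str(balance).rjust(BALANCE_LEN).encode())

def read_record(f, n):
    # Record n starts at byte n * RECORD_LEN: one seek, no scanning.
    f.seek(n * RECORD_LEN)
    raw = f.read(RECORD_LEN)
    return raw[:NAME_LEN].decode().rstrip(), int(raw[NAME_LEN:].decode())

tape = io.BytesIO()  # stand-in for the tape drive
for name, balance in [("alice", 100), ("bob", 250), ("carol", 75)]:
    write_record(tape, name, balance)

print(read_record(tape, 2))  # ('carol', 75), found by arithmetic, not by search
```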
At a higher, more semantic level, a subtle distortion in how we perceive reality took hold: things that were hard to represent in databases became alternately devalued and fetishized. Years passed as people struggled to get the real world into databases using knowledge management, the semantic web, and many other abstractions. Not everything fit, but we ran society on these tools anyway. The things which did not fit cleanly in databases got marginalized, and life went on. Once in a while a technical counter-current would take hold and try to push back on the tyranny of the database, but the general trend held firm: if it does not fit in the database, it does not exist.
You may not think you know this world of databases, but you live in it. Every time you see a paper form with squares indicating one letter per box, you are interacting with a database. Every time you use a web site, there’s a database (or more likely an entire mess of them) lurking just under the surface. Amazon, Facebook, all of that — it’s all databases. Every time a customer service assistant shrugs and says “computer says no” or an organization acts in crazy, inflexible ways, odds are there’s a database underneath which has a limited, rigid view of reality and it’s simply too expensive to fix the software to make the organization more intelligent. We live in these boxes, as pervasive as oxygen, and as inflexible as punched cards.
Documents and the World Wide Web
The second act starts with the arrival of Tim Berners-Lee and the advent of the web. Actually, it starts just a hair before his arrival. In the late 1980s and early 1990s we get serious about computer networking. Protocols like Telnet, Gopher, Usenet and email itself provide a user interface to the spanning arcs of the early internet, but it’s not until the 1990s that we get mass adoption of networked computers, leading incrementally to me typing this on Google Docs, and you reading it in a web browser. This process of joining the dots — “the network is the computer,” as Sun Microsystems used to say — was fast. In the early 1990s, vast numbers of machines already existed, but they were largely stand-alone devices, or connected to a few hundred machines on a university campus without much of a window into the outside world. The software and hardware to do networking everywhere — the network of networks, the internet — took a long time to create, and then spread like wildfire. The small pieces became loosely joined, then tightly coupled into the network we know today. We are still riding the technological wave as the network gets smarter, smaller and cheaper and starts showing up in things like our lightbulbs under names like “the Internet of Things.”
Bureaucracy and machines
But the databases and the networks never really learn to get on. The Big Iron in the machine rooms and the myriads of tiny little personal computers scattered over the internet like dew on a cobweb could not find a common world-model which allowed them to interoperate smoothly. Interacting with a single database is easy enough: forms and web applications of the kinds you use every day. But the hard problem is getting databases working together, invisibly, for our benefit, or getting the databases to interact smoothly with processes running on our own laptops.
Those technical problems are usually masked by bureaucracy, but we experience their impact every single day of our lives. It’s the devil’s own job getting two large organizations working together on your behalf, and deep down, that’s a software issue. Perhaps you want your car insurance company to get access to a police report about your car getting broken into. In all probability you will have to get the data out of one database in the form of a handful of printouts, and then mail them to the company yourself: there’s no real connectivity in the systems. You can’t drive the process from your laptop, except by the dumb process of filling in forms. There’s no sense of using real computers to do things, only computers abused as expensive paper simulators. Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge, and your computer doesn’t store your data so it can do all this work for you. Instead it’s just something you fill in forms on. Why are we under-utilizing all this potential so badly?
The Philosophy of Data
The answer, as always, is in our own heads. The organizational assumptions about the world which are baked into computer systems are almost impossible to translate. The human factors — the mindsets which generate the software — don’t fit together. Each enterprise builds their computer system in their own image, and these images disagree about what is vital and what is incidental, and truth does not flow between them easily. When we need to translate from one world model to another, we put humans in the process, and we’re back to processes which mirror filling in paper forms rather than genuinely digital cooperation. The result is a world in which all of our institutions seem to be at sixes and sevens, never quite on the same page, and things that we need in our ordinary lives seem to keep falling between the cracks, and every process requires filling in the same damn name and address data, twenty times a day, and more often if you are moving house. How often do you shop from Amazon rather than some more specialized store just because they know where you live?
There are lots of other factors that maintain the gap between the theoretical potential of our computers and our everyday use of them — technological acceleration, constant change, the sheer expense of writing software. But it all boils down to mindset in the end. Although it looks like ones and zeros, software “architects” are swinging around budgets you could use to build a skyscraper, and changing something late in a project like that has costs similar to tearing down a half-made building. Rows upon rows upon rows of expensive engineers throwing away months (or years) of work: the software freezes in place, and the world moves on. Everything is always slightly broken.
Over and over again, we go back to paper and metaphors from the age of paper because we cannot get the software right, and the core to that problem is that we managed to network the computers in the 1990s, but we never did figure out how to really network the databases and get them all working together.
There are three classic models for how people try and get their networks and databases working together smoothly.
First Paradigm: the diverse peers model
The first approach is just to directly connect machines together, and work out the lumps as you go. You take Machine A, connect it over a network to Machine B, and fire transactions down the wire. In theory, Machine B catches them, writes them into its own database, and the job is done. In practice, there are a few problems here.
The epistemological problem is quite severe. Databases, as commonly deployed in our organizations, store facts. If the database says the stock level is 31 units, that’s the truth for the whole of the organization, except perhaps for the guy who goes down to the shelf and counts them, finds the real count is 29, and puts that in the database as a correction. The database is institutional reality.
But when data leaves one database and flows into another, it crosses an organizational boundary. For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the communique is a statement of opinion. Consider an order: the order is a request, but it does not become a confirmed fact until the payment clears past the point of a chargeback. A company may believe an order has occurred, but this is a speculation about someone else’s intentions until cold hard cash (or bitcoin) clears all doubts. Up until that point, an “ordered in error” signal can reset the whole process. An order exists as a hypothesis until a cash payment clears it from the speculative buffer it lives in and places it firmly in the fixed past as a matter of factual record: this order existed, was shipped, was accepted, and we were paid for it.
But until then, the order is just a speculation.
The shifting significance of a simple request for new paint brushes flowing from one organization to another — a statement of intention clearing into a statement of fact — is not something we would normally think about closely. But when we start to consider how much of the world, of our lives, runs on systems that work much like this — food supply chains, electrical grids, tax, education, medical systems — it’s odd that these systems don’t come to our notice more often.
In fact, we only notice them when something goes wrong.
The second problem with peer connection is the sheer instability of each peer connection. A little change to the software on one end or the other, and bugs are introduced. Subtle bugs which may not become visible until the data transferred has wormed its way deep into Organization B’s internal records. A typical instance: an order was always placed in lots of 12, and processed as one box. But for some reason, one day an order is placed for 13, and somewhere far inside of Organization B, a stock handling spreadsheet crashes. There’s no way to ship 1.083 of a box, and The Machine Stops.
This instability is compounded by another factor: the need to translate the philosophical assumptions — in fact, the corporate epistemology — of one organization into another organization’s private internal language. Say we are discussing booking a hotel and a car rental as a single action: the hotel wants to think of customers as credit card numbers, but the car rental office wants to think of customers as driving licenses. A small error results in customer misidentification, and comedy ensues as customers are mistakenly asked for their driving license numbers to confirm hotel room bookings — but all anybody knows of the error is “computer says no” when customers read back their credit card details, with no idea that the computer now wants something else.
If you think this is a silly example, consider that NASA lost the Mars Climate Orbiter in 1999 because one team was working in imperial units and the other in metric. These things go wrong all the time.
But over a wire, between two commercial organizations, one can’t simply look at the other guy’s source code to figure out the error. Every time two organizations meet and want to automate their back end connections, all these issues have to be hashed out by hand. It’s difficult, and expensive, and error prone enough that in practice companies would often rather use fax machines. This is absurd, but this is how the world really works today.
Of course, there are attempts to clarify this mess — to introduce standards and code reusability to help streamline these operations and make business interoperability a fact. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to assist your integration processes.
Needless to say, the reason there are so many standards is because none of them work properly.
Finally, there is the problem of scaling collaboration. Say that two of us have paid the upfront costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union. And now a fourth, and a fifth. By five partners, we have 10 connections to debug. Six, seven… by ten the number is 45. The cost of collaboration keeps going up for each new partner as they join our network, and the result is small pools of collaboration which just will not grow.
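The arithmetic behind those numbers is the standard pairwise-connection count: with $n$ partners, every pair needs its own debugged link.

$$\binom{n}{2} = \frac{n(n-1)}{2}, \qquad \binom{5}{2} = 10, \quad \binom{6}{2} = 15, \quad \binom{10}{2} = 45.$$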
Remember, this isn’t just an abstract problem — this is banking, this is finance, medicine, electrical grids, food supplies, and the government.
Our computers are a mess.
Hub and Spoke: meet the new boss
One common answer to this quandary is to cut through the exponential (well, quadratic) complexity of writing software to directly connect peers, and simply put somebody in charge. There are basically two approaches to this problem.
The first is that we pick an organization — VISA would be typical — and all agree that we will connect to VISA using their standard interface. Each organization has to get just a single connector right, and VISA takes 1% off the top, and makes sure that everything clears properly.
There are a few problems with this approach, but they can be summarized with the term “natural monopoly.” The business of being a hub or a platform for others is quite literally a license to print money for anybody who achieves incumbent status in such a position. Political power, in the form of setting terms of service and negotiating with regulators, can be exerted, and overall, an arrangement that might have started as an effort to create a neutral backbone rapidly turns into everyone being clients of an all-powerful behemoth without which one simply cannot do business.
This pattern recurs again and again in different industries, at different levels of complexity and scale, from railroads and fibre optics and runway allocation in airports through to liquidity management in financial institutions.
In the database context, there is a subtle form of the problem: platform economics. If the “hub and spoke” model is that everybody runs Oracle or Windows Servers or some other such system, and then relies on these boxes to connect to each other flawlessly because, after all, they are clone-like peas in a pod, we have the same basic economic proposition as before: to be a member of the network, you rely on an intermediary who charges whatever they like for the privilege of your membership, with this tax disguised as a technical cost.
VISA gets 1% or more of a very sizeable fraction of the world’s transactions with this game. If you ever wonder what the economic upside of the blockchain business might be, just have a think about how big that number is.
Protocols — if you can find them
The protocol is the ultimate “unicorn.” Not a company that is worth a billion dollars two years after it was founded, but an idea so good that it gets people to stop arguing about how to do things, and just get on with it and do them.
The internet runs on a handful of these things: Sir Tim Berners-Lee’s HTTP and HTML standards have worked like magic, although of course he simply lit the fire, and endless numbers of technologists gave us the wondrous mess we know and love now. SMTP, POP and IMAP power our email. BGP sorts out our big routers. There are a few dozen more, increasingly esoteric, which run most of the open systems we have.
A common complaint about tools like Gchat or Slack is that they do jobs which have perfectly great open protocols in play (IRC or XMPP) but do not actually speak those protocols. The result is that there is no way to interoperate between Slack and IRC or Skype or anything else, without going through hacked together gateways that may or may not offer solid system performance. The result is a degradation of the technical ecosystem into a series of walled gardens, owned by different companies, and subject to the whims of the market.
Imagine how much Wikipedia would suck by now if it were a startup pushing hard to monetize its user base and make its investors their money back.
But when the protocol gambit works, what’s created is huge genuine wealth — not money, but actual wealth — as the world is improved by things that just work together nicely. Of course, SOAP and JSON-RPC and all the rest aspire to support the formation of protocols, or even to be protocols, but the definitional semantics of each field of endeavor tend to create an inherent complexity which leads back towards hub and spoke or other models.
Blockchains — a fourth way?
You’ve heard people talking about bitcoin. Missionary chaps in pubs absolutely sure that something fundamental has changed, throwing around terms like “Central Bank of the Internet” and discussing the end of the nation state. Sharply dressed women on podcasts talking about the amazing future potential. But what’s actually underneath all this? What is the technology, separated from the politics and the future potential?
What’s underneath it is an alternative to getting databases synchronized by printing out wads of paper and walking it around. Let’s think about paper cash for a moment: I take a wad of paper from one bank to another, and the value moves from one bank account — one computer system — to another. Computer as paper simulator, once again. Bitcoin simply takes a paper-based process, the fundamental representation of cash, and replaces it with a digital system: digital cash. In this sense, you could see bitcoin as just another paper simulator, but it’s not.
Bitcoin took the paper out of that system, and replaced it with a stable agreement (“consensus”) between all the computers in the bitcoin network about the current value of all the accounts involved in a transaction. It did this with a genuinely protocol-style solution: there’s no middleman extracting rents, and no exponential system complexity from a myriad of different connectors. The blockchain architecture is essentially a protocol which works as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center which might choose to extract economic rents. This is really a good, good thing. The system has some magic properties — same agreed data on all nodes, eventually — which go beyond paper and beyond databases. We call it “distributed consensus” but that’s just a fancy way of saying that everybody agrees, in the end, about what truth (in your bank balance, in your contracts) is.
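To give a flavor of the tamper-evidence underneath that consensus, here is a toy sketch (nothing like the real Bitcoin protocol: no mining, no networking, and the field names are invented) in which each block commits to the hash of its predecessor, so any two nodes holding identical chains necessarily agree on identical histories:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form; changing any field changes the hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each block commits to its predecessor's hash, chaining history together.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])

# Rewriting history breaks the chain: every later prev_hash stops matching.
chain[0]["transactions"][0]["amount"] = 500
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # False: tampering is visible
```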
This is kind of a big deal.
In fact, it breaks with 40 years of experience of connecting computers together to do things. As a fundamental technique, blockchains are new. And in this branch of technology, genuinely new ideas move billions of dollars and set the direction of industries for decades. They are rare.
Bitcoin lets you move value from one account to another without having to either move cash or go through the baroque wire-transfer processes banks use to shuffle numbers, because the underlying database technology is new, modern and better: better services through better technology. Just like cash, it is anonymous and decentralized, and bitcoin bakes in some monetary policy and issues the cash itself: a “decentralized bank.” A central bank of the internet, if you will.
Once you think of cash as a special kind of form, and cash transactions as paper shuffling to move stuff around in databases, it’s pretty easy to see bitcoin clearly.
It’s not an exaggeration to say that Bitcoin has started us on the way out of a 40-year-deep hole created by the limits of our database technology. Whether it can effect real change at a fiscal level remains to be seen.
Ok, so what about Ethereum?
Ethereum takes this “beyond the paper metaphor” approach to getting databases to work together even further than bitcoin. Rather than replacing cash, Ethereum presents a new model, a fourth way. You push the data into Ethereum, and it’s bound permanently in public storage (the “blockchain”). All the organizations that need to access that information — from your cousin to your government — can see it. Ethereum seeks to replace all the other places where you have to fill in forms to get computers to work together. This might seem a little odd at first — after all, you don’t want your health records in such a system — and that’s right, you don’t. If you were going to store health records online, you’d need to protect them with an additional layer of encryption to ensure they couldn’t be read — and we should be doing this anyway. Applying appropriate encryption to private data is not yet common practice, which is why you keep hearing about these enormous hacks and leaks.
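As a concrete example of that extra layer, a record can be encrypted on the client before it ever touches public storage. Here is a minimal sketch using the Python `cryptography` library (the blockchain itself is stood in for by a plain list):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The patient holds this key; without it, the published blob is unreadable.
key = Fernet.generate_key()
f = Fernet(key)

public_storage = []  # stand-in for a public blockchain anyone can read

record = b'{"patient": "alice", "diagnosis": "..."}'
public_storage.append(f.encrypt(record))  # only ciphertext ever goes public

# Everyone can see the blob; only the key-holder can recover the record.
print(f.decrypt(public_storage[0]))
```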
So what kinds of things would you like as public data? Let’s start with some obvious things: your domain names. You own a domain name for your business, and people need to know that your business owns that domain name — not somebody else. That unique system of names is how we navigate the internet as a whole: that’s a clear example of something we want in a permanent public database. We’d also like it if governments didn’t keep editing those public records and taking domains offline based on their local laws: if the internet is a global public good, it’s annoying to have governments constantly poking holes in it by censoring things they don’t like.
Crowdfunding as a test bed
Another good example is crowdfunding for projects, as done by places like KickStarter, IndieGoGo and so on. In these systems, somebody puts a project online and gathers funds, and there’s a public record of how much funding has flowed in. If it’s over a certain number, the project goes live — and we’d like them to document what they did with the money. This is a very important step: we want them to be accountable for the funds they have taken in, and if the funds aren’t sufficient, we want them returned to where they came from. We have a global public good: the ability for people to organize and fund projects together. Transparency really helps, so this is a natural place for a blockchain.
So let’s think about the crowdfunding example in more detail. In a sense, giving money to a crowdfunding project is a simple contract:
If the account balance is greater than $10,000, then fund the project, and if I contributed more than $50, send me a t-shirt. Otherwise, return all the money.
If you represent this simple agreement as actual detailed code, you get something like this. This is a simple example of a Smart Contract, and smart contracts are one of the most powerful aspects of the Ethereum system.
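The original post linked to real contract code at this point; as a stand-in, here is the same agreement sketched in Python pseudocode (real Ethereum contracts are written in a contract language such as Solidity; the thresholds come from the plain-English version above, and the helper names are invented):

```python
GOAL = 10_000    # fund the project only if contributions exceed this
TSHIRT_MIN = 50  # contributors above this amount get a t-shirt

# Stubs standing in for real value transfers on the platform.
def fund_project(amount): print(f"project funded with ${amount}")
def send_tshirt(to):      print(f"t-shirt sent to {to}")
def refund(to, amount):   print(f"${amount} returned to {to}")

class CrowdfundContract:
    def __init__(self):
        self.contributions = {}

    def contribute(self, sender, amount):
        self.contributions[sender] = self.contributions.get(sender, 0) + amount

    def finalize(self):
        balance = sum(self.contributions.values())
        if balance > GOAL:
            fund_project(balance)  # goal met: release funds to the project
            for sender, amount in self.contributions.items():
                if amount > TSHIRT_MIN:
                    send_tshirt(sender)
        else:
            for sender, amount in self.contributions.items():
                refund(sender, amount)  # goal missed: everyone is made whole

contract = CrowdfundContract()
contract.contribute("alice", 9_000)
contract.contribute("bob", 1_500)
contract.finalize()  # 10,500 > 10,000: project funded, both get t-shirts
```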
Crowdfunding potentially gives us access to risk capital backed by deep technical intelligence, and invested to create real political change. If, say, Elon Musk could access the capital reserves of everybody who believes in what he is doing, painlessly selling (say) shares in a future Mars City, would that be good or bad for the future of humanity?
Building the mechanisms to enable this kind of mass collective action might be critical to our future. (See, e.g., the “Coase’s Blockchain” video on YouTube.)
Smart Contracts
The implementation layer of all these fancy dreams is pretty simple: a smart contract envisages taking certain kinds of simple paper agreements and representing them as software. You can’t easily imagine doing this for house painting — “is the house painted properly?” is not something a computer can do — yet. But for contracts which are mainly about digital things — think cell phone contracts or airline tickets or similar, which rely on computers to provide service or send you an e-ticket — software already represents these contracts pretty well in nearly all cases. Very occasionally something goes wrong and all the legalese in English gets activated, and a human judge gets involved in a lawsuit, but that’s a very rare exception indeed. Mostly we deal with web sites, and show the people in the system who help us (like airline gate staff) proof that we’ve completed the transaction with the computers, for example by showing them our boarding passes. We go about our business by filling in some forms and computers go out and sort it all out for us, no humans required except when something goes wrong.
To make that all possible today, companies offering those kinds of services maintain their own technical infrastructure — dotcom money pays for fleets of engineers and server farms and physical security around these assets. You can buy off-the-shelf services from people that will set you up an e-commerce website or some other simple case, but basically this kind of sophistication is the domain of the big companies because of all the overheads and technical skill you need before you can have a computer system take money and offer services.
It’s just hard and expensive. If you are starting a bank or a new airline, software is a very significant part of your budget, and hiring a technical team is a major part of your staffing challenge.
Smart Contracts & the World Computer
So what Ethereum offers is a “smart contract platform” which takes a lot of that expensive, difficult stuff and automates it. It’s early days yet, so we can’t do everything, but we are seeing a surprising amount of capability even from the first version of the world’s first generally available smart contract platform.
So how does a smart contract platform work? Just like bitcoin, lots and lots of people run the software, and get a few tokens (ether) for doing it. Those computers in the network all work together and share a common database, called the blockchain. Bitcoin’s blockchain stores financial transactions. Ethereum’s blockchain stores smart contracts. You don’t rent space in a data center and hire a bunch of system administrators. Rather, you use the shared global resource, the “world computer” and the resources you put into the system go to the people whose computers make up this global resource. The system is fair and equitable.
Ethereum is open source software, and the Ethereum team maintains it (increasingly with help from lots of independent contributors and other companies too). Most of the web runs on open source software produced and maintained by similar teams: we know that open source software is a good way to produce and maintain global infrastructure. This makes sure that there’s no centralized body which can use its market power to do things like jack up the transaction fees to make big profits: open source software (and its slightly more puritan cousin, Free Software) helps keep these global public goods free and equitable for everybody.
The smart contracts themselves, which run on the Ethereum platform, are written in simple languages: not hard to learn for working programmers. There’s a learning curve, but it’s not different from things that working professionals do every few years as a matter of course. Smart contracts are typically short: 500 lines would be long. But because they leverage the huge power of cryptography and blockchains, because they operate across organizations and between individuals, there is enormous power in even relatively short programs.
So what do we mean by world computer? In essence, Ethereum simulates a perfect machine — a thing which could never exist in nature because of the laws of physics, but which can be simulated by a large enough computer network. The network’s size isn’t there to produce the fastest possible computer (although that may come later with blockchain scaling) but to produce a universal computer which is accessible from anywhere by anybody, and (critically!) which always gives the same results to everybody. It’s a global resource which stores answers and cannot be subverted, denied or censored (see the “From Cypherpunks to Blockchains” video on YouTube).
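That “same results to everybody” property is worth pinning down: every node runs the same code over the same agreed, ordered inputs, so every node computes the same state. A toy sketch of that replicated-execution idea (the transaction format is invented for illustration):

```python
def apply(state, tx):
    # A deterministic transition: same state + same tx -> same new state.
    state = dict(state)
    state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state

log = [  # the agreed, ordered transaction log is what consensus provides
    {"from": "alice", "to": "bob", "amount": 5},
    {"from": "bob", "to": "carol", "amount": 2},
]

# Any node, anywhere, replaying the same log reaches the identical state.
genesis = {"alice": 10, "bob": 0, "carol": 0}
node_a = node_b = genesis
for tx in log:
    node_a, node_b = apply(node_a, tx), apply(node_b, tx)
print(node_a == node_b)  # True: the "world computer" is this replay, everywhere
```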
We think this is kind of a big deal.
A smart contract can store records on who owns what. It can store promises to pay and promises to deliver without a middleman or exposing people to the risk of fraud. It can automatically move funds in accordance with instructions given long in the past, like a will or a futures contract. For pure digital assets there is no “counterparty risk,” because the value to be transferred can be locked into the contract when it is created, and released automatically when the conditions and terms are met: if the contract is clear, then fraud is impossible, because the program actually has real control of the assets involved, rather than requiring trustworthy middlemen like ATMs or car rental agents.
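A sketch of the counterparty-risk point: the value is locked into the contract when it is created, and afterwards only the programmed rule decides where it goes (Python pseudocode again, with invented names, standing in for a real contract language):

```python
class Escrow:
    def __init__(self, payer, payee, amount):
        # The value is locked inside the contract at creation time.
        self.payer, self.payee, self.locked = payer, payee, amount

    def settle(self, condition_met):
        # Only the programmed rule decides where the locked value goes.
        recipient = self.payee if condition_met else self.payer
        amount, self.locked = self.locked, 0
        return recipient, amount

deal = Escrow("buyer", "seller", 100)
print(deal.settle(condition_met=True))  # ('seller', 100): no middleman needed
```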
And this system runs globally, with tens and eventually hundreds of thousands of computers sharing the workload and, more importantly, backing up the cultural memory of who promised what to whom. Yes, fraud is still possible at the edge of the digital, but many kinds of outright banditry are likely to simply die out: you can check the blockchain and find out if the house has been sold twice, for example. Who really owns this bridge in Brooklyn? What happens if this loan defaults? All there, as clear as crystal, in a single shared global blockchain. That’s the plan, anyway.
Democratized access to the state of the art
All of this potentially takes the full power of modern technology and puts it into the hands of programmers who are working in an environment not much more complex than coding web sites. These simple programs are running on enormously powerful shared global infrastructure that can move value around and represent the ownership of property. That creates markets, registries like domain names, and many other things that we do not currently understand because they have not been built yet. When the web was invented to make it easy to publish documents for other people to see, nobody would have guessed it would revolutionize every industry it touched, and change people’s personal lives through social networks, dating sites, and online education. Nobody would have guessed that Amazon could one day be bigger than Wal-Mart. It’s impossible to say for sure where smart contracts will go, but it’s hard not to look at the web, and dream.
Although an awful lot of esoteric computer science was required to create a programming environment that would let relatively ordinary web skills move around property inside of a secure global ecosystem, that work has been done. Although Ethereum is not yet a cakewalk to program, that’s largely an issue of documentation, training, and the gradual maturation of a technical ecosystem. The languages are written and are good: the debuggers take more time. But the heinous complexity of programming your own smart contract infrastructure is gone: smart contracts themselves are simpler than modern JavaScript, and nothing a web programmer will be scared of. The result is that we expect these tools to be everywhere fairly soon, as people start to want new services, and teams form to deliver them.
The Future?
I am excited precisely because we do not know what we have created, and more importantly, what you and your friends will create with it. My belief is that terms like “Bitcoin 2.0” and “Web 3.0” will be inadequate — it will be a new thing, with new ideas and new culture embedded in a new software platform. Each new medium changes the message: blogging brought long-form writing back, and then Twitter created an environment where brevity was not only the soul of wit, but by necessity its body also. Now we can represent simple agreements as free speech, as publication of an idea, and who knows where this leads.
Ethereum Frontier is a first step: it’s a platform for programmers to build services you might access through a web browser or a phone app. Later we’ll release Ethereum Metropolis, which will be a web-browser-like program, currently called Mist, that takes all the security and cryptography inherent in Ethereum and packages it nicely with a user interface that anybody can use. The recent releases of Mist showcase a secure wallet, and that’s just the start. The security offered by Mist is far stronger than what current e-commerce systems and phone apps have. In the medium term, contract production systems will be stand-alone, so nearly anybody can download a “distributed application builder,” load it up with their content and ideas, and upload it — for simple things, no code will be required, but the full underlying power of the network will be available. Think along the lines of an installation wizard, but instead of setting up your printer, you are configuring the terms of a smart contract for a loan: how much money, how long, what repayment rates. Click OK to approve!
If this sounds impossible, welcome to our challenge: the technology has gotten far, far ahead of our ability to explain or communicate the technology!
The World SUPER Computer?
We are not done innovating yet. In a little while — we’re talking a year or two — Ethereum Serenity will take the network to a whole new level. Right now, adding more computers to the Ethereum network makes it more secure, but not faster. We manage the limited speed of the network using Ether, a token which gives priority on the network, among other things. In the Serenity system, adding more computers to the network makes it faster, and this will finally allow us to build systems which really are internet-scale: hundreds of millions of computers working together to do jobs we collectively need done. Today we might guess at protein folding or genomics or AI, but who’s to say what uses will be found for such brilliant software.
I hope this non-technical primer on the Ethereum ecosystem has been useful, and as soon as we have a user friendly version of the system available for general use, you’ll be the first to know!
Silicon Valley-backed meal kit provider to test $2B valuation in IPO
Meal kit provider Blue Apron filed on Thursday to go public in an offering which will test whether it can live up to its $2 billion private valuation.
The N.Y.C.-based business could get an edge over numerous competitors in the food and meal delivery industry with a successful offering. It has set a preliminary goal of raising $100 million in the offering.
Menlo Park-based Bessemer Venture Partners is the company’s biggest shareholder, with a nearly 24 percent stake. San Francisco-based First Round Capital is the next biggest venture stakeholder, owning about 10.5 percent of its shares. Blue Apron has raised nearly $200 million in funding since it was founded in 2012.
The filing highlighted the losses and marketing costs of Blue Apron, which has shown signs of stalled growth.
Blue Apron’s net loss grew to $54.8 million last year, a 16 percent increase from 2015, according to the New York Times. Earnings before interest, taxes, depreciation and amortization (EBITDA) showed a loss of $43.6 million, up 32.5 percent from the year before.
Blue Apron’s marketing costs, as well as the rising cost of ingredients and a decreasing number of orders per customer, have also made business tough for the startup.
Still, Blue Apron maintains the title of “biggest player” in the sector, despite rising competition from the likes of Plated, also based in New York, and Berlin-based Hello Fresh, which operates stateside.
A trio of California purveyors in the meal-delivery space have sprung up as well, including San Francisco-based Sprig; Palo Alto-based Gobble, whose founder we featured in a podcast; and El Segundo-based Chef’d, which has partnered with magazines and Weight Watchers.
Maple, another rival, struggled and was eventually scooped up by Deliveroo, a UK-based food delivery company.
“One of the fundamental opportunities that we have is we’re able to take a lot of links out of the supply chain,” cofounder and CEO Matt Salzberg told the New York Business Journal after the company’s $135 million Series D announcement. “We can plan [customer meals] all the way back to the farm and take a lot of waste out of the system.”
Blue Apron plans to trade under the symbol “APRN.”
The company has tapped Goldman Sachs (NYSE: GS), Morgan Stanley (NYSE: MS), Citigroup Inc. (NYSE: C) and Barclays PLC to help with the IPO.
Anthony Noto is a multimedia journalist focused on venture capital and Silicon Alley startups. Based in New York for the Business Journals, he previously was a reporter at SourceMedia and The Deal LLC. He is a graduate of Rutgers University.