People keep asking me what I currently HODL, so here is my Cryptocurrency portfolio.
Brent Hoberman, co-founder of lastminute.com, has raised $60m to invest in technology start-ups across Europe through his new seed fund.
Firstminute Capital is backed by 23 founders of leading technology companies and has also received an undisclosed cornerstone investment from Atomico, the London-based venture capital fund run by Niklas Zennström, the co-founder of Skype.
Investors include the co-founders of some of Europe’s most successful tech companies, including Skype, Supercell, Betfair, Trulia, Skyscanner, Net-a-Porter and BlaBlaCar. The fund has also drawn in several family offices and a clutch of corporate executives, including Lord Davies, former chairman of Standard Chartered bank.
Firstminute will aim to make investments of up to $750,000 in a range of tech companies across Europe, focusing heavily on “deep tech” businesses in the fields of robotics, artificial intelligence and the internet of things.
Some people are very vocal that Europe is far behind the US in developing its start-up sector and has been slow to realise the benefits of a digital single market. I disagree: we have had great successes such as JUST EAT, TransferWise and Zoopla, among many others; we have strong early-stage investors such as LocalGlobe and Seedcamp; and top-tier venture capital is well represented by the likes of Index Ventures. Yes, we are far from rivalling Silicon Valley, but that is rapidly changing, and London especially has demonstrated its might.
Brent explains that Firstminute differs from other early-stage funds in that it can tap into a broad community of supportive entrepreneurs to offer advice. “The main difference is that we can be extremely helpful to founders based on the record of what this group has achieved,” he commented. He also said he was hopeful that London could continue to attract talented entrepreneurs from around the world: “We have some real positives in London: second-generation entrepreneurs, capital and corporates. We will have to do a lot wrong to stop that.”
As well as co-founding lastminute.com with Martha Lane Fox, Mr Hoberman co-founded Founders Forum, a private group of tech entrepreneurs, and Founders Factory, a venture builder working with corporates such as L'Oréal and Aviva.
Firstminute’s co-founder is Spencer Crawley, who will become head of investments and has also been involved with Founders Factory.
Long-serving Uber employees are multi-millionaires, on paper. But with no sign of an imminent IPO and a strict policy blocking most private share sales, they're stuck in limbo.
However, there’s a low-key scheme available to Uber shareholders who have worked there for at least four years: they can sell as much as 10 percent of their shares. A contact of mine at Uber told me, off the record, that the scheme was designed as an incentive to stop staff jumping ship. The seller is paid out over several months and must remain at Uber during that time, and officially employees are not allowed to discuss the programme externally, which is why my source asked to remain anonymous. I'm told the scheme caps share sales at below $10 million per employee, and that fewer than 200 of Uber's roughly 10,000 employee shareholders currently qualify. Apparently the scheme is for GM/Country Manager level and above, with those at that level holding an average of $13m at the current valuation, so in essence they will be getting around $1.3m in trickles. Not exactly FU money in itself, but certainly a welcome injection of capital for the individual, who may go on to invest it.
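The arithmetic behind that $1.3m figure is straightforward; here is a quick sketch using the numbers my source quoted (the $13m average holding is their estimate, and the payout period of six months is my placeholder, since they only said "several months"):

```python
def buyback_payout(holding_value, cap_fraction=0.10, payout_months=6):
    """Total sale allowed under the cap, and the rough monthly trickle.

    cap_fraction reflects the reported 10 percent limit;
    payout_months is a guess, as the source only said 'several months'.
    """
    total = holding_value * cap_fraction
    return total, total / payout_months

# Average GM/Country Manager holding quoted by the source: $13m
total, monthly = buyback_payout(13_000_000)
print(total)    # ~ $1.3m, matching the figure above
print(monthly)
```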
By working long term at a startup, you are essentially embracing short-term pain for long-term gain, often taking a pay cut that you feel is mitigated by stock options or equity. However, Uber is unique in that, because of the volume of capital it has raised, an exit may still be years away; an IPO would require a lot of work on the business's pain points to create a positive prospectus with the potential to achieve the market cap it needs. I am not so sure that Uber will IPO before autonomous cars arrive, and even if it does, that could still be years away. So you have an issue of very capable employees looking at other startups that have had exits, or even at Google employees who have become seriously wealthy.
With the exception of Snap (which probably had no choice but to IPO) and a few others, tech companies are waiting much longer to go public, and the huge funding available from private investors on favourable terms has made mergers a less attractive option. As a result, exits have been cut almost in half from their high in late 2015, according to the Bloomberg U.S. Startups Barometer, an index that tracks private markets.
The ride-hailing giant has raised more than $17 billion in cash and debt since its founding in 2009. It had more than $11 billion on its balance sheet as of June, the last time it disclosed the amount. But it’s spent aggressively since then, despite offloading its money pit in China to homegrown app Didi Chuxing.
Airbnb Inc. and Pinterest Inc. were founded around the same time as Uber, achieved valuations in excess of $10 billion and yet remain private. But the two companies have at times allowed their employees to sell shares to interested buyers and even facilitated some of those transactions. Uber, the world’s most valuable tech startup at $69 billion, has been more restrictive about who gets to buy its shares.
While Uber’s buyback approach can help it retain talent, it may also benefit the company’s bottom line. Using its own money, Uber purchases common stock for 25 to 35 percent less than the price of preferred shares from its most recent funding round; it can then turn around and sell shares at a premium in a subsequent round. Still, an employee who got their stock four years ago stands to make more than a 10-fold return on the sale.
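To make the spread concrete, here is a minimal sketch of that buyback economics, using a made-up preferred share price purely for illustration (only the 25-35 percent discount range comes from the reporting above):

```python
def buyback_spread(preferred_price, discount):
    """Gross spread per share Uber captures by buying common stock
    at a discount to the latest preferred price and effectively
    re-selling at (or above) that price in a later round."""
    buy_price = preferred_price * (1 - discount)
    return preferred_price - buy_price

# Hypothetical $100 preferred price, reported 25-35% discount band
for discount in (0.25, 0.35):
    print(round(buyback_spread(100.0, discount), 2))
```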
As some gold rushers watch their paper wealth soar, option holders face colossal tax bills on their stock, regardless of whether they can sell it. In late 2014, Uber began offering employees restricted stock units instead of options, which don't require paying taxes upfront, one of the people said. Employee demand to sell stock would be one factor that could ultimately motivate Uber to go public, but buybacks reduce some of the pressure.
In the video below, Jinn riders confront one of the co-founders over unpaid wages...
I was sent Jinn's deck prior to their Series A raise; the numbers were shocking, and that's why no top-tier VC would touch them. They eventually closed $7.5m in April 2016 from a few credible VCs. Since then, Jinn have almost doubled their employee headcount and increased marketing spend, which no doubt means a much larger burn rate, so it would be interesting to know how much runway they have left.
What struck me most when I saw Jinn's deck was not the low order volume but, more importantly, the low rider utilisation/efficiency. Based on this video, it appears not much has changed in that regard: too many riders and not enough work for them. That, combined with a scaling payment model that disengages drivers, will no doubt mean unit economics and ARPR are still pain points, and raising further capital may well be a tall order.
In my opinion, the biggest issue for the well-funded startups like Deliveroo, UberEATS and Amazon Restaurants, as well as for minnows like Quiqup and Jinn, is that their models are not designed to appreciate that their riders are literally their bread and butter. Without them there is no liquidity, and without that, it's only a matter of time before they crash and burn.
Deliveroo, for me, stands out as being way ahead of the others named above because they have more volume, but even they have low efficiency and utilisation rates, meaning their riders moonlight for rival startups.
Granted, delivery and logistics are notorious for low margins and only work at huge scale, but that scale will only be achieved through working liquidity; otherwise, scaling to the point of profitability may well prove impossible.
You MUST create symbiotic partnerships with your riders and restaurants; it needs to make sense for them financially. If you pay them in line with how you want them to behave, you will build trust and loyalty and avoid nasty confrontations outside your HQ, which obviously doesn't send great signals to potential investors.
A plumber who signed an agreement with his company suggesting that he was self-employed was in fact entitled to some worker rights, according to the Court of Appeal in Pimlico Plumbers Ltd and another v Smith.
The judgment has important implications for so-called “gig economy” employers that claim their workers undertake services on a self-employed basis and that they effectively run their own businesses.
Mr Smith worked as a plumber for Pimlico Plumbers from 2005 until 2011. The agreement between the company and Mr Smith described him as a “self-employed operative”.
The wording of the contract suggested that he was in business on his own account, providing a service to Pimlico Plumbers.
Mr Smith was required under the contract to wear Pimlico’s uniform (which displayed the company’s logo), use a van leased from Pimlico (with a GPS tracker and the company’s logo), and work a minimum number of weekly hours.
However, he could choose when he worked and which jobs he took, was required to provide his own tools and equipment, and handled his own tax and insurance.
There was no express term in the agreement allowing Mr Smith to send someone else to do the work. However, there was evidence that plumbers could swap jobs, described as “more akin to swapping a shift between workers” than substitution.
Pimlico Plumbers did not guarantee to provide Mr Smith with a minimum number of hours. Following the termination of this arrangement, Mr Smith brought claims for unfair dismissal and disability discrimination.
The employment tribunal found that he could not claim unfair dismissal because he was not an employee.
However, the tribunal decided that he could claim disability discrimination as a “worker”, whereby an individual undertakes to do or perform personally any work or services for another party to the contract.
The Employment Appeal Tribunal (EAT) agreed with the employment tribunal, and the Court of Appeal has now dismissed Pimlico Plumbers’ appeal.
In dismissing the appeal, the Court accepted that the original employment tribunal had been entitled to stand back and look at the arrangement as a whole.
According to the Court, the employment tribunal had been right to regard Mr Smith as “an integral part of [Pimlico Plumbers’] operations and subordinate to [Pimlico Plumbers]”.
The employment tribunal was entitled to regard Pimlico Plumbers as more than just a “client or customer of Mr Smith’s business”.
Unlike recent high-profile judgments involving Uber drivers and CitySprint couriers, this ruling is binding on other courts and tribunals.
This means that the Court of Appeal decision in Pimlico Plumbers Ltd and another v Smith is likely to be a key authority in any forthcoming cases on employment status in the gig economy.
After the ruling, Charlie Mullins, founder and chief of Pimlico Plumbers, said the company had changed contracts with those who worked on a self-employed basis. “Like our plumbing, now our contracts are watertight,” he said.
Glenn Hayes, an employment partner at Irwin Mitchell, said: “We are seeing increasing numbers of individuals challenging their status and claiming to be workers or employees.
“CitySprint couriers and Uber drivers recently persuaded separate tribunals that they were workers and although Uber is now appealing this, tribunals are clearly taking a pragmatic and bold approach to determining status cases, despite contractual arrangements which are designed to give the appearance that individuals are genuinely self-employed.
“The outcome of this case is very significant and could make it more difficult for Uber and others to persuade the courts that its drivers are genuinely self-employed.”
Yvonne Gallagher, employment partner at law firm Harbottle & Lewis, said it was important to note that this case did not find that the plumber was an employee of Pimlico Plumbers.
“Those categorized as workers have a right to minimum wage and to paid annual leave, along with some other procedural rights, such as a right to be accompanied at any form of disciplinary meeting,” she explained.
“But they do not enjoy the full range of protections given to employees and perhaps as importantly, are not subject to the PAYE system applicable to employees.”
However the judgment included a warning to commentators: “Although employment lawyers will inevitably be interested in this case – the question of when a relationship is genuinely casual being a very live one at present – they should be careful about trying to draw any very general conclusions from it.”
General secretary of the TUC Frances O’Grady said: “This case has exposed once again the growing problem of sham self-employment.
“Unscrupulous bosses falsely claim their workers are self-employed to get out of paying the minimum wage and providing basics like paid holidays and rest breaks.”
Uber is appealing against the high-profile employment tribunal decision that the drivers who brought the claim are workers rather than self-employed.
This means that they are entitled to receive some basic employment rights, such as the national minimum wage and paid annual leave.
A similar finding when the Uber case goes to the EAT would be bad news for the company, as it could lead to it having to radically overhaul its contractual arrangements with its drivers.
In another recent case about employment status in the gig economy, the employment tribunal found that a CitySprint courier is a worker rather than self-employed.
In both cases, the employment tribunals were highly critical of the contracts that the workers were asked to sign.
The employment tribunals saw the contracts as drafted in a deliberately complex manner to mask the true nature of the working arrangements.
There are also a number of other outstanding legal challenges with courier companies including Hermes, Addison Lee, Excel and eCourier.
The Government is currently conducting a review into workers’ rights in the gig economy, led by Matthew Taylor, chief executive of the Royal Society for the Arts.
Snapchat filed for an initial public offering and outlined its dependence on Google Cloud as a risk factor.
Snapchat is running its business on Google Cloud and has agreed to spend $2 billion over the next five years with the search giant.
The terms of Snapchat's cloud spending with Google were detailed in the company's initial public offering filing.
In that Securities and Exchange Commission filing, Snapchat cited Google Cloud as a risk factor. Snapchat said "we currently run the vast majority of our computing on Google Cloud."
Meanwhile, Snapchat said a move away from Google Cloud would result in "significant time and expense." Snapchat said:
We have committed to spend $2 billion with Google Cloud over the next five years and have built our software and computer systems to use computing, storage capabilities, bandwidth, and other services provided by Google, some of which do not have an alternative in the market. Given this, any significant disruption of or interference with our use of Google Cloud would negatively impact our operations and our business would be seriously harmed. If our users or partners are not able to access Snapchat through Google Cloud or encounter difficulties in doing so, we may lose users, partners, or advertising revenue. The level of service provided by Google Cloud may also impact the usage of and our users', advertisers', and partners' satisfaction with Snapchat and could seriously harm our business and reputation. If Google Cloud experiences interruptions in service regularly or for a prolonged basis, or other similar issues, our business would be seriously harmed. Hosting costs will also increase as our user base and user engagement grows and may seriously harm our business if we are unable to grow our revenues faster than the cost of utilizing the services of Google or similar providers.
What makes Snapchat's comments interesting is that it's among the first to be all-in with Google Cloud. Amazon Web Services regularly touts companies that have gone all-in.
Snapchat said that Google could increase pricing terms, establish relationships with rivals, or modify the service.
Meanwhile, Snapchat noted Google and its YouTube unit are competitors in the advertising market.
The Google Cloud dependency is a risk factor, but in the grand scheme of the IPO filing, it's a smaller issue. Snapchat cited inexperience in the hardware market (Spectacles), maintaining daily active user growth and competition with Facebook, Google, and others as risk factors.
For 2016, Snapchat lost $514.6 million on revenue of $404.48 million. In 2015, Snapchat lost $372.9 million on revenue of $58.66 million.
Michael Moritz—chairman of Sequoia Capital and one of the most successful venture capitalists in history—says a simple vision led him to invest hundreds of millions of dollars in on-demand delivery startups.
“The movement of goods and services and people, by easier, more convenient means,” he said in an interview. “That’s a huge trend, enabled by smartphones.”
Sequoia, Kleiner Perkins Caufield & Byers and other well-respected VCs have collectively invested more than $9 billion in 125 on-demand delivery companies over the past decade, including $2.5 billion last year, according to a Reuters analysis of publicly available data.
But that flood of money slowed to a relative droplet in the second half of last year. Many VCs appear to have lost faith in a sector that once seemed the obvious extension of the success of ground-transportation beasts such as Uber, and there also seems to be a lack of appetite (excuse the pun) among current investors for follow-on rounds to protect their positions and avoid dilution, which obviously doesn't make the cap tables very attractive to new investors.
The majority of last year’s investment—about $1.9 billion—came in the first half of the year. Only $50 million had been invested up until the fourth quarter, the Reuters analysis found. Several prominent Silicon Valley venture capitalists said in interviews that they now believe many delivery startups could fail, leaving investors with big losses.
“We looked at the entire industry and passed,” said Ben Narasin, of Canvas Ventures. “There is more likely to be a big, private equity-style roll up than a venture-style outcome.”
Reuters analyzed investment in on-demand delivery startups using publicly available data from the companies, their backers and third-party websites such as Crunchbase and Mattermark. Inevitably, the data isn't scientific: it excludes some investments made by private firms and individuals who do not always disclose them.
Delivery startups continue to fight it out, with fierce competition, tiny margins and a host of operating challenges that have defied easy solutions or economies of scale, venture capitalists told Reuters. Widespread discounting and artificially low consumer prices have made on-demand delivery “a race to the bottom,” said Kleiner Perkins partner Brook Porter in an interview. His firm has previously backed U.S. based startups DoorDash and Instacart and China-based Meican.
2016 saw high-profile failures, including U.S. meal delivery firm SpoonRocket, which shut down in March, and PepperTap, an Indian grocery delivery service backed by Sequoia that shut down in April. Elsewhere, DoorDash, another of Moritz’s investments, was able to close its latest VC Investment round in March only by cutting the value of its share price by 16%, according to data from CB Insights.
The entry of UberEATS and Amazon into food delivery promised to make life more difficult for smaller startups, but that squeeze has yet to materialise.
Sequoia has backed at least 14 local delivery firms, among them four in the United States, five in China and four in India. Sequoia did not respond to Reuters requests for a response to rising VC skepticism of delivery firms.
Venky Ganesan, of Menlo Ventures, said the sector has no clear way to cut costs or boost revenue.
“You can’t raise prices on consumers, and you can’t cut labor costs,” he said. “The core unit economics didn’t make sense.”
Dalton Caldwell, a partner at Y Combinator—the prestigious tech incubator that birthed a number of delivery startups—was also skeptical, though he thought companies with efficient operational capability could succeed.
Many delivery startups, he said, “make the assumption that once you get bigger, things will get easier, and that’s wrong. There is driver churn, operations people that cost money, more support costs."
Focusing on food and alcohol delivery, DoorDash has agreements with local restaurants, including franchised outlets, in dozens of cities in the U.S. and Canada. But DoorDash still faces various operational challenges.
“There is an opportunity to redefine local commerce in cities,” says DoorDash co-founder Stanley Tang. “But we have to figure out, what are the operational challenges, and then how we can scale it up.”
I was part of the team that built JUST EAT from a startup in the UK, and I have since advised various companies in the global #FoodTech space, as well as top-tier venture capital and private equity firms (some who have already invested, others who are looking at potentially investing), so I know the space well.
My opinion is that the on-demand food delivery players are nowhere near the scale of JUST EAT, especially here in the UK, where combined they don't even reach 10% of the monthly orders JUST EAT receives. And whilst the common perception is that on-demand delivery has a far greater AOV than JUST EAT, the fact is that in many geographic areas (hyperlocal, and in some cases across entire cities and countries) this is far from the case. Combined with a chronic lack of scale, the result is terrible unit economics and unsustainable burn rates.
UberEATS is treated as a startup within Uber, with very little investment poured in thus far, inevitably resulting in limited success. I know they are specifically struggling to acquire customers and riders, except when they give away unsustainable referral credit. This expensive acquisition model works well for Uber in ground transportation (because they have raised substantial capital with that in mind), but it will not work in food for anyone: margins are notoriously low, and even a viable financial model requires huge scale and efficiency.
Deliveroo had VCs clamouring to invest when they proved their model in two London boroughs, Kensington and Chelsea, and Westminster; both have tons of restaurants and residents with high disposable incomes, so naturally the AOV and unit economics were impressive enough to attract top-tier investors. There are lots of things I admire and respect about Deliveroo, such as a really talented team, rapid expansion, key partnerships and great technology. Sadly for them, their huge potential hasn't been matched commercially, and their initial model hasn't worked in new markets. Up until their latest raise, the word on the street was that they were burning £8m per month and could only close £100m of their £200m raise, as their largest investor from the previous rounds declined to participate, accepting the inevitable dilution. There were also rumours that Deliveroo had hired an investment bank to explore a potential M&A; I have no idea how much truth was in that, but in the end they took investment from the private equity firm Bridgepoint. As many in the investment world will know, PE money differs greatly from venture capital, and it is a risky tactic for a company with operational and scalability question marks, not least because PE money is almost always linked to controlling voting rights and liquidity preferences.
Amazon entered the market last year and has also struggled to scale, for many of the same reasons as UberEATS and Deliveroo.
The fundamental issues shared by all are user, restaurant and rider acquisition and the liquidity between these.
Riders simply aren't getting enough work from any of them and have very little loyalty, with many viewing this as a perfect storm to exploit for financial gain, e.g. taking an hourly rate from UberEATS whilst moonlighting and being paid per drop by Deliveroo.
This has resulted in poor rider efficiency/utilisation, and riders sitting around doing nothing is a common sight, even at peak times such as Friday and Saturday nights.
In my JUST EAT and GETT days, whenever I was speaking with partners, I regularly used the phrase "If the wheels aren't turning, we aren't earning", which I think is also very relevant here, except that when the wheels of the on-demand food startups' riders aren't turning, the startups' runway is burning.
All three on-demand players account for a tiny share of the UK food delivery market, and the word on the street in the investment world is that these are very much distressed assets, so I wouldn't be surprised if, within the next 18 months, we see M&As and in some cases even shutdowns.
I am sure that in 2015 there were one or two VCs considering which Porsche they would buy with the carry from their deals in the space, but if this climate of uncertainty persists, their funds potentially stand to make losses, so the Porsche may have to wait.
That is not to say there aren't opportunities within the on-demand food space, and I would honestly like to see Deliveroo in particular do well, as it's good for the UK tech startup ecosystem. I would actually smile and congratulate them if they prove me wrong.
I also know of a startup in stealth which I believe has the potential to tackle the white space and large opportunity that the likes of Deliveroo, UberEATS and Amazon have so far been unable, or seem unlikely, to capitalise on due to the way they are structured.
I will keep the stealth startup's identity under my hat, as they are still very early, but I am told they are growing 25% week on week and are revenue-generating, so I will be observing their growth with interest.
Google employees in eight offices around the world staged a walkout on Monday afternoon in protest of President Donald Trump’s executive order banning immigration from seven Muslim-majority countries. Using the hashtag #GooglersUnite, employees tweeted photos and videos of walkout actions around the world, including at headquarters in Mountain View.
The walkout came after employees donated more than $2 million to a crisis fund that will be distributed among nonprofit groups working to support refugees. Google matched employees’ donations with a further $2 million.
Stanford researchers say they've created a new artificial intelligence system that can identify skin cancer as well as trained doctors can. According to a study they published in the science journal Nature, the program was able to distinguish between cancerous moles and harmless ones with more than 90 percent accuracy.
The researchers trained the system by feeding it nearly 130,000 images of moles and lesions, with some of them being cancerous. The system scanned the images pixel by pixel, identifying characteristics that helped it make each diagnosis. Using machine learning, the A.I. grew more accurate as it studied more samples.
It then went head to head with 21 trained dermatologists. The result: The A.I. software achieved "performance on par with all tested experts." The system correctly identified 96 percent of the malignant samples, and 90 percent of the (generally harmless) benign ones. For the doctors in the study, those numbers were 95 percent and 76 percent, respectively.
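Those percentages correspond to the two standard metrics for a binary diagnostic test: sensitivity (malignant samples correctly flagged) and specificity (benign samples correctly cleared). A small sketch restating the study's headline numbers; the single-number summary (Youden's J) is my addition, not something the study reports:

```python
# Headline figures from the study, as sensitivity/specificity
results = {
    "AI system":      {"sensitivity": 0.96, "specificity": 0.90},
    "dermatologists": {"sensitivity": 0.95, "specificity": 0.76},
}

for who, m in results.items():
    # Youden's J = sensitivity + specificity - 1: a crude summary of
    # diagnostic power (0 = no better than chance, 1 = perfect)
    j = m["sensitivity"] + m["specificity"] - 1
    print(f"{who}: J = {j:.2f}")
```

On this reading, the A.I.'s edge comes almost entirely from specificity: it cleared far more of the harmless moles, which in practice means fewer unnecessary biopsies.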
This could have huge implications: The study points out that 5.4 million new cases of skin cancer are diagnosed each year in the U.S. alone. If installed in smartphones, the authors say, this technology could provide a simple, low-cost form of early detection.
Identifying melanoma early on is critical. The five-year survival rate when the cancer is caught in its earliest stages is 99 percent. That number drops to 14 percent when detected in its late stages. Having the equivalent of a dermatologist--as far as diagnosing goes--in your pocket could help patients keep a closer watch on their own skin and seek medical treatment sooner.
That's not to say dermatologists will be replaced--they'd still be the ones to perform any procedures necessary. And in a blog post on Stanford's website, the authors suggest doctors might use the tool for in-office diagnoses.
Before the system can achieve its potential, though, it will have to be able to detect cancer from images captured by smartphones. While phone cameras are rapidly improving, the A.I. is currently trained to work only with high-quality medical images.
Still, the technology is moving in that direction. Being able to detect early could have an impact on the 10,000 people who die from skin cancer each year in the U.S. alone.
The Stanford researchers developed the framework for the A.I. system using an image classification algorithm that had previously been built by Google.
I have been a speaker at startup, technology and investment events, and I have also judged startup competitions.
You realise you have provided good insights, contributions or value at these events when, after you step off the stage, a flock of people charges towards you, armed with business cards, passionately explaining who they are and what they do. I am a very active networker, so I understand how important it is for entrepreneurs to connect with the people they feel could give advice, open doors or make a difference (having been in their shoes myself), and I always invest time after these events to exchange business cards and hear them out. I'm always willing to help entrepreneurs and our ecosystem grow, and I take just as much value from speaking to them.
I actually believe that one of the biggest motivators for people spending hundreds of pounds on tickets to startup/technology events, summits and expos is purely the networking opportunities they could potentially present.
The problem is, even with a willing speaker giving his time, it's not always possible or practical for everyone to get an opportunity to speak one-to-one or even exchange business cards, especially with those who are globally recognised. So what alternatives are there for getting in contact with people? How about paying for it?
Today, the technology and investment legend that is Ben Horowitz tweeted that he had created a profile at https://21.co/bhorowitz/, giving people an opportunity to contact him for $5. Many would say, "Wow, I can't believe someone has the balls to charge $5 simply to be contacted," but this is not for financial gain, nor is it in the slightest bit egotistical: all money raised will go to Black Girls Code, an amazing organisation. Their vision is to increase the number of women of colour in the digital space by empowering girls of colour aged between 7 and 17 to become innovators in STEM fields, leaders in their communities and builders of their own futures through exposure to computer science and technology. Their ultimate goal is to train one million girls by 2040.
I am sure Ben's email inbox is an absolute party (given his high profile through his work, book and investment fund), and even with the aid of a PA it is no doubt a never-ending task to stay on top of, so hats off to Ben for helping raise funds for an amazing organisation. I have no doubt that many of the people who would pay hundreds for an event ticket to see him speak will see $5 as a steal, so he may well have to raise his price, further benefiting Black Girls Code :)
So, for those of you who would like to reach out to Ben, for the price of a large cup of coffee you can do just that and, more importantly, help raise funds for Black Girls Code. #NoBrainer
As the father of two little girls of colour, and as a huge supporter of equality for all, I love what Black Girls Code are doing. I have worked in startups for a decade and black people are hugely underrepresented, especially black women, so it's a really positive initiative that I support and one which I believe is also needed here in the UK.
Ps... if you are a startup founder and haven't read Ben's book "The Hard Thing About Hard Things", shame on you... it's a must-read and covers pretty much every challenge you will face, so buy it now: https://goo.gl/6ctwju
When LinkedIn launched in 2003, it had just over 20 people working for it, but by the end of 2010 it had grown to a staggering 1,000 employees in 10 countries, boasting around 90 million users. The very next year, it hosted the President of the United States and listed on the NYSE. In June 2016, Microsoft's $26.2 billion acquisition of LinkedIn sent shockwaves across the world.
LinkedIn recently underwent a major user interface redesign of its desktop site, which, according to the company, was done from scratch.
“Largest LinkedIn.com redesign since the company’s inception.”
From the language of it, we were all expecting groundbreaking, revolutionary design and features from the makers of the “Modern UI”.
And then, we got crushed. Microsoft and LinkedIn combined are supposed to be much, much bigger than Facebook. So why on Earth would they want to copy so many design elements from their rival? Let's compare the similarities between LinkedIn's UI and Facebook's.
What strikes me first is the “main feed” (aka Facebook’s Wall), the page you land on after logging in. LinkedIn’s new design encourages its users to post or share content via a similar, more prominent box to Facebook’s. Their upgraded “news feed” algorithm displays more relevant information, and you can easily unfollow and hide posts which don't interest you. It is not bad that they did this; what is bad is the way they lifted the design format straight from Facebook without any alterations.
If you look at the screenshot below, the similarities are striking. They managed to copy the entire design layout, the information hierarchy, the shape of the profile picture, how the most recently received messages are shown in the list, the compose-message icon being in exactly the same place, and even the way the message text is highlighted. Thank God they at least changed the colour of the highlight to match their brand’s.
LinkedIn’s previous “People You May Know” was horrendous, so I'm really glad they managed to improve on it and now display it as a list. I can still recall endlessly sifting through large pictures of random smiling people, looking for the one person I actually wanted to connect with. Sadly though, the design thought process here doesn’t scream “ground up” at all. It is just a carbon (or design) copy of Facebook’s very own “People You May Know” section.
Then there are the loading placeholders: we all want to see something happen the moment we click or press enter. The content outline loads before the actual content, giving us a rough idea of what to expect and, most importantly, keeping the user engaged until the content loads. Copying this feature directly from Facebook and elsewhere makes them look like a replicator, not an innovator.
LinkedIn’s redesign was much anticipated. The design community were excited to find out what amazing ideas LinkedIn had come up with, or what kind of new groundbreaking stuff we were going to see. But in the end, all we got was essentially a Facebook version 2.0.
I love LinkedIn and believe in their product, to the extent that almost every business opportunity that comes my way is via LinkedIn, and I have been a loyal, paying customer for a decade. But it seriously feels like this change was not about putting users first and content at the forefront, but instead about highlighting LinkedIn's other revenue-generating products.
When I saw the Microsoft acquisition of LinkedIn, my gut instinct was that the product would become purely a business unit and lose the magic that made it grow. It seems that may well be the case, but I hope it is not!
I totally understand that LinkedIn didn’t want to risk coming up with something unique or radical given its precarious position in the market. What I don’t understand is why they would mislead us into thinking they were designing something awesome when it is just another “Ctrl+C, Ctrl+V” job. My review in terms of design? One word: disappointed!
With the growth of the Internet, the music industry (like most industries) has been disrupted and changed dramatically. Torrent sites enabled bootlegging, and the advent of Apple, Spotify et al. created a layer between musicians and the industry, so musicians simply cannot make the money from music sales that the likes of the Beatles, the Rolling Stones and Michael Jackson did.
To survive music artists have had to be more agile, evolve and find innovative ways to monetise their talents, to stay relevant and create new revenue streams.
I'm eclectic, but I have been a big fan of hip hop since I was a child. In those days hip hop was in many ways a controversial and taboo genre because of its subject matter and generally gritty content, meaning that many in the mainstream didn't give it the outlet (or recognition as an art form) that it deserved. In my opinion, however, being ostracised from the mainstream only added fuel to its popularity and coolness.
I remember as a child in the mid-nineties sneaking downstairs late at night to watch YO! MTV Raps. I remember listening to Tim Westwood (cringe) on BBC Radio 1 and frantically pushing paper into the tops of cassette tapes to be able to record the latest songs. I remember standing in HMV and Virgin Megastore listening to double-disc albums that cost more than £20, naturally wanting to ensure that if I purchased one I would be getting value for money.
The difference back then was that, with limited influencers and media platforms, record companies could make or break an artist, as it was their "machine" that built the awareness needed to be commercially successful.
These days things are much different and Rappers now are pretty much entrepreneurs, needing record companies (and their 360 management deals) less and less.
So, as rappers become more entrepreneurial, here are three things startup entrepreneurs can learn from successful rappers...
If you look at recent years of hip-hop, a big change has been the increase in collaboration between rappers and musicians of other genres. They do this to cross-pollinate their respective fan bases, and it is a win-win for both artists: not only do they increase their exposure, but they also likely increase revenue opportunities through festival bookings, merchandise, endorsement deals and so on.
If you are building a startup, it is essential to constantly look for opportunities to collaborate and create innovative, strategic partnerships with other companies. However, genuine partnerships should not be treated as client/supplier relationships, because eventually that becomes parasitic and breaks. Partnerships should be symbiotic: it's about clearly communicating and strategising ways to help each other. In doing so you will create a strong relationship, one of integrity and trust, so when your competitors want to copy or even intercept your partnership, it will not be easy for them. As we know, replication of a product is relatively easy, but replication of relationships is a much bigger task.
What is one of the common denominators between Jay Z, Nas, Dr Dre and others? All of them came from very humble beginnings and built huge success, be it creating record companies, leading headphone companies or becoming venture capitalists.
I believe this has a lot to do with their backgrounds: when people face a sink-or-swim life, they learn not only how to survive but how to maximise every opportunity to thrive, which I feel creates a good foundation of skills and qualities that are transferable to entrepreneurship.
This is not to say that you have to grow up in difficult circumstances to be able to build a startup, because a startup *is* a difficult circumstance: most will fail within the first year, and only a tiny percentage of those that survive go on to become anything more than lifestyle businesses or side projects.
To survive, you absolutely need the will to win and if you want to thrive badly enough, you will discover and maximise every opportunity which is available to you and create much more for yourself.
I can think of several examples of rappers who portray the image of “mobsters” and whose subject matter is worthy of a Narcos-esque series on Netflix. The discrepancies between that image and the reality of their escapades are common knowledge, but that hasn’t stopped them becoming hugely successful. Why? Because people like them.
We all know there is huge power and value in storytelling and authenticity. However, unless you have personality, charisma and the ability to communicate in a way which resonates with people, nobody will care; and likewise, if you do have charisma, in many cases people may even overlook your lack of authenticity, for no other reason than that they like you.
In startups, I believe all too often leaders overlook the value of personality and charisma, not just in their leadership team but also in terms of brand and identity. If you are a disruptive FinTech startup, nobody is going to view your concept as anything new and exciting if you act and brand yourself like the corporate organisations that you are trying to disrupt. However, you can’t then go to the other extreme and portray an image that is too edgy and means you will lack the credibility and more importantly trust needed to win potential users.
It is much the same with leaders: if a leader has personality, charisma and the ability to communicate at a high level, in my opinion they will be a stronger leader than those who lack these qualities.
So it's a case of ensuring you match the strengths of your leadership and wider team to the responsibilities they are best at.
All entrepreneurs need to constantly learn and grow, and taking a leaf out of rappers' playbook may just teach them a few lessons.
When you're trying to get a startup off the ground, one thing is for sure: you will hear lots of "no's", be it from investors, potential partners or a variety of other people. So ask yourself one question...
How bad do you want it?
If you truly believe in what you are doing, don't become disillusioned or lose faith, hold the vision, trust the process and kick ass anyway!
You can overcome anything you put your mind to....
Once people see how much you believe in what you are doing, your ability to execute and the progress you are making, you will be sure to hear a substantial number of those "no's" change to "yes's". You just have to want it badly!
Any seasoned entrepreneur knows that starting a company takes much more than just a great idea. Creating a successful business is a careful exercise in timing, planning, and tenacity. Like setting up a circuit, all the right pieces need to snap into place in order for the lights to come on. The right technical minds need to encounter the right business minds at a time when the world is ready for—and ready to pay for—a particular innovation.
The following 5 startups have managed to bring all the right pieces together addressing big problems, innovating and disrupting huge industries.
As always, it's important to remember the outsized failure rate for startups: far more fold than flourish. That said, these startups have made great progress in 2016 and created very strong foundations to build on; they not only have a good shot at beating the odds, but are very much ones to watch in 2017.
Founded in 2015 by Ishaan Malhi, a former real estate and structured finance analyst at Bank of America Merrill Lynch, and Jonathan Galore, an experienced fintech entrepreneur, Trussle is an online mortgage advisor which is fully regulated by the FCA and already manages £500m+ of mortgages.
Kings Cross based Trussle have gained investment from renowned investors such as Seedcamp and LocalGlobe and secured strategic partnerships with the likes of Zoopla.
Trussle's technology provides the transparency and simplicity that empower first-time buyers and existing homeowners to secure a great-value mortgage online, saving them time and money. Trussle then continues to monitor their mortgage for free and helps them switch to a better deal later on, so they never pay more than they should, ultimately making home ownership more affordable.
You can find out more about Trussle - here
Founded in 2014 by CEO Nicholas Katz (who has extensive experience in commercial real estate) and CTO Vasanth Subramanian (who comes from a software development background) Splittable empowers housemates to manage and share household expenses.
Splittable is available in 36 countries and has tens of thousands of users, who view it as an essential way to avoid arguments over who has spent what and to pay housemates back in seconds.
East London based Splittable has gained investment from the likes of Seedcamp, the London Co-Investment Fund and various other renowned angel investors.
You can find more out about Splittable - here
Founded in 2014 by Viraj Ratnalikar, Car Quids enables brands to advertise on personal vehicles across the UK. Car Quids' proprietary technology provides unprecedented targeting, insight and measurement for this unique outdoor advertising opportunity.
Drivers get paid each month, earning money for driving as they normally do, this new income substantially reduces the cost of car ownership or leasing.
South London based Car Quids gained a place on the prestigious Seedcamp programme, with Seedcamp also investing alongside several other well-known investors.
Car Quids are growing quickly and are now working with many famous brands.
You can find out more about Car Quids - here
Echo, a West London-based medication management startup, launched with a £1.8 million seed investment round led by venture capital firm LocalGlobe and supported by Global Founders Capital.
Co-founded by former Apple executive Dr Sai Lakshmi and former LloydsPharmacy executive Stephen Bourke, Echo promises to make life easier for people on long-term medication by “removing the hassle of repeat prescription management”.
You can find out more about Echo - here
Amazon has filed a patent for massive flying warehouses equipped with fleets of drones that deliver goods to key locations.
Carried by an airship, the warehouses would visit places Amazon expects demand for certain goods to boom.
It says one use could be near sporting events or festivals where they would sell food or souvenirs to spectators.
The patent also envisages a series of support vehicles that would be used to restock the flying structures.
The filing significantly expands on Amazon's plans to use drones to make deliveries. Earlier this month it made the first commercial delivery using a drone via a test scheme running in Cambridge.
In the documents detailing the scheme, Amazon said the combination of drones and flying warehouses, or "airborne fulfilment centres", would deliver goods much more quickly than those stationed at its ground-based warehouses.
Also, it said, the drones descending from the AFCs - which would cruise and hover at altitudes up to 45,000ft (14,000m) - would use almost no power as they glided down to make deliveries.
Many firms working on drones are struggling with ways to extend their relatively short range, which is typically dependent on the size of the battery they carry.
The patent lays out a comprehensive scheme for running a fleet of AFCs and drones. It suggests smaller airships could act as shuttles taking drones, supplies and even workers to and from the larger AFCs.
Amazon has not responded to a request for comment about the patent.
It is not clear whether the filing is a plan for a project that will be realised or just a proof-of-concept. Many firms regularly file patents that never end up becoming real world products or services.
Amazon's patent was filed in late 2014 but has only now come to light thanks to analyst Zoe Leavitt from CB Insights who unearthed the documents.
Elon Musk didn't end up starting startups as diverse as PayPal, Tesla and SpaceX by being passive. And if Tesla's latest product update is any indication, Musk remains as entrepreneurial as ever in his role as Tesla CEO.
On 11th December, a Twitter user sent Elon Musk this tweet.
Loic Le Meur, an entrepreneur himself, was complaining about the lack of available slots at Tesla Supercharger stations. He said that users were treating Tesla's Superchargers as free parking spots, leaving their cars on the chargers even after they were fully charged.
Now Musk has 6.42 million Twitter followers, and receives thousands of such tweets per day. But he immediately responded to this tweet, saying he’ll “take action.”
Tesla would now charge $0.40 for every minute a fully charged Tesla remained at one of its charging stations after a five-minute grace period. This simple change would ensure that people didn't leave their cars at charging stations, preventing others from using them.
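The pricing rule described above is simple enough to sketch in a few lines. This is illustrative only; Tesla's actual billing logic is not public, and the function name is hypothetical.

```python
# Idle fee: $0.40 per minute after a 5-minute grace period,
# charged only while the car sits fully charged at a charger.
GRACE_MINUTES = 5
FEE_PER_MINUTE = 0.40

def idle_fee(minutes_idle_after_full_charge: float) -> float:
    """Fee owed for occupying a charger after the car is fully charged."""
    billable = max(0.0, minutes_idle_after_full_charge - GRACE_MINUTES)
    return round(billable * FEE_PER_MINUTE, 2)

print(idle_fee(3))   # within the grace period -> 0.0
print(idle_fee(35))  # 30 billable minutes -> 12.0
```

Leave within five minutes of a full charge and you pay nothing; linger for half an hour beyond that and the fee adds up quickly, which is exactly the incentive the change was meant to create.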
And what’s incredible is the pace at which the product change was implemented. Tesla might still call itself a startup, but it hardly is one – it has over 30,000 employees, and large engineering teams. To have a product feature conceptualised, implemented and shipped in a week is nothing short of miraculous.
But then Musk also sends rockets into space, so the normal rules don’t apply to him, really.
Biz shared his amazing journey, one that is truly inspirational! (I strongly recommend his book "Things a Little Bird Told Me".)
I took an immense amount of value from the day, learned a huge amount and really resonated with Biz's background, it reminded me a lot of my own.
The four key things I learned from Biz were:
We also discussed Peace Tech and the work that my team and I are doing to help build the Technology/ Start Up eco system in Northern Ireland.
I was born in Northern Ireland and left at 4 years old, but my heart has never left!
Twenty-two years into the peace process, Northern Ireland is a very complex place; the best way to describe it is that still waters run deep, and there is still very much to do to create cohesion and collaboration.
"At Peace Tech we believe that the key to sustainable peace in Northern Ireland is a stronger economy and technology will be the key to achieving this"
Our mission is simple:
"To use technology to create unity and opportunities for all in Northern Ireland"
Google's monolithic repository provides a common source of truth for tens of thousands of developers around the world.
Early Google employees decided to work with a shared codebase managed through a centralized source control system. This approach has served Google well for more than 16 years, and today the vast majority of Google's software assets continues to be stored in a single, shared repository. Meanwhile, the number of Google software developers has steadily increased, and the size of the Google codebase has grown exponentially (see Figure 1). As a result, the technology used to host the codebase has also evolved significantly.
This article outlines the scale of that codebase and details Google's custom-built monolithic source repository and the reasons the model was chosen. Google uses a homegrown version-control system to host one large codebase visible to, and used by, most of the software developers in the company. This centralized system is the foundation of many of Google's developer workflows. Here, we provide background on the systems and workflows that make feasible managing and working productively with such a large repository. We explain Google's "trunk-based development" strategy and the support systems that structure workflow and keep Google's codebase healthy, including software for static analysis, code cleanup, and streamlined code review.
Google's monolithic software repository, which is used by 95% of its software developers worldwide, meets the definition of an ultra-large-scale [4] system, providing evidence the single-source repository model can be scaled successfully.
The Google codebase includes approximately one billion files and has a history of approximately 35 million commits spanning Google's entire 18-year existence. The repository contains 86TB of data, including approximately two billion lines of code in nine million unique source files. The total number of files also includes source files copied into release branches, files that are deleted at the latest revision, configuration files, documentation, and supporting data files; see the table here for a summary of Google's repository statistics from January 2015.
In 2014, approximately 15 million lines of code were changed in approximately 250,000 files in the Google repository on a weekly basis. The Linux kernel is a prominent example of a large open source software repository containing approximately 15 million lines of code in 40,000 files [14].
Google's codebase is shared by more than 25,000 Google software developers from dozens of offices in countries around the world. On a typical workday, they commit 16,000 changes to the codebase, and another 24,000 changes are committed by automated systems. Each day the repository serves billions of file read requests, with approximately 800,000 queries per second during peak traffic and an average of approximately 500,000 queries per second each workday. Most of this traffic originates from Google's distributed build-and-test systems.
Figure 2 reports the number of unique human committers per week to the main repository, January 2010-July 2015. Figure 3 reports commits per week to Google's main repository over the same time period. The line for total commits includes data for both the interactive use case, or human users, and automated use cases. Larger dips in both graphs occur during holidays affecting a significant number of employees (such as Christmas Day and New Year's Day, American Thanksgiving Day, and American Independence Day).
In October 2012, Google's central repository added support for Windows and Mac users (until then it was Linux-only), and the existing Windows and Mac repository was merged with the main repository. Google's tooling for repository merges attributes all historical changes being merged to their original authors, hence the corresponding bump in the graph in Figure 2. The effect of this merge is also apparent in Figure 1.
The commits-per-week graph shows the commit rate was dominated by human users until 2012, at which point Google switched to a custom-source-control implementation for hosting the central repository, as discussed later. Following this transition, automated commits to the repository began to increase. Growth in the commit rate continues primarily due to automation.
Managing this scale of repository and activity on it has been an ongoing challenge for Google. Despite several years of experimentation, Google was not able to find a commercially available or open source version-control system to support such scale in a single repository. The Google proprietary system that was built to store, version, and vend this codebase is code-named Piper.
Before reviewing the advantages and disadvantages of working with a monolithic repository, some background on Google's tooling and workflows is needed.
Piper and CitC. Piper stores a single large repository and is implemented on top of standard Google infrastructure, originally Bigtable [2], now Spanner [3]. Piper is distributed over 10 Google data centers around the world, relying on the Paxos [6] algorithm to guarantee consistency across replicas. This architecture provides a high level of redundancy and helps optimize latency for Google software developers, no matter where they work. In addition, caching and asynchronous operations hide much of the network latency from developers. This is important because gaining the full benefit of Google's cloud-based toolchain requires developers to be online.
Google relied on one primary Perforce instance, hosted on a single machine, coupled with custom caching infrastructure [1] for more than 10 years prior to the launch of Piper. Continued scaling of the Google repository was the main motivation for developing Piper.
Since Google's source code is one of the company's most important assets, security features are a key consideration in Piper's design. Piper supports file-level access control lists. Most of the repository is visible to all Piper users; however, important configuration files or files including business-critical algorithms can be more tightly controlled. In addition, read and write access to files in Piper is logged. If sensitive data is accidentally committed to Piper, the file in question can be purged. The read logs allow administrators to determine if anyone accessed the problematic file before it was removed.
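The combination described above, default-visible files, per-path ACLs, and logged access, can be modeled in a few lines. This is a toy sketch only; Piper's internals are proprietary, and every name, path, and data structure below is hypothetical.

```python
# Toy model of file-level access control with read logging,
# loosely mirroring the Piper design described above.
acls = {
    "//search/ranking/": {"alice", "bob"},  # tightly controlled subtree
}
DEFAULT_VISIBLE = True  # most of the repository is visible to all users
access_log = []         # every read attempt is logged, as in Piper

def can_read(user: str, path: str) -> bool:
    """Return whether `user` may read `path`, recording the attempt."""
    access_log.append((user, path))
    for prefix, allowed_users in acls.items():
        if path.startswith(prefix):
            return user in allowed_users
    return DEFAULT_VISIBLE

print(can_read("carol", "//maps/tiles/render.cc"))    # True: default-visible
print(can_read("carol", "//search/ranking/core.cc"))  # False: restricted
```

The access log is what makes the purge workflow described above possible: after a sensitive file is removed, administrators can replay the log to see who read it beforehand.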
In the Piper workflow (see Figure 4), developers create a local copy of files in the repository before changing them. These files are stored in a workspace owned by the developer. A Piper workspace is comparable to a working copy in Apache Subversion, a local clone in Git, or a client in Perforce. Updates from the Piper repository can be pulled into a workspace and merged with ongoing work, as desired (see Figure 5). A snapshot of the workspace can be shared with other developers for review. Files in a workspace are committed to the central repository only after going through the Google code-review process, as described later.
Most developers access Piper through a system called Clients in the Cloud, or CitC, which consists of a cloud-based storage backend and a Linux-only FUSE [13] file system. Developers see their workspaces as directories in the file system, including their changes overlaid on top of the full Piper repository. CitC supports code browsing and normal Unix tools with no need to clone or sync state locally. Developers can browse and edit files anywhere across the Piper repository, and only modified files are stored in their workspace. This structure means CitC workspaces typically consume only a small amount of storage (an average workspace has fewer than 10 files) while presenting a seamless view of the entire Piper codebase to the developer.
All writes to files are stored as snapshots in CitC, making it possible to recover previous stages of work as needed. Snapshots may be explicitly named, restored, or tagged for review.
CitC workspaces are available on any machine that can connect to the cloud-based storage system, making it easy to switch machines and pick up work without interruption. It also makes it possible for developers to view each other's work in CitC workspaces. Storing all in-progress work in the cloud is an important element of the Google workflow process. Working state is thus available to other tools, including the cloud-based build system, the automated test infrastructure, and the code browsing, editing, and review tools.
Several workflows take advantage of the availability of uncommitted code in CitC to make software developers working with the large codebase more productive. For instance, when sending a change out for code review, developers can enable an auto-commit option, which is particularly useful when code authors and reviewers are in different time zones. When the review is marked as complete, the tests will run; if they pass, the code will be committed to the repository without further human intervention. The Google code-browsing tool CodeSearch supports simple edits using CitC workspaces. While browsing the repository, developers can click on a button to enter edit mode and make a simple change (such as fixing a typo or improving a comment). Then, without leaving the code browser, they can send their changes out to the appropriate reviewers with auto-commit enabled.
Piper can also be used without CitC. Developers can instead store Piper workspaces on their local machines. Piper also has limited interoperability with Git. Over 80% of Piper users today use CitC, with adoption continuing to grow due to the many benefits provided by CitC.
Piper and CitC make working productively with a single, monolithic source repository possible at the scale of the Google codebase. The design and architecture of these systems were both heavily influenced by the trunk-based development paradigm employed at Google, as described here.
Trunk-based development. Google practices trunk-based development on top of the Piper source repository. The vast majority of Piper users work at the "head," or most recent, version of a single copy of the code called "trunk" or "mainline." Changes are made to the repository in a single, serial ordering. The combination of trunk-based development with a central repository defines the monolithic codebase model. Immediately after any commit, the new code is visible to, and usable by, all other developers. The fact that Piper users work on a single consistent view of the Google codebase is key for providing the advantages described later in this article.
Trunk-based development is beneficial in part because it avoids the painful merges that often occur when it is time to reconcile long-lived branches. Development on branches is unusual and not well supported at Google, though branches are typically used for releases. Release branches are cut from a specific revision of the repository. Bug fixes and enhancements that must be added to a release are typically developed on mainline, then cherry-picked into the release branch (see Figure 6). Due to the need to maintain stability and limit churn on the release branch, a release is typically a snapshot of head, with an optional small number of cherry-picks pulled in from head as needed. Use of long-lived branches with parallel development on the branch and mainline is exceedingly rare.
When new features are developed, both new and old code paths commonly exist simultaneously, controlled through the use of conditional flags. This technique avoids the need for a development branch and makes it easy to turn on and off features through configuration updates rather than full binary releases. While some additional complexity is incurred for developers, the merge problems of a development branch are avoided. Flag flips make it much easier and faster to switch users off new implementations that have problems. This method is typically used in project-specific code, not common library code, and eventually flags are retired so old code can be deleted. Google uses a similar approach for routing live traffic through different code paths to perform experiments that can be tuned in real time through configuration changes. Such A/B experiments can measure everything from the performance characteristics of the code to user engagement related to subtle product changes.
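The conditional-flag pattern above is easy to illustrate. This is a minimal sketch under assumed names; Google's actual flag infrastructure is not public, and `flags`, `old_ranker`, and `new_ranker` are invented for illustration.

```python
# Sketch of guarding a new code path behind a conditional flag,
# flipped via configuration rather than a full binary release.
flags = {"use_new_ranker": False}

def old_ranker(items):
    return sorted(items)                  # existing, battle-tested path

def new_ranker(items):
    return sorted(items, reverse=True)    # new implementation under test

def rank_results(items):
    # Both code paths coexist; the flag decides which one runs.
    if flags["use_new_ranker"]:
        return new_ranker(items)
    return old_ranker(items)

print(rank_results(["b", "a"]))  # old path: ['a', 'b']
flags["use_new_ranker"] = True   # "flag flip": instant switch, no release
print(rank_results(["b", "a"]))  # new path: ['b', 'a']
```

If the new path misbehaves, flipping the flag back is all it takes to roll back, which is exactly why the article notes flag flips are faster than shipping a new binary; once the new path is proven, the flag and the old path are deleted.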
Google workflow. Several best practices and supporting systems are required to avoid constant breakage in the trunk-based development model, where thousands of engineers commit thousands of changes to the repository on a daily basis. For instance, Google has an automated testing infrastructure that initiates a rebuild of all affected dependencies on almost every change committed to the repository. If a change creates widespread build breakage, a system is in place to automatically undo the change. To reduce the incidence of bad code being committed in the first place, the highly customizable Google "presubmit" infrastructure provides automated testing and analysis of changes before they are added to the codebase. A set of global presubmit analyses are run for all changes, and code owners can create custom analyses that run only on directories within the codebase they specify. A small set of very low-level core libraries uses a mechanism similar to a development branch to enforce additional testing before new versions are exposed to client code.
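The presubmit structure described above, global checks for every change plus owner-registered checks per directory, can be sketched as follows. Google's real presubmit infrastructure is proprietary; the check, registry, and change representation here are all hypothetical.

```python
# Illustrative presubmit runner: a change is a {path: new_contents}
# mapping; each check returns True or an error message string.
def no_tabs(path, text):
    return "\t" not in text or f"{path}: tabs are not allowed"

GLOBAL_CHECKS = [no_tabs]  # run on every change, codebase-wide
DIR_CHECKS = {}            # owners register extra checks per directory

def presubmit(change):
    """Run all applicable checks; an empty result means the change may land."""
    errors = []
    for path, text in change.items():
        checks = list(GLOBAL_CHECKS)
        for prefix, extra in DIR_CHECKS.items():
            if path.startswith(prefix):
                checks += extra
        for check in checks:
            result = check(path, text)
            if result is not True:
                errors.append(result)
    return errors

print(presubmit({"a.py": "x = 1\n"}))   # [] -> clean, may be committed
print(presubmit({"b.py": "x\t= 1\n"}))  # ['b.py: tabs are not allowed']
```

Gating commits on such checks is what keeps a shared trunk healthy when thousands of changes land daily: a bad change is rejected before it can break everyone else's view of head.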
An important aspect of Google culture that encourages code quality is the expectation that all code is reviewed before being committed to the repository. Most developers can view and propose changes to files anywhere across the entire codebase—with the exception of a small set of highly confidential code that is more carefully controlled. The risk associated with developers changing code they are not deeply familiar with is mitigated through the code-review process and the concept of code ownership. The Google codebase is laid out in a tree structure. Each and every directory has a set of owners who control whether a change to files in their directory will be accepted. Owners are typically the developers who work on the projects in the directories in question. A change often receives a detailed code review from one developer, evaluating the quality of the change, and a commit approval from an owner, evaluating the appropriateness of the change to their area of the codebase.
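The per-directory ownership model can be sketched as a lookup that walks from a file's directory up toward the root of the tree, collecting owners along the way. The directory names, the owners map, and the assumption that ownership is inherited from parent directories are illustrative:

```python
import posixpath

# Hypothetical owners map: directory path -> set of approvers.
OWNERS = {
    "": {"root-admin"},
    "maps": {"alice"},
    "maps/render": {"bob", "carol"},
}


def owners_for(path: str) -> set[str]:
    """Collect owners for a file by walking up its directory tree."""
    owners: set[str] = set()
    d = posixpath.dirname(path)
    while True:
        owners |= OWNERS.get(d, set())
        if d == "":
            return owners
        d = posixpath.dirname(d)


def can_approve(user: str, path: str) -> bool:
    """An owner of any enclosing directory may approve the change."""
    return user in owners_for(path)
```

Under this model a change to `maps/render/tiles.cc` could be approved by an owner of `maps/render`, of `maps`, or of the root, matching the idea that owners control acceptance for their subtree.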
Code reviewers comment on aspects of code quality, including design, functionality, complexity, testing, naming, comment quality, and code style, as documented by the various language-specific Google style guides.e Google has written a code-review tool called Critique that allows the reviewer to view the evolution of the code and comment on any line of the change. It encourages further revisions and a conversation leading to a final "Looks Good To Me" from the reviewer, indicating the review is complete.
Google's static analysis system (Tricorder10) and presubmit infrastructure also provide data on code quality, test coverage, and test results automatically in the Google code-review tool. These computationally intensive checks are triggered periodically, as well as when a code change is sent for review. Tricorder also provides suggested fixes with one-click code editing for many errors. These systems provide important data to increase the effectiveness of code reviews and keep the Google codebase healthy.
A team of Google developers will occasionally undertake a set of wide-reaching code-cleanup changes to further maintain the health of the codebase. The developers who perform these changes commonly separate them into two phases. With this approach, a large backward-compatible change is made first. Once it is complete, a second smaller change can be made to remove the original pattern that is no longer referenced. A Google tool called Rosie supports the first phase of such large-scale cleanups and code changes. With Rosie, developers create a large patch, either through a find-and-replace operation across the entire repository or through more complex refactoring tools. Rosie then takes care of splitting the large patch into smaller patches, testing them independently, sending them out for code review, and committing them automatically once they pass tests and a code review. Rosie splits patches along project directory lines, relying on the code-ownership hierarchy described earlier to send patches to the appropriate reviewers.
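The splitting step can be sketched as grouping a repository-wide patch by top-level project directory, so each piece can be tested and sent to that directory's reviewers independently. Grouping by top-level directory is a simplification of the ownership-based splitting the article describes:

```python
from collections import defaultdict


def split_patch(changed_files: list[str]) -> dict[str, list[str]]:
    """Group a large patch into per-project pieces by top-level directory.

    Each piece would then be tested, sent for review to the owners of
    that directory, and committed once it passes tests and review.
    """
    pieces: dict[str, list[str]] = defaultdict(list)
    for path in changed_files:
        project = path.split("/", 1)[0]
        pieces[project].append(path)
    return dict(pieces)
```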
Figure 7 reports the number of changes committed through Rosie on a monthly basis, demonstrating the importance of Rosie as a tool for performing large-scale code changes at Google. Using Rosie is balanced against the cost incurred by teams needing to review the ongoing stream of simple changes Rosie generates. As Rosie's popularity and usage grew, it became clear some control had to be established to limit Rosie's use to high-value changes that would be distributed to many reviewers; lower-value changes were to be made as single atomic changes or rejected. In 2013, Google adopted a formal large-scale change-review process that led to a decrease in the number of commits through Rosie from 2013 to 2014. In evaluating a Rosie change, the review committee balances the benefit of the change against the costs of reviewer time and repository churn. We later examine this and similar trade-offs more closely.
In sum, Google has developed a number of practices and tools to support its enormous monolithic codebase, including trunk-based development, the distributed source-code repository Piper, the workspace client CitC, and the workflow-support tools Critique, CodeSearch, Tricorder, and Rosie. We discuss the pros and cons of this model here.
This section outlines and expands upon both the advantages of a monolithic codebase and the costs related to maintaining such a model at scale.
Advantages. Supporting the ultra-large-scale of Google's codebase while maintaining good performance for tens of thousands of users is a challenge, but Google has embraced the monolithic model due to its compelling advantages.
Most important, it supports:
- Unified versioning and a single source of truth;
- Extensive code sharing and reuse;
- Simplified dependency management;
- Atomic changes;
- Large-scale refactoring;
- Collaboration across teams and flexible team boundaries; and
- A clearly structured, easily understood codebase.
A single repository provides unified versioning and a single source of truth. There is no confusion about which repository hosts the authoritative version of a file. If one team wants to depend on another team's code, it can depend on it directly. The Google codebase includes a wealth of useful libraries, and the monolithic repository leads to extensive code sharing and reuse.
The Google build system5 makes it easy to include code across directories, simplifying dependency management. Changes to the dependencies of a project trigger a rebuild of the dependent code. Since all code is versioned in the same repository, there is only ever one version of the truth, and no concern about independent versioning of dependencies.
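The rebuild-on-change behavior described above amounts to computing the transitive dependents of a changed target in the build graph. This sketch uses an invented graph and target names, not Google's build system:

```python
from collections import deque

# Hypothetical build graph: target -> targets that depend on it directly.
DEPENDENTS = {
    "base/strings": ["net/http", "maps/render"],
    "net/http": ["maps/render", "mail/server"],
    "maps/render": [],
    "mail/server": [],
}


def affected_targets(changed: str) -> set[str]:
    """Everything that must be rebuilt and retested after a change,
    found by breadth-first search over the reverse dependency graph."""
    seen = {changed}
    queue = deque([changed])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

A change to a low-level library like `base/strings` fans out to every transitive dependent, which is why committing such a change triggers rebuilds far beyond the directory it touches.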
Most notably, the model allows Google to avoid the "diamond dependency" problem (see Figure 8) that occurs when A depends on B and C, both B and C depend on D, but B requires version D.1 and C requires version D.2. In most cases it is now impossible to build A. For the base library D, it can become very difficult to release a new version without causing breakage, since all its callers must be updated at the same time. Updating is difficult when the library callers are hosted in different repositories.
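The diamond can be made concrete with a toy version-constraint check: when B and C each pin a different exact version of D, no consistent build of A exists, while a single shared version at head dissolves the conflict. All library names and versions here are illustrative:

```python
def resolve(requirements):
    """Given (requirer, library, pinned_version) triples, pick one
    version per library that satisfies everyone, or return None when
    two requirers pin incompatible versions (the diamond conflict)."""
    chosen = {}
    for _requirer, lib, version in requirements:
        if chosen.setdefault(lib, version) != version:
            return None  # diamond conflict: two incompatible pins on lib
    return chosen


# B and C each pin a different version of D: A cannot be built.
diamond = [("B", "D", "1"), ("C", "D", "2")]

# In a monolithic repository there is one version of D at head.
mono = [("B", "D", "head"), ("C", "D", "head")]
```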
In the open source world, dependencies are commonly broken by library updates, and finding library versions that all work together can be a challenge. Updating the versions of dependencies can be painful for developers, and delays in updating create technical debt that can become very expensive. In contrast, with a monolithic source tree it makes sense, and is easier, for the person updating a library to update all affected dependencies at the same time. The technical debt incurred by dependent systems is paid down immediately as changes are made. Changes to base libraries are instantly propagated through the dependency chain into the final products that rely on the libraries, without requiring a separate sync or migration step.
Note the diamond-dependency problem can exist at the source/API level, as described here, as well as between binaries.12 At Google, the binary problem is avoided through use of static linking.
The ability to make atomic changes is also a very powerful feature of the monolithic model. A developer can make a major change touching hundreds or thousands of files across the repository in a single consistent operation. For instance, a developer can rename a class or function in a single commit and yet not break any builds or tests.
The availability of all source code in a single repository, or at least on a centralized server, makes it easier for the maintainers of core libraries to perform testing and performance benchmarking for high-impact changes before they are committed. This approach is useful for exploring and measuring the value of highly disruptive changes. One concrete example is an experiment to evaluate the feasibility of converting Google data centers to support non-x86 machine architectures.
With the monolithic structure of the Google repository, a developer never has to decide where the repository boundaries lie. Engineers never need to "fork" the development of a shared library or merge across repositories to update copied versions of code. Team boundaries are fluid. When project ownership changes or plans are made to consolidate systems, all code is already in the same repository. This environment makes it easy to do gradual refactoring and reorganization of the codebase. The change to move a project and update all dependencies can be applied atomically to the repository, and the development history of the affected code remains intact and available.
Another attribute of a monolithic repository is the layout of the codebase is easily understood, as it is organized in a single tree. Each team has a directory structure within the main tree that effectively serves as a project's own namespace. Each source file can be uniquely identified by a single string—a file path that optionally includes a revision number. Browsing the codebase, it is easy to understand how any source file fits into the big picture of the repository.
The Google codebase is constantly evolving. More complex codebase modernization efforts (such as updating it to C++11 or rolling out performance optimizations9) are often managed centrally by dedicated codebase maintainers. Such efforts can touch half a million variable declarations or function-call sites spread across hundreds of thousands of files of source code. Because all projects are centrally stored, teams of specialists can do this work for the entire company, rather than require many individuals to develop their own tools, techniques, or expertise.
As an example of how these benefits play out, consider Google's Compiler team, which ensures developers at Google employ the most up-to-date toolchains and benefit from the latest improvements in generated code and "debuggability." The monolithic repository provides the team with full visibility of how various languages are used at Google and allows them to do codebase-wide cleanups to prevent changes from breaking builds or creating issues for developers. This greatly simplifies compiler validation, thus reducing compiler release cycles and making it possible for Google to safely do regular compiler releases (typically more than 20 per year for the C++ compilers).
Using the data generated by performance and regression tests run on nightly builds of the entire Google codebase, the Compiler team tunes default compiler settings to be optimal. For example, due to this centralized effort, Google's Java developers all saw their garbage collection (GC) CPU consumption decrease by more than 50% and their GC pause time decrease by 10%–40% from 2014 to 2015. In addition, when software errors are discovered, it is often possible for the team to add new warnings to prevent reoccurrence. In conjunction with this change, they scan the entire repository to find and fix other instances of the software issue being addressed before turning on the new compiler warning. Having the compiler reject patterns that proved problematic in the past is a significant boost to Google's overall code health.
Storing all source code in a common version-control repository allows codebase maintainers to efficiently analyze and change Google's source code. Tools like Refaster11 and ClangMR15 (often used in conjunction with Rosie) make use of the monolithic view of Google's source to perform high-level transformations of source code. The monolithic codebase captures all dependency information. Old APIs can be removed with confidence, because it can be proven that all callers have been migrated to new APIs. A single common repository vastly simplifies these tools by ensuring atomicity of changes and a single global view of the entire repository at any given time.
Costs and trade-offs. While it is important to note that a monolithic codebase in no way implies monolithic software design, working with this model involves some downsides, as well as trade-offs, that must be considered.
These costs and trade-offs fall into three categories:
- Tooling investments for both development and execution;
- Codebase complexity, including unnecessary dependencies and difficulties with code discovery; and
- Effort invested in code health.
In many ways the monolithic repository yields simpler tooling since there is only one system of reference for tools working with source. However, it is also necessary that tooling scale to the size of the repository. For instance, Google has written a custom plug-in for the Eclipse integrated development environment (IDE) to make working with a massive codebase possible from the IDE. Google's code-indexing system supports static analysis, cross-referencing in the code-browsing tool, and rich IDE functionality for Emacs, Vim, and other development environments. These tools require ongoing investment to manage the ever-increasing scale of the Google codebase.
Beyond the investment in building and maintaining scalable tooling, Google must also cover the cost of running these systems, some of which are very computationally intensive. Much of Google's internal suite of developer tools, including the automated test infrastructure and highly scalable build infrastructure, are critical for supporting the size of the monolithic codebase. It is thus necessary to make trade-offs concerning how frequently to run this tooling to balance the cost of execution vs. the benefit of the data provided to developers.
The monolithic model makes it easier to understand the structure of the codebase, as there is no crossing of repository boundaries between dependencies. However, as the scale increases, code discovery can become more difficult, as standard tools like grep bog down. Developers must be able to explore the codebase, find relevant libraries, and see how to use them and who wrote them. Library authors often need to see how their APIs are being used. This requires a significant investment in code search and browsing tools. However, Google has found this investment highly rewarding, improving the productivity of all developers, as described in more detail by Sadowski et al.9
Access to the whole codebase encourages extensive code sharing and reuse. Some would argue this model, which relies on the extreme scalability of the Google build system, makes it too easy to add dependencies and reduces the incentive for software developers to produce stable and well-thought-out APIs.
Due to the ease of creating dependencies, it is common for teams to not think about their dependency graph, making code cleanup more error-prone. Unnecessary dependencies can increase project exposure to downstream build breakages, lead to binary size bloating, and create additional work in building and testing. In addition, lost productivity ensues when abandoned projects that remain in the repository continue to be updated and maintained.
Several efforts at Google have sought to rein in unnecessary dependencies. Tooling exists to help identify and remove unused dependencies, or dependencies linked into the product binary for historical or accidental reasons, that are not needed. Tooling also exists to identify underutilized dependencies, or dependencies on large libraries that are mostly unneeded, as candidates for refactoring.7 One such tool, Clipper, relies on a custom Java compiler to generate an accurate cross-reference index. It then uses the index to construct a reachability graph and determine what classes are never used. Clipper is useful in guiding dependency-refactoring efforts by finding targets that are relatively easy to remove or break up.
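Clipper's analysis can be sketched as a reachability walk over a cross-reference graph starting from known entry points; classes never reached are candidates for removal. The graph, class names, and entry points below are invented for illustration, not Clipper's actual index:

```python
# Hypothetical cross-reference graph: class -> classes it references.
XREFS = {
    "Main": ["Server"],
    "Server": ["Handler"],
    "Handler": [],
    "LegacyCodec": ["Handler"],  # nothing references LegacyCodec
}


def unused_classes(roots):
    """Classes unreachable from the entry points, i.e. cleanup targets,
    found by depth-first search over the reference graph."""
    reachable = set()
    stack = list(roots)
    while stack:
        cls = stack.pop()
        if cls not in reachable:
            reachable.add(cls)
            stack.extend(XREFS.get(cls, []))
    return set(XREFS) - reachable
```

Here `LegacyCodec` references live code but is referenced by nothing, so a reachability walk from `Main` flags it as a relatively easy removal target.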
Dependency-refactoring and cleanup tools are helpful, but, ideally, code owners should be able to prevent unwanted dependencies from being created in the first place. In 2011, Google started relying on the concept of API visibility, setting the default visibility of new APIs to "private." This forces developers to explicitly mark APIs as appropriate for use by other teams. A lesson learned from Google's experience with a large monolithic repository is such mechanisms should be put in place as soon as possible to encourage more hygienic dependency structures.
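The private-by-default rule can be sketched as a visibility check on each new dependency edge. The shape loosely follows the `visibility` attribute in Google's open-sourced build tool Bazel, but the target names and the check itself are an illustrative model, not an actual implementation:

```python
# Hypothetical targets with explicit visibility lists; a target absent
# from the map is private by default (only its own package may use it).
VISIBILITY = {
    "base/strings": ["*"],             # explicitly marked public
    "maps/internal": [],               # private to the maps package
    "ads/serving": ["ads/frontend"],   # exposed to one named package
}


def may_depend(consumer_pkg: str, producer: str) -> bool:
    """Allow the edge only if the producer exposes itself to the
    consumer, or the consumer is the producer's own package."""
    allowed = VISIBILITY.get(producer, [])  # default: private
    owner_pkg = producer.rsplit("/", 1)[0]
    return (consumer_pkg == owner_pkg
            or "*" in allowed
            or consumer_pkg in allowed)
```

Requiring an explicit visibility grant forces API producers to decide which consumers they are willing to support, rather than discovering accidental dependents during a later cleanup.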
The fact that most Google code is available to all Google developers has led to a culture where some teams expect other developers to read their code rather than providing them with separate user documentation. There are pros and cons to this approach. No effort goes toward writing or keeping documentation up to date, but developers sometimes read more than the API code and end up relying on underlying implementation details. This behavior can create a maintenance burden for teams that then have trouble deprecating features they never meant to expose to users.
This model also requires teams to collaborate with one another when using open source code. An area of the repository is reserved for storing open source code (developed at Google or externally). To prevent dependency conflicts, as outlined earlier, it is important that only one version of an open source project be available at any given time. Teams that use open source software are expected to occasionally spend time upgrading their codebase to work with newer versions of open source libraries when library upgrades are performed.
Google invests significant effort in maintaining code health to address some issues related to codebase complexity and dependency management. For instance, special tooling automatically detects and removes dead code, splits large refactorings and automatically assigns code reviews (as through Rosie), and marks APIs as deprecated. Human effort is required to run these tools and manage the corresponding large-scale code changes. A cost is also incurred by teams that need to review an ongoing stream of simple refactorings resulting from codebase-wide clean-ups and centralized modernization efforts.
As the popularity and use of distributed version control systems (DVCSs) like Git have grown, Google has considered whether to move from Piper to Git as its primary version-control system. A team at Google is focused on supporting Git, which is used by Google's Android and Chrome teams outside the main Google repository. The use of Git is important for these teams due to external partner and open source collaborations.
The Git community strongly encourages developers to use more and smaller repositories. A Git-clone operation requires copying all content to one's local machine, a procedure incompatible with a large repository. To move to Git-based source hosting, it would be necessary to split Google's repository into thousands of separate repositories to achieve reasonable performance. Such reorganization would necessitate cultural and workflow changes for Google's developers. As a comparison, Google's Git-hosted Android codebase is divided into more than 800 separate repositories.
Given the value gained from the existing tools Google has built and the many advantages of the monolithic codebase structure, it is clear that moving to more and smaller repositories would not make sense for Google's main repository. The alternative of moving to Git or any other DVCS that would require repository splitting is not compelling for Google.
Current investment by the Google source team focuses primarily on the ongoing reliability, scalability, and security of the in-house source systems. The team is also pursuing an experimental effort with Mercurial,g an open source DVCS similar to Git. The goal is to add scalability features to the Mercurial client so it can efficiently support a codebase the size of Google's. This would give Google's developers the option of using popular DVCS-style workflows in conjunction with the central repository. This effort is in collaboration with the open source Mercurial community, including contributors from other companies that value the monolithic source model.
Google chose the monolithic-source-management strategy in 1999 when the existing Google codebase was migrated from CVS to Perforce. Early Google engineers maintained that a single repository was strictly better than splitting up the codebase, though at the time they did not anticipate the future scale of the codebase and all the supporting tooling that would be built to make the scaling feasible.
Over the years, as the investment required to continue scaling the centralized repository grew, Google leadership occasionally considered whether it would make sense to move from the monolithic model. Despite the effort required, Google repeatedly chose to stick with the central repository due to its advantages.
The monolithic model of source code management is not for everyone. It is best suited to organizations like Google, with an open and collaborative culture. It would not work well for organizations where large parts of the codebase are private or hidden between groups.
At Google, we have found, with some investment, the monolithic model of source management can scale successfully to a codebase with more than one billion files, 35 million commits, and thousands of users around the globe. As the scale and complexity of projects both inside and outside Google continue to grow, we hope the analysis and workflow described in this article can benefit others weighing decisions on the long-term structure for their codebases.
We would like to recognize all current and former members of the Google Developer Infrastructure teams for their dedication in building and maintaining the systems referenced in this article, as well as the many people who helped in reviewing the article; in particular: Jon Perkins and Ingo Walther, the current Tech Leads of Piper; Kyle Lippincott and Crutcher Dunnavant, the current and former Tech Leads of CitC; Hyrum Wright, Google's large-scale refactoring guru; and Chris Colohan, Caitlin Sadowski, Morgan Ames, Rob Siemborski, and the Piper and CitC development and support teams for their insightful review comments.
1. Bloch, D. Still All on One Server: Perforce at Scale. Google White Paper, 2011; http://info.perforce.com/rs/perforce/images/GoogleWhitePaper-StillAllonOneServer-PerforceatScale.pdf
2. Chang, F., Dean, J., Ghemawat, S., Hsieh, W.C., Wallach, D.A., Burrows, M., Chandra, T., Fikes, A., and Gruber, R.E. Bigtable: A distributed storage system for structured data. ACM Transactions on Computer Systems 26, 2 (June 2008).
3. Corbett, J.C., Dean, J., Epstein, M., Fikes, A., Frost, C., Furman, J., Ghemawat, S., Gubarev, A., Heiser, C., Hochschild, P. et al. Spanner: Google's globally distributed database. ACM Transactions on Computer Systems 31, 3 (Aug. 2013).
4. Gabriel, R.P., Northrop, L., Schmidt, D.C., and Sullivan, K. Ultra-large-scale systems. In Companion to the 21st ACM SIGPLAN Symposium on Object-Oriented Programming Systems, Languages, and Applications (Portland, OR, Oct. 22-26). ACM Press, New York, 2006, 632–634.
5. Kemper, C. Build in the Cloud: How the Build System works. Google Engineering Tools blog post, 2011; http://google-engtools.blogspot.com/2011/08/build-in-cloud-how-build-system-works.html
6. Lamport, L. Paxos made simple. ACM Sigact News 32, 4 (Nov. 2001), 18–25.
7. Morgenthaler, J.D., Gridnev, M., Sauciuc, R., and Bhansali, S. Searching for build debt: Experiences managing technical debt at Google. In Proceedings of the Third International Workshop on Managing Technical Debt (Zürich, Switzerland, June 2-9). IEEE Press, Piscataway, NJ, 2012, 1–6.
8. Ren, G., Tune, E., Moseley, T., Shi, Y., Rus, S., and Hundt, R. Google-wide profiling: A continuous profiling infrastructure for data centers. IEEE Micro 30, 4 (2010), 65–79.
9. Sadowski, C., Stolee, K., and Elbaum, S. How developers search for code: A case study. In Proceedings of the 10th Joint Meeting on Foundations of Software Engineering (Bergamo, Italy, Aug. 30-Sept. 4). ACM Press, New York, 2015, 191–201.
10. Sadowski, C., van Gogh, J., Jaspan, C., Soederberg, E., and Winter, C. Tricorder: Building a program analysis ecosystem. In Proceedings of the 37th International Conference on Software Engineering, Vol. 1 (Firenze, Italy, May 16-24). IEEE Press, Piscataway, NJ, 2015, 598–608.
11. Wasserman, L. Scalable, example-based refactorings with Refaster. In Proceedings of the 2013 ACM Workshop on Refactoring Tools (Indianapolis, IN, Oct. 26-31). ACM Press, New York, 2013, 25–28.
12. Wikipedia. Dependency hell. Accessed Jan. 20, 2015; http://en.wikipedia.org/w/index.php?title=Dependency_hell&oldid=634636715
13. Wikipedia. Filesystem in userspace. Accessed June, 4, 2015; http://en.wikipedia.org/w/index.php?title=Filesystem_in_Userspace&oldid=664776514
14. Wikipedia. Linux kernel. Accessed Jan. 20, 2015; http://en.wikipedia.org/w/index.php?title=Linux_kernel&oldid=643170399
15. Wright, H.K., Jasper, D., Klimek, M., Carruth, C., and Wan, Z. Large-scale automated refactoring using ClangMR. In Proceedings of the IEEE International Conference on Software Maintenance (Eindhoven, The Netherlands, Sept. 22-28). IEEE Press, 2013, 548–551.
Rachel Potvin (firstname.lastname@example.org) is an engineering manager at Google, Mountain View, CA.
Josh Levenberg (email@example.com) is a software engineer at Google, Mountain View, CA.
a. Total size of uncompressed content, excluding release branches.
b. Includes only reviewed and committed code and excludes commits performed by automated systems, as well as commits to release branches, data files, generated files, open source files imported into the repository, and other non-source-code files.
c. Google open sourced a subset of its internal build system; see http://www.bazel.io
d. Over 99% of files stored in Piper are visible to all full-time Google engineers.
f. The project name was inspired by Rosie the robot maid from the TV series "The Jetsons."