
Uncovering Hidden Efficiency

Computers today are slow. Really slow. That might sound like an odd statement considering the incredible ways they build virtual worlds, churn out aggregate statistics across massive spreadsheets in milliseconds, and otherwise perform savant-like feats of mathematical wizardry (and, in all fairness, fast and slow don’t really mean anything without context). Well, consider this: the human brain contains roughly 86 billion neurons, each with an average of about 7,000 synapses constantly passing signals in parallel. What does that work out to in terms of processing power? Unfortunately, there’s no clean way to convert human cognitive function into the processing power of today’s computers.

In his book “The Singularity Is Near,” Ray Kurzweil (a Director of Engineering at Google and one of the world’s top experts on natural language processing) estimated that a human brain is capable of approximately 10 quadrillion floating point operations per second (10 petaflops). On the other end of the spectrum, estimates based on the signalling speed of the brain place the figure around 1,000 petaflops. As of June of this year, the top-ranked supercomputer was capable of an estimated 17.2 petaflops: only about a sixtieth of the upper estimate, but already past Kurzweil’s figure, which may give him cause to change the title of his book. Before worrying about the rise of A.I., though, remember that figure of about 7,000 synapses per neuron? The FLOPS estimates still don’t account for the orders-of-magnitude more time it takes to shuttle signals around a room-sized supercomputer. So, depending on which estimate of human petaflops you run with, the most powerful supercomputer in the world is either comparable to a near-comatose human or to a very senile mouse!

What that means for machine-learning companies like Youneeq is that every bit of processing power counts. The good news is that the 86 billion or so neurons carried by each of our staff are put to good use finding clever shortcuts to our customers’ real-world problems. If we need our software to possess an eidetic memory, we can give it one. If we need it to forget some of what it has learned, we can do that as well (if you’ve worked through the technical side of “Big Data,” chances are you can relate to the challenge of learning how to forget effectively without drifting into senility). Tuning things a bit further, we might reserve blocks of memory to cache the components and results of the most frequently used and computationally expensive calculations, along with other forms of metadata that become more and more symbolic. We can also scale up or down just how much work we do to return a recommendation, search result, or trending-item alert. What gets really exciting (among data analysts, infrastructure gurus, and people who spend way too long away from natural light) is when we can quickly train a computer to figure out which trade-offs will return near-optimal results within a complex problem space. For example, our recommendation service can adapt to increases in network latency within a fraction of a millisecond by scaling back its work, and just as quickly scale back up when network performance improves.
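To make that idea concrete, here is a rough sketch in Python of latency-adaptive sizing combined with result caching. Every name in it (the scoring function, the latency budget, the class itself) is invented for illustration; it is not Youneeq’s actual code, just one simple way the concept can be expressed.

```python
# Illustrative sketch only: adaptive work sizing plus memoized scoring.
# All names and numbers here are assumptions, not Youneeq's real interfaces.

import functools
import time


@functools.lru_cache(maxsize=4096)
def score_candidate(user_segment: str, item_id: int) -> float:
    """Stand-in for an expensive scoring computation; results are memoized."""
    # Pretend this is heavy math; the cache lets repeat lookups skip the work.
    return (hash((user_segment, item_id)) % 1000) / 1000.0


class LatencyAdaptiveRecommender:
    """Scales how many candidates it scores based on recent response latency."""

    def __init__(self, latency_budget_ms: float = 20.0):
        self.latency_budget_ms = latency_budget_ms
        self.ewma_latency_ms = latency_budget_ms / 2  # optimistic starting point

    def recommend(self, user_segment: str, candidate_ids: list[int], top_n: int = 5):
        start = time.perf_counter()

        # Shrink the candidate pool when recent latency is running hot,
        # and let it grow back when the service has headroom.
        pressure = self.ewma_latency_ms / self.latency_budget_ms
        pool_size = max(top_n, int(len(candidate_ids) / max(pressure, 1.0)))

        scored = [
            (score_candidate(user_segment, item_id), item_id)
            for item_id in candidate_ids[:pool_size]
        ]
        scored.sort(reverse=True)

        # Fold this request's latency into the running average.
        elapsed_ms = (time.perf_counter() - start) * 1000
        self.ewma_latency_ms = 0.8 * self.ewma_latency_ms + 0.2 * elapsed_ms

        return [item_id for _, item_id in scored[:top_n]]


if __name__ == "__main__":
    recommender = LatencyAdaptiveRecommender(latency_budget_ms=20.0)
    print(recommender.recommend("sports_fans", list(range(1000)), top_n=5))
```

The design choice being illustrated is simply that the quality/cost trade-off is a tunable input: when latency creeps up, the service does less work per request rather than falling over, and it recovers automatically as conditions improve.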

One of the best-known myths about the brain is that we only use 10% of it, which would be a profound flaw in our biology if it were true (it’s actually very near 100%). Computers, however, are an entirely different matter. Few desktop computers average 10% CPU utilization over the course of a day, and many of the servers powering the plethora of services across the web fare similarly. When a service needs to be available 24/7, infrastructure has to be planned around worst-case scenarios. If you’re running a shopping site, you might have to factor in 100x spikes over average traffic, and deal with them arriving faster than you can spin up extra servers to handle the load (that very use case was part of the origin of Amazon Web Services, as a way to reduce Amazon’s cost of provisioning extra resources). Automating those scaling tasks helps to shorten that gap, but it still means at least minutes of a painfully slow site during a peak revenue window while you wait for the new machines to take up some of the burden. In Youneeq’s case, a spike like that means our algorithms can leverage the volume of traffic to converge on a meaningful result in fewer steps, can determine from the responsiveness of our service whether to perform additional tasks or progressively fewer, and can adjust all of that within fractions of a millisecond.
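As a toy illustration of that last point, the snippet below shows one way a deadline-aware aggregation loop can do as much or as little work as a per-request time budget allows. The function name, budget, and data are all made up for the example rather than taken from our production service.

```python
# Illustrative only: a deadline-aware aggregation loop that does as much
# refinement as a fixed per-request time budget allows. Names and numbers
# are assumptions for the sake of the example.

import time
from collections import Counter


def trending_items(event_batches, budget_ms: float = 2.0, min_batches: int = 1):
    """Aggregate click events batch by batch, stopping early if the budget runs out.

    During a traffic spike each batch carries more signal, so the counts
    converge on a stable ranking in fewer passes; under light load the loop
    simply uses more of its budget.
    """
    deadline = time.perf_counter() + budget_ms / 1000.0
    counts = Counter()

    for processed, batch in enumerate(event_batches, start=1):
        counts.update(batch)
        # Always process a minimum amount of data, then respect the deadline.
        if processed >= min_batches and time.perf_counter() >= deadline:
            break

    return [item for item, _ in counts.most_common(10)]


if __name__ == "__main__":
    # Simulated batches of item-click events.
    batches = [["story_a", "story_b", "story_a"], ["story_c", "story_a"]] * 500
    print(trending_items(batches, budget_ms=2.0))
```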

For our customers, what that means is that we provide powerful analytics, high availability, and compelling features, while running lean on hosting costs for the performance we deliver!
