Technology Stocks: Alphabet Inc. (Google)

From: Frank Sully, 8/15/2021 3:35:26 PM

Demonstrating the Fundamentals of Quantum Error Correction

Wednesday, August 11, 2021

Posted by Jimmy Chen, Quantum Research Scientist and Matt McEwen, Student Researcher, Google Quantum AI

The Google Quantum AI team has been building quantum processors made of superconducting quantum bits (qubits) that have achieved the first beyond-classical computation, as well as the largest quantum chemical simulations to date. However, current generation quantum processors still have high operational error rates — in the range of 10⁻³ per operation, compared to the 10⁻¹² believed to be necessary for a variety of useful algorithms. Bridging this tremendous gap in error rates will require more than just making better qubits — quantum computers of the future will have to use quantum error correction (QEC).

The core idea of QEC is to make a logical qubit by distributing its quantum state across many physical data qubits. When a physical error occurs, one can detect it by repeatedly checking certain properties of the qubits, allowing it to be corrected and preventing any error from occurring on the logical qubit state. While logical errors may still occur if several physical qubits experience errors together, this error rate should decrease exponentially as more physical qubits are added (more physical qubits need to be involved to cause a logical error). This exponential scaling behavior relies on physical qubit errors being sufficiently rare and independent. In particular, it’s important to suppress correlated errors, where one physical error simultaneously affects many qubits or persists over many cycles of error correction. Such correlated errors produce more complex patterns of error detections that are more difficult to correct and more easily cause logical errors.
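
As a concrete illustration of that exponential scaling, here is a minimal sketch (our own toy example, not Google's code) assuming a bit-flip repetition code with perfectly independent physical errors: the code fails only when a majority of its d data qubits flip, which becomes rapidly less likely as d grows.

```python
# Toy model: logical error rate of a distance-d repetition code under
# independent physical bit-flip errors with probability p per qubit.
# The code fails (majority vote is wrong) when ceil(d/2) or more qubits flip.
from math import comb

def logical_error_rate(p: float, d: int) -> float:
    """Probability that a majority of d qubits flip."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

for d in (3, 5, 7, 9, 11):
    print(f"d={d:2d}  logical error ~ {logical_error_rate(1e-3, d):.2e}")
# Each increase in d suppresses the logical error by another large factor,
# but only while physical errors stay rare and uncorrelated.
```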

Our team has recently implemented the ideas of QEC in our Sycamore architecture using quantum repetition codes. These codes consist of one-dimensional chains of qubits that alternate between data qubits, which encode the logical qubit, and measure qubits, which we use to detect errors in the logical state. While these repetition codes can only correct for one kind of quantum error at a time¹, they contain all of the same ingredients as more sophisticated error correction codes and require fewer physical qubits per logical qubit, allowing us to better explore how logical errors decrease as logical qubit size grows.

In “Removing leakage-induced correlated errors in superconducting quantum error correction”, published in Nature Communications, we use these repetition codes to demonstrate a new technique for reducing the amount of correlated errors in our physical qubits. Then, in “Exponential suppression of bit or phase flip errors with repetitive error correction”, published in Nature, we show that the logical errors of these repetition codes are exponentially suppressed as we add more and more physical qubits, consistent with expectations from QEC theory.

Layout of the repetition code (21 qubits, 1D chain) and distance-2 surface code (7 qubits) on the Sycamore device.
Leaky Qubits

The goal of the repetition code is to detect errors on the data qubits without measuring their states directly. It does so by entangling each pair of data qubits with their shared measure qubit in a way that tells us whether those data qubit states are the same or different (i.e., their parity) without telling us the states themselves. We repeat this process over and over in rounds that last only one microsecond. When the measured parities change between rounds, we’ve detected an error.
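
To make the scheme concrete, here is a toy, classical simulation of the detection logic (our own simplification, not the Sycamore control stack): data bits flip at random each round, measure "qubits" report the parity of adjacent pairs, and a detection fires whenever a parity changes between consecutive rounds.

```python
# Toy repetition-code detection: parities of neighboring data bits are
# compared across rounds; a changed parity is an error detection event.
import random

def run_round(data, flip_prob=0.05):
    """Randomly flip data bits, then return parities of adjacent pairs."""
    for i in range(len(data)):
        if random.random() < flip_prob:
            data[i] ^= 1
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

data = [0] * 11              # 11 data qubits encoding logical 0
prev = [0] * 10              # all parities start even
for r in range(5):
    parities = run_round(data)
    detections = [i for i, (a, b) in enumerate(zip(prev, parities)) if a != b]
    print(f"round {r}: detections at measure qubits {detections}")
    prev = parities
```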

However, one key challenge stems from how we make qubits out of superconducting circuits. While a qubit needs only two energy states, which are usually labeled |0⟩ and |1⟩, our devices feature a ladder of energy states, |0⟩, |1⟩, |2⟩, |3⟩, and so on. We use the two lowest energy states to encode our qubit with information to be used for computation (we call these the computational states). We use the higher energy states (|2⟩, |3⟩ and higher) to help achieve high-fidelity entangling operations, but these entangling operations can sometimes allow the qubit to “leak” into these higher states, earning them the name leakage states.

Population in the leakage states builds up as operations are applied, which increases the error of subsequent operations and even causes other nearby qubits to leak as well — resulting in a particularly challenging source of correlated error. In our early 2015 experiments on error correction, we observed that as more rounds of error correction were applied, performance declined as leakage began to build.

Mitigating the impact of leakage required us to develop a new kind of qubit operation that could “empty out” leakage states, called multi-level reset. We manipulate the qubit to rapidly pump energy out into the structures used for readout, where it will quickly move off the chip, leaving the qubit cooled to the |0⟩ state, even if it started in |2⟩ or |3⟩. Applying this operation to the data qubits would destroy the logical state we’re trying to protect, but we can apply it to the measure qubits without disturbing the data qubits. Resetting the measure qubits at the end of every round dynamically stabilizes the device so leakage doesn’t continue to grow and spread, allowing our devices to behave more like ideal qubits.

Applying the multi-level reset gate to the measure qubits almost totally removes leakage, while also reducing the growth of leakage on the data qubits.
Exponential Suppression

Having mitigated leakage as a significant source of correlated error, we next set out to test whether the repetition codes give us the predicted exponential reduction in error when increasing the number of qubits. Every time we run our repetition code, it produces a collection of error detections. Because the detections are linked to pairs of qubits rather than individual qubits, we have to look at all of the detections to try to piece together where the errors have occurred, a procedure known as decoding. Once we’ve decoded the errors, we then know which corrections we need to apply to the data qubits. However, decoding can fail if there are too many error detections for the number of data qubits used, resulting in a logical error.
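
For a repetition code, the simplest decoder one could write is a majority vote over the data qubits; the experiments use a more sophisticated decoder that matches up pairs of detection events, but a vote conveys the basic idea. A hypothetical sketch:

```python
# Minimal repetition-code decoding: the logical bit is taken to be the
# value held by the majority of data qubits. Decoding fails (a logical
# error) when physical errors have flipped more than half of them.
from collections import Counter

def majority_vote(data_bits):
    """Return the most common bit value among the data qubits."""
    return Counter(data_bits).most_common(1)[0][0]

print(majority_vote([0, 0, 1, 0, 0]))  # one flip -> still decodes to 0
print(majority_vote([1, 0, 1, 1, 0]))  # three flips -> logical error (1)
```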

To test our repetition codes, we run codes with sizes ranging from 5 to 21 qubits while also varying the number of error correction rounds. We also run two different types of repetition codes — either a phase-flip code or bit-flip code — that are sensitive to different kinds of quantum errors. By finding the logical error probability as a function of the number of rounds, we can fit a logical error rate for each code size and code type. In our data, we see that the logical error rate does in fact get suppressed exponentially as the code size is increased.

Probability of getting a logical error after decoding versus number of rounds run, shown for various sizes of phase-flip repetition code.
We can quantify the error suppression with the error scaling parameter Lambda (Λ), where a Lambda value of 2 means that we halve the logical error rate every time we add four data qubits to the repetition code. In our experiments, we find Lambda values of 3.18 for the phase-flip code and 2.99 for the bit-flip code. We can compare these experimental values to a numerical simulation of the expected Lambda based on a simple error model with no correlated errors, which predicts values of 3.34 and 3.78 for the bit- and phase-flip codes respectively.
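
As a sketch of how such a Lambda might be extracted (with made-up placeholder rates, not the paper's data), one can fit a line to the log of the logical error rate versus code distance; the slope gives Lambda:

```python
# Fit error(d) ~ C / Lambda**(d/2), i.e. log(error) is linear in d.
# The error rates below are hypothetical placeholders.
import numpy as np

distances = np.array([3, 5, 7, 9, 11])
error_rates = np.array([3e-2, 1e-2, 3.3e-3, 1.1e-3, 3.7e-4])

slope, intercept = np.polyfit(distances, np.log(error_rates), 1)
lam = np.exp(-2 * slope)              # slope = -log(Lambda) / 2
print(f"fitted Lambda ~ {lam:.2f}")   # ~3 for these placeholder numbers
```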

Logical error rate per round versus number of qubits for the phase-flip (X) and bit-flip (Z) repetition codes. The line shows an exponential decay fit, and Λ is the scale factor for the exponential decay.
This is the first time Lambda has been measured in any platform while performing multiple rounds of error detection. We’re especially excited about how close the experimental and simulated Lambda values are, because it means that our system can be described with a fairly simple error model without many unexpected errors occurring. Nevertheless, the agreement is not perfect, indicating that there’s more research to be done in understanding the non-idealities of our QEC architecture, including additional sources of correlated errors.

What’s Next

This work demonstrates two important prerequisites for QEC: first, the Sycamore device can run many rounds of error correction without building up errors over time thanks to our new reset protocol, and second, we were able to validate QEC theory and error models by showing exponential suppression of error in a repetition code. These experiments were the largest stress test of a QEC system yet, using 1000 entangling gates and 500 qubit measurements in our largest test. We’re looking forward to taking what we learned from these experiments and applying it to our target QEC architecture, the 2D surface code, which will require even more qubits with even better performance.

¹A true quantum error correcting code would require a two-dimensional array of qubits in order to correct for all of the errors that could occur.

ai.googleblog.com



From: Frank Sully, 8/16/2021 12:06:09 PM

The Nuro EC-1

Kirsten Korosec, Mark Harris

11:03 AM EDT • August 16, 2021


Image Credits: Nigel Sussman

Six years ago, I sat in the Google self-driving project’s Firefly vehicle — which I described, at the time, as a “little gumdrop on wheels” — and let it ferry me around a closed course in Mountain View, California.

Little did I know that two of the people behind Firefly’s ability to see and perceive the world around it and react to that information would soon leave to start and steer an autonomous vehicle company of their very own.

Dave Ferguson and Jiajun Zhu aren’t the only Google self-driving project employees to launch an AV startup, but they might be the most underrated. Their company, Nuro, is valued at $5 billion and has high-profile partnerships with leaders in retail, logistics and food including FedEx, Domino’s and Walmart. And, they seem to have navigated the regulatory obstacle course with success — at least so far.

Yet, Nuro has remained largely in the shadows of other autonomous vehicle companies. Perhaps it’s because Nuro’s focus on autonomous delivery hasn’t captured the imagination of a general public that envisions themselves being whisked away in a robotaxi. Or it might be that they’re quieter.

Those quiet days might be coming to an end soon.

This series aims to look under Nuro’s hood, so to speak, from its earliest days as a startup to where it might be headed next — and with whom.

The lead writer of this EC-1 is Mark Harris, a freelance reporter known for investigative and long-form articles on science and technology. Our resident scoop machine, Harris is based in Seattle and also writes for Wired, The Guardian, The Economist, MIT Technology Review and Scientific American. He has broken stories about self-driving vehicles, giant airships, AI body scanners, faulty defibrillators and monkey-powered robots. In 2014, he was a Knight Science Journalism Fellow at MIT, and in 2015 he won the AAAS Kavli Science Journalism Gold Award.

The lead editor of this EC-1 was Kirsten Korosec, transportation editor at TechCrunch (that’s me), who has been writing about autonomous vehicles and the people behind them since 2014; OK maybe earlier. The assistant editor for this series was Ram Iyer, the copy editor was Richard Dal Porto, and illustrations were drawn by Nigel Sussman. The EC-1 series editor is Danny Crichton.

The Nuro EC-1 comprises four articles numbering 10,600 words and a reading time of 43 minutes. Here are the topics we’ll be dialing into:

Part 1: Origin story “How Google’s self-driving car project accidentally spawned its robotic delivery rival” (3,200 words/13 minutes) — explores the early days of Nuro and its founders and how they realized that the most useful robot they could ever build would be an autonomous delivery vehicle.

Part 2: Regulations “Why regulators love Nuro’s self-driving delivery vehicles” (2,400 words/10 minutes) — analyzes how Nuro won over federal regulators and navigated a patchwork of policies to be able to test and deploy its autonomous delivery vehicles in several states.

Part 3: Partnerships “How Nuro became the robotic face of Domino’s” (2,500 words/10 minutes) — digs into Nuro’s relationship with Domino’s as well as its trials with other companies including fast casual restaurant chain Chipotle, Kroger grocery stores and CVS pharmacies, and how those relationships have affected the design and functions of the R2 bot.

Part 4: Operations “Here’s what the inevitable friendly neighborhood robot invasion looks like” (2,500 words/10 minutes) — examines where Nuro has set up shop, and more importantly, how it goes about picking locations and learning from its experiences.

We’re always iterating on the EC-1 format. If you have questions, comments or ideas, please send an email to TechCrunch Managing Editor Danny Crichton at danny@techcrunch.com.

techcrunch.com

How Google’s self-driving car project accidentally spawned its robotic delivery rival

Nuro EC-1 Part 1: Origin story

Mark Harris @meharris / 11:03 AM EDT • August 16, 2021


Image Credits: Nigel Sussman

Nuro doesn’t have a typical Silicon Valley origin story. It didn’t emerge after a long, slow slog from a suburban garage or through a flash of insight in a university laboratory. Nor was it founded at the behest of an eccentric billionaire with money to burn.

Nuro was born — and ramped up quickly — thanks to a cash windfall from what is now one of its biggest rivals.

In the spring of 2016, Dave Ferguson and Jiajun Zhu were teammates on Google’s self-driving car effort. Ferguson was directing the project’s computer vision, machine learning and behavior prediction teams, while Zhu (widely known as JZ) was in charge of the car’s perception technologies and cutting-edge simulators.

“We both were leading pretty large teams and were responsible for a pretty large portion of the Google car’s software system,” Zhu recalls.

As Google prepared to spin out its autonomous car tech into the company that would become Waymo, it first needed to settle a bonus program devised in the earliest days of its so-called Chauffeur project. Under the scheme, early team members could choose staggered payouts over a period of eight years — or leave Google and get a lump sum all at once.

Ferguson and Zhu would not confirm the amount they received, but court filings released as part of Waymo’s trade secrets case against Uber suggest they each received payouts in the neighborhood of $40 million by choosing to leave.

“What we were fortunate enough to receive as part of the self-driving car project enabled us to take riskier opportunities, to go and try to build something that had a significant chance of not working out at all,” Ferguson says.

Within weeks of their departure, the two had incorporated Nuro Inc., a company with the non-ironic mission to “better everyday life through robotics.” Its first product aimed to take a unique approach to self-driving cars: road vehicles with all of the technical sophistication and software smarts of Google’s robotaxis, but none of the passengers.

In the five years since, Nuro’s home delivery robots have proven themselves smart, safe and nimble, outpacing Google’s vehicles to secure the first commercial deployment permit for autonomous vehicles in California, as well as groundbreaking concessions from the U.S. government.

While robotaxi companies struggle with technical hitches and regulatory red tape, Nuro has already made thousands of robotic pizza and grocery deliveries across the U.S., and Ferguson (as president) and Zhu (as CEO) now head a company of more than 1,000 employees that its last funding round, in November 2020, valued at $5 billion.

But how did they get there so fast, and where are they headed next?

Turning money into robots

“Neither JZ nor I think of ourselves as classic entrepreneurs or that starting a company is something we had to do in our lives,” Ferguson says. “It was much more the result of soul searching and trying to figure out what is the biggest possible impact that we could have.”

Subscription required for remainder of article.

techcrunch.com

Why regulators love Nuro’s self-driving delivery vehicles

Nuro EC-1 Part 2: Regulations

Mark Harris @meharris / 11:03 AM EDT • August 16, 2021


Image Credits: Nigel Sussman

Nuro’s autonomous delivery vehicles (AVs) don’t have a human driver on board. Company founders Dave Ferguson (president) and Jiajun Zhu (CEO) envisioned a driverless delivery vehicle that did away with a lot of the stuff that is essential for a normal car to have, like doors and airbags and even a steering wheel. They built an AV whose narrow chassis spared no room for a driver’s seat and had no need for an accelerator, windshield or brake pedals.

So when the company petitioned the U.S. government in 2018 for a minor exemption from rules requiring a rearview mirror, backup camera and a windshield, Nuro might have assumed the process wouldn’t be very arduous.

They were wrong.

In a 2019 letter to the U.S. Department of Transportation, the American Association of Motor Vehicle Administrators (AAMVA) “[wondered] about the description of pedestrian ‘crumple zones,’ and whether this may impact the vehicle’s crash-worthiness in the event of a vehicle-to-vehicle crash. Even in the absence of passengers, AAMVA has concerns about cargo ejection from the vehicle and how Nuro envisions protections from loose loads affecting the driving public.”

The National Society of Professional Engineers similarly complained that Nuro’s request lacked information about the detection of moving objects. “How would the R2X function if a small child darts onto the road from the passenger side of the vehicle as a school bus is approaching from the driver’s side?” it asked. It also recommended the petition be denied until Nuro could provide a more detailed cybersecurity plan against its bots being hacked or hijacked. (R2X is now referred to as R2.)

The Alliance of Automobile Manufacturers (now the Alliance for Automotive Innovation), which represents most U.S. carmakers, wrote that the National Highway Traffic Safety Administration (NHTSA) should not use Nuro’s kind of petition to “introduce new safety requirements for [AVs] that have not gone through the rigorous rule-making process.”

“What you can see is that many comments came from entrenched interests,” said David Estrada, Nuro’s chief legal and policy officer. “And that’s understandable. There are multibillion dollar industries that can be disrupted if autonomous vehicles become successful.”

To be fair, critical comments also came from nonprofit organizations genuinely concerned about unleashing robots on city streets. The Center for Auto Safety, an independent consumer group, thought that Nuro did not provide enough information on its development and testing, nor any meaningful comparison with the safety of similar, human-driven vehicles. “Indeed, the planned reliance on ‘early on-road tests … with human-manned professional safety drivers’ suggests that Nuro has limited confidence in R2X’s safe operation,” it wrote.

Nuro’s R2 delivery autonomous vehicle. Image Credits: Nuro

Despite such concerns, NHTSA granted Nuro the exemptions it sought in February last year: up to 5,000 R2 vehicles could be produced without a windshield, rearview mirror or backup camera for a limited period of two years, subject to Nuro reporting any incidents. Although only a small concession, it was the first — and so far, only — time the U.S. government had relaxed vehicle safety requirements for an AV.

Now Estrada and Nuro hope to use that momentum to chip away at a mountain of regulations that never envisaged vehicles controlled by on-board robots or distant humans, extending from the foothills of local and state government to the peaks of federal and international safety rules.

If Nuro is to become the generation-defining company its founders desire, it will be due as much to innovation in regulation as advances in the technology it develops.

Regulate for success

“I don’t think any of the credible, big AV players want this to be a free-for-all,” said Dave Ferguson, Nuro’s co-founder and president. “We need the confidence of a clear regulatory framework to invest the hundreds of millions or billions of dollars necessary to manufacture vehicles at scale. Otherwise, it’s really going to limit our ability to deploy.”

Subscription required for remainder of article.

techcrunch.com

How Nuro became the robotic face of Domino’s

Nuro EC-1 Part 3: Partnerships

Mark Harris @meharris / 11:02 AM EDT • August 16, 2021


Image Credits: Nigel Sussman

Pandemic pizza was definitely a thing.

U.S. consumers forked out a record-breaking $14 billion to have pizza delivered to their doors in 2020, and nearly half of that total was spent with just one brand: Domino’s.

“Domino’s is the home of pizza delivery,” said Dennis Maloney, Domino’s chief innovation officer. “Delivery is at the core of who we are, so it’s very important for us to lead when it comes to the consumer experience of delivery.”

In its latest TV ad, an order of Domino’s pizza speeds to its destination inside a Nuro R2X delivery autonomous vehicle (AV). The R2X (now known as R2) deftly avoids potholes, falling trees and traffic jams caused by The Noid — a character created by Domino’s in the 1980s to symbolize the difficulties of delivering a pizza in 30 minutes or less.

The reality is much more sedate. Domino’s currently has just one R2X that operates from a single Domino’s store on the generally calm streets of Woodland Heights in Houston, Texas. And since the AV’s introduction in April, The Noid has yet to put in an appearance.

“The R2X adds a bunch of efficiencies while not taking away from any existing capabilities,” Maloney said. “As we start getting the bot into regular operation, we’ll see if it plays out the way we expect it to. So far, all the indications are good.”

Nuro and Domino’s launched the autonomous pizza delivery service in Houston in April this year. Image Credits: Nuro

Partnerships are key for Nuro. The company’s business model is to sign contracts with established brands that either have their own branded vehicles or use traditional delivery companies like UPS or the U.S. Postal Service.

Nuro is carrying out trials and pilot deliveries with a number of companies, including fast casual restaurant chain Chipotle, Kroger grocery stores, CVS pharmacies, bricks-and-mortar retail behemoth Walmart, and, most recently, global parcel courier FedEx. While it is a dizzyingly impressive list for a company less than five years old, their interest was driven as much by global trends as by Nuro’s technology, admits Cosimo Leipold, head of partnerships at Nuro.

“Everybody today wants what they want and they want it faster than ever, but frankly they’re not willing to pay for it,” Leipold said. “We’ve reached a point where almost every company is going to have to offer delivery services, and now it’s just the question of how they’ll do it in the best possible way and with the most possible control.”

Nuro’s delivery AVs — aka bots — offer the tantalizing promise of safe, reliable and efficient delivery without sacrificing revenue and customer data to third-party platforms like Grubhub, DoorDash or Instacart. Alongside Nuro’s stated aim of driving the cost of delivery down to zero, it is little surprise that Nuro now finds itself in the enviable position of being able to pick and choose the partners it wants — and the less enviable position of having to choose which partner to prioritize.

Here’s the story of how one of Nuro’s biggest partnerships came to be, and the lessons and companies that will drive its future growth.

Deliveries with extra cheese

Domino’s has a long history of innovating in delivery, usually accompanied by a strong marketing campaign. In the 1980s, the company bought 10 customized Tritan Aerocar 2s, Jetsons-styled three-wheelers, for use as delivery vehicles.

Subscription required for remainder of article.

techcrunch.com

Here’s what the inevitable friendly neighborhood robot invasion looks like

Nuro EC-1 Part 4: Operations

Mark Harris @meharris / 11:02 AM EDT • August 16, 2021


Image Credits: Nigel Sussman

In early 2021, a Nuro autonomous delivery vehicle pulled to a halt at a four-way stop in its hometown of Mountain View, California, to let another road user cross. The seemingly humdrum moment quickly turned into a decidedly science-fiction storyline: the other road user was a small sidewalk robot from another startup, out on a mission of its own.

“Obviously, we yielded to it, but it was, wow, we have entered a different world,” said Amy Jones Satrom, head of operations at Nuro.

Mountain View is home to competitor Waymo and other autonomous vehicle testing activity. But for those who want to take part in that science fiction scene, Houston provides the full experience.

Waymo is testing self-driving trucks in Houston, and a fully driverless shuttle service is due to start public service there early next year. Nuro’s Texas effort started in April, when an R2 robot began its commercial pizza delivery service in partnership with Domino’s. Some customers ordering pizzas from the Domino’s Woodland Heights store will see the option to have their pies delivered by robot.

Customers can trace the progress of the self-driving vehicle on the Domino’s app and, when it pulls up outside their home, tap in a unique PIN on its touchscreen to access their order. Nuro is also operating in Houston with Kroger supermarkets and FedEx.

Nuro’s team on a test track during early validation in Arizona, before its first-ever public road deployment. Image credit: Nuro

“One of the things we laugh about is how customers constantly talk to the bot,” said Dennis Maloney, Domino’s chief innovation officer. “It’s almost like they think it’s ‘Knight Rider.’ It’s very common for customers to thank it or say goodbye, which is great because that indicates we’re creating an engaging experience that they’re not frustrated by.”

Creating an experience, where people want to chat with their new robot neighbors instead of chasing them down the street with pitchforks, falls to Jones Satrom’s operations team. It has to delicately balance speed, safety, convenience and congestion, even as Nuro embarks on a growth spurt that will see robots spreading to other cities, states and partners in the months ahead.

Here’s how it manages that, and what the future holds for Nuro’s ever-so-gentle robot invasion.

Mapping the territory

Few people are as well suited to overseeing Nuro’s high-stakes robot rollout as Jones Satrom, who started her career as a nuclear engineer on an aircraft carrier and previously managed the integration of Kiva Systems’ robots into Amazon’s warehouses.

Subscription required for remainder of article.

techcrunch.com



From: Frank Sully, 8/16/2021 12:26:56 PM

CNR in Milan develops supercomputer that beats Google

Won challenge with AI and deep learning

(ANSAmed) - MILAN, AUGUST 16 - A team of researchers coordinated by Enrico Prati of the Institute for Photonics and Nanotechnologies (IFN) at the Italian National Research Council (CNR) in Milan has developed a deep-learning quantum compiler that beats Google’s competing approach, in a study published in the Nature Research journal Communications Physics.

Applying artificial intelligence and deep learning to the compiler opened the way for programming an algorithm that adapts to any quantum computer based on logic gates.

The result was obtained with the collaboration of Matteo Paris of the University of Milan and Marcello Restelli of Milan Polytechnic.

"Similar to conventional computers, in which bits are subjected to calculations through logic gates, in quantum computers it is necessary to use quantum logic gates, which, however, must be programmed by a sort of operating system that knows which operations can be carried out," Prati said in the study.

"However, there are many different versions of hardware that provide different achievable operations, like a small deck of playing cards to choose from," he said.

Lorenzo Moro of CNR said the team therefore used deep learning to develop a compiler able to find the right order "for playing the five to six cards available, including with sequences hundreds of plays long, choosing one by one the right ones to form the entire sequence".
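
To make the "playing cards" analogy concrete, here is a toy version of the quantum compiling problem (purely illustrative, using brute-force search; the CNR work replaces the search with a trained deep-learning policy): approximate a target single-qubit gate as a sequence drawn from a small fixed gate set.

```python
# Toy quantum compiling: find a short sequence over the gate set {H, T}
# whose product approximates a target unitary, ignoring global phase.
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # T gate
gates = {"H": H, "T": T}

def distance(U, V):
    """Phase-insensitive distance between two 2x2 unitaries."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

def product(seq):
    """Multiply out a gate sequence."""
    out = np.eye(2, dtype=complex)
    for g in seq:
        out = out @ gates[g]
    return out

target = np.array([[1, 0], [0, 1j]])                  # S gate (equals T·T)
best = min(
    (seq for n in range(1, 5) for seq in itertools.product("HT", repeat=n)),
    key=lambda seq: distance(target, product(seq)),
)
print("best sequence:", "".join(best))                # prints a sequence like 'TT'
```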

"After a training phase, which goes from a few hours to a couple days, the artificial intelligence learns how to build new pieces for every quantum logic gate, starting from the available operations, but taking just a few milliseconds," he said.

The CNR Italy research has been patented.

"Our model surpasses a similar patent by Google, which uses artificial intelligence after training but only for one logic gate at a time, after which it needs a new training".

The researchers in this study discovered how to build all the quantum logic gates with only one training, after which the solution can immediately be recalled for any logic gate, in what is known as deep learning.

Google recently inaugurated its Quantum AI Campus for the development of quantum computers in Santa Barbara, California.

Eric Lucero, the lead engineer at Google Quantum AI, explained at the inauguration how quantum computing will be necessary in the coming years.

"Looking ahead 10 years, many of the biggest global challenges, from climate change to the management of the next pandemic, will require a new type of computing," he said. (ANSAmed).


ALL RIGHTS RESERVED © Copyright ANSA

ansa.it



From: Frank Sully, 8/16/2021 3:32:48 PM

What is a Machine Learning Model?

Fueled by data, ML models are the mathematical engines of AI, expressions of algorithms that find patterns and make predictions faster than a human can.

August 16, 2021
by CHRIS PARSONS

When you shop for a car, the first question is what model — a Honda Civic for low-cost commuting, a Chevy Corvette for looking good and moving fast, or maybe a Ford F-150 to tote heavy loads.

For the journey to AI, the most transformational technology of our time, the engine you need is a machine learning model.

What Is an ML Model?

A machine learning model is an expression of an algorithm that combs through mountains of data to find patterns or make predictions. Fueled by data, ML models are the mathematical engines of AI.

For example, an ML model for computer vision might be able to identify cars and pedestrians in a real-time video. One for natural language processing might translate words and sentences.

Under the hood, a model is a mathematical representation of objects and their relationships to each other. The objects can be anything from “likes” on a social networking post to molecules in a lab experiment.

ML Models for Every Purpose

With no constraints on the objects that can become features in an ML model, there’s no limit to the uses for AI. The combinations are infinite.

Data scientists have created whole families of machine learning models for different uses, and more are in the works.

A Brief Taxonomy of ML Models (table: ML model types and their use cases)
For instance, linear models use algebra to predict relationships between variables in financial projections. Graphical models express as diagrams a probability, such as whether a consumer will choose to buy a product. Borrowing the metaphor of branches, some ML models take the form of decision trees or groups of them called random forests.
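
As a generic illustration of two of these families (our own example, not from the article), the scikit-learn snippet below fits a linear model and a random forest on the same synthetic data:

```python
# Fit a linear model and a random forest on a synthetic classification
# task and compare held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
```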

In the Big Bang of AI in 2012, researchers found deep learning to be one of the most successful techniques for finding patterns and making predictions. It uses a kind of machine learning model called a neural network because it was inspired by the patterns and functions of brain cells.

An ML Model for the Masses

Deep learning took its name from the structure of its machine learning models. They stack layer upon layer of features and their relationships, forming a mathematical hero sandwich.

Thanks to their uncanny accuracy in finding patterns, two kinds of deep learning models, described in a separate explainer, are appearing everywhere.

Convolutional neural networks (CNNs), often used in computer vision, act like eyes in autonomous vehicles and can help spot diseases in medical imaging. Recurrent neural networks (RNNs) and transformers, tuned to analyze spoken and written language, are the engines of Amazon’s Alexa, Google’s Assistant and Apple’s Siri.
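
For a feel of what such a model looks like in code, here is a bare-bones convolutional classifier in PyTorch (an illustrative sketch, unrelated to any product named above):

```python
# A tiny CNN: two conv/ReLU stages, global average pooling, linear head.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pool
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 32, 32))     # batch of 4 RGB images
print(logits.shape)                               # torch.Size([4, 10])
```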

Deep learning neural networks got their name from their multilayered structure.

Pssssst, Pick a Pretrained Model

Choosing the right family of models — like a CNN, RNN or transformer — is a great beginning. But that’s just the start.

If you want to ride the Baja 500, you can modify a stock dune buggy with heavy duty shocks and rugged tires, or you can shop for a vehicle built for that race.

In machine learning, that’s what’s called a pretrained model. It’s tuned on large sets of training data that are similar to data in your use case. Data relationships — called weights and biases — are optimized for the intended application.
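
In code, the pretrained-model workflow often looks like the torchvision sketch below (one model zoo among many; the NGC catalog discussed later is similar in spirit): load weights trained on a large dataset, freeze them, and attach a new head for your own task. This assumes a recent torchvision release.

```python
# Load a pretrained ResNet, freeze its learned weights and biases,
# and swap in a fresh classification head for a hypothetical 5-class task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # keep pretrained weights fixed
model.fc = nn.Linear(model.fc.in_features, 5)     # new head, trains from scratch

# Only the new head's parameters now require gradients, which makes
# fine-tuning far cheaper than training the whole network.
```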

It takes an enormous dataset, a lot of AI expertise and significant compute muscle to train a model. Savvy buyers shop for pretrained models to save time and money.

Who Ya Gonna Call?

When you’re shopping for a pretrained model, find a dealer you can trust.

NVIDIA puts its name behind an online library called the NGC catalog that’s filled with vetted, pretrained models. They span the spectrum of AI jobs, from computer vision to conversational AI and more.

Users know what they’re getting because models in the catalog come with résumés. They’re like the credentials of a prospective hire.

Model résumés show you the domain the model was trained for, the dataset that trained it, and how it’s expected to perform. They provide transparency and confidence that you’re picking the right model for your use case.

More Resources for ML Models

What’s more, NGC models are ready for transfer learning. That’s the one final tune-up that torques models for the exact road conditions over which they’ll ride — your application’s data.

NVIDIA even provides the wrench to tune your NGC model. It’s called TAO and you can sign up for early access to it today.

To learn more, check out:

blogs.nvidia.com



From: Frank Sully, 8/16/2021 4:06:15 PM

Open IoT Platform Market Size is Set to Reach USD 5.96 Billion by 2027 at 15.5% CAGR - Report by Market Research Future (MRFR)

Globe Newswire
August 16, 2021 2:09 PM

New York, US, Aug. 16, 2021 (GLOBE NEWSWIRE) --

Market Overview:

According to a comprehensive research report by Market Research Future (MRFR), "Global Open IoT Platform Market information by Type, Application and Region – forecast to 2027" the market is estimated to reach up to USD 5.96 Billion with a CAGR of 15.5% by Forecast 2027.

Open IoT Platform Market Scope:

Internet of things (IoT) refers to the interconnectivity of devices through the internet and their control through a common framework. The IoT platform encompasses middleware, devices, and sensors while adhering to lightweight protocols to increase the number of devices it can control. Open IoT platforms are meant to lower the cost of centralizing devices through the cloud or on-premises amid the rapid rise of industrial automation.

Dominant Key Players on Open IoT Platform Market Covered Are:
  • Wipro Ltd.
  • Bosch Software Innovations GmbH
  • Intel Corporation
  • Siemens
  • Oracle Corporation
  • Google Inc.
  • SAP SE
  • Cisco Systems Inc.
  • Huawei Technologies Co. Ltd.
  • PTC Inc.
  • Amazon.com Inc.
  • Microsoft Corporations
Market Drivers:

The global Open IoT Platform market is driven by demand for centralized monitoring of devices and the penetration of high-speed internet. Development of IoT platforms that provide solutions for connected applications and smart products and help companies scale their capacity can bode well for the market. For instance, the Kaa IoT platform allows companies to seamlessly connect devices and increase interoperability. It enhances data management while reducing the cost of services and risks. The shift to cloud-based platforms coupled with new protocols like IPv6 can push market demand significantly.

But privacy concerns and increased instances of cybercrimes in the IoT space can deter market growth.

Segmentation of Market Covered in the Research:

By deployment, the open IoT platform market is divided into on-premise and cloud. The cloud deployment can gain a large market share in the coming years owing to its flexible nature and ability to scale operations easily. New standards developed by open source foundations to assist cloud deployments can bolster the market growth.

By component, it is segmented into services, hardware, and software. The software component is expected to gain a large market share owing to the open-source nature of code and freedom to modify codes according to the specific application. Real-time insights on operations and applications can drive the segment demand significantly.

By size, it is segmented into small and medium enterprises and large enterprises.

By industry, the open IoT platform market is segmented into healthcare, retail, manufacturing, and automotive. The retail industry is expected to be the biggest end-user of the market owing to efforts taken by large chains to entice customers. Customer purchasing behavior and new ways to discern the performance of products can drive the demand of open source IoT platform in the industry.

Major applications in the open IoT platform market are processing and application, database management, device management, and others.

Regional Analysis

The open IoT platform market covers regions of North America, Europe, Asia Pacific (APAC), and Rest-of-the-World (RoW).

North America is touted to dominate the global market owing to high adoption of the latest technologies and integration of IoT to expedite breakthroughs in artificial intelligence and machine learning. Business models centered around IoT platforms such as shared scooters, smart elevators, household robots, and other products can bode well for the open IoT platform market. The U.S. is the biggest contributor to the region owing to the large number of financial institutions embracing IoT platforms.

APAC, on the other hand, is expected to display a strong growth rate over the forecast period owing to the rise of industrial automation. Adoption of smart devices and the presence of reputed companies such as Samsung and Ericsson can bolster market demand in the region. Partnerships and collaborations are likely to be witnessed in the region as digital transformation takes center stage in companies’ plans to extend their expertise and bridge the gap between operational technology and information technology.

COVID-19 Impact on the Global Open IoT Platform Market

The COVID-19 pandemic has had a negligible impact on the open IoT platform market. Restrictions on coming to the office and compliance with work-from-home policies have encouraged more innovation in the industry. Many industries hesitant about automation have opted for open source IoT platforms owing to their low cost compared to proprietary counterparts. A new platform, CIoTIVID, created by scientists to gather health data through various devices, can be used in tackling future cases or new variants of the virus. Policymakers, individuals, and clinics can benefit from its features for tracing individuals and taking adequate measures in the initial stages. Collaboration with frontline healthcare workers and subject matter experts through IoT platforms to accelerate findings about the virus and drive vaccine development is likely to be seen in the market.

Industry Trends

Development of new standards and protocols to ensure the entry of new platform creators in the market can bolster innovation in endpoints, sensors, software, and applications. Recently, the Fido Alliance has developed an open standard for IoT devices to connect with cloud and on-premise data management platforms. The protocol can assist device manufacturers with data configurations for making it easy to welcome first-time users and connect to the prospective IoT platform.

The Mozilla Foundation has matured its IoT platform with the help of developer and maker communities. The platform assists developers in building their own devices through software components available on its network. Additionally, the WebThings Gateway facilitates smart home owners to interconnect the devices in their home through a common platform and collect data for further improvement.

About Market Research Future:

Market Research Future (MRFR) is a global market research company that takes pride in its services, offering a complete and accurate analysis regarding diverse markets and consumers worldwide. Market Research Future has the distinguished objective of providing the optimal quality research and granular research to clients. Our market research studies by products, services, technologies, applications, end users, and market players for global, regional, and country level market segments, enable our clients to see more, know more, and do more, which help answer your most important questions.

benzinga.com



From: Frank Sully, 8/16/2021 5:31:49 PM

How Conversational AI Works and What It Does



By Scott Clark

Aug 16, 2021


PHOTO: SHUTTERSTOCK

Conversational AI, which uses Natural Language Processing (NLP), Automatic Speech Recognition (ASR), advanced dialog management, and Machine Learning (ML), is far more likely than a traditional chatbot to pass the Turing Test and provide a realistic experience. Most of us have had interactions on websites with chatbots that were less than satisfactory, leaving us to resolve our issues some other way. Today’s AI-based chatbots are able to have full-blown conversations that leave people feeling like they just finished a conversation with a living person.

According to a report from MIT Technology Review, nearly 90% of those polled reported measurable improvements in the resolution speed for complaints, and over 80% reported enhanced call volume processing using AI. 80% also reported measurable improvements in service delivery, customer satisfaction, and contact center performance. A report from Robert Half on the Future of Work revealed that 39% of IT leaders are currently using AI or machine learning, and 33% indicated that they plan to use AI within the next three years.

Chatbots and AI Have Evolved

The chatbots we have all interacted with on websites are just one example of “old school” chatbots. Another is the interactive voice response (IVR) call-routing systems we have all been forced to use when we call a doctor’s office or another business with several departments. Most of us do whatever we have to in order to route the call to a real person. We do that because those types of chatbots are largely inefficient, tedious, repetitive, and slow.

Those legacy chatbots are only useful for getting basic, predictable information out, such as the hours a business is open, an address, or a website domain. Anything beyond that is almost painful for the user to go through.

Conversational AI chatbots have the ability to be predictive and highly personalized, with more complex, fluid responses that are very similar to human decision-making. Aside from having access to a customer’s previous interactions with a brand through a CRM or CDP, conversational AI is able to observe user-specific traits (location, age, mood, gender), learn conversational styles from past conversations, and take actions using tools such as Robotic Process Automation (RPA).

According to Chris Radanovic, a conversational AI expert at LivePerson, conversational AI can help consumers connect with brands directly in the channels they use the most. “Intelligent virtual concierges and bots instantly greet them, answer their questions and carry out transactions, and if needed, connect them to agents with all of the contextual data they’ve collected over the course of the conversation,” he said.

Conversational AI is a key for many brands who wish to improve the customer experience. Radanovic explained that consumers and brands are embracing conversational AI because it can be used to provide personalized experiences that are quicker and more convenient than traditional ways of interacting with brands. ”Think waiting on hold for a phone call or clicking through tons of pages to find the right info. Along with a more personalized experience, AI can also help to eliminate the pain points in the customer journey.”

Types of Conversational AI

Conversational AI is typically used in two ways: actively and passively. It is used actively during communications between humans and machines, and passively when it observes communications between one human and another.

Digital personal assistants, such as Alexa, Siri, and Google Assistant are an example of the active use of conversational AI. Digital customer assistants are another example of active conversational AI, and they can be found on business websites, built into apps, and used for ordering food or responding to customer service tickets. Finally, digital employee assistants allow employees to quickly access customer information during customer service calls, and are also used to obtain vital information during conferences and meetings, or perform tasks that would otherwise require interaction with another employee.

The sheer speed at which conversational AI and machine learning operate makes them very effective at making decisions based on actionable data, said Erik Duffield, GM of Deloitte Digital’s Experience Management practice. He thinks the competitive ground in digital experiences has moved to a massive number of small decisions and interactions. “We are now seeing digital experiences shift from human to machine interactions, with AI and NLP enabling companies to execute their strategies at the speed and volume required to deliver the experiences that are expected by customers,” said Duffield.

Some great examples of conversational AI and what it could eventually become can be found in science-fiction. Though we are still decades away from HAL in the movie 2001: A Space Odyssey, or Jarvis from Iron Man, it is conversational AI that makes those characters so believable.

Conversational AI Uses Predictive Analytics To Make Decisions

Predictive analytics is defined as the use of data, statistical algorithms and machine learning to discover the likelihood of future outcomes using historical data and statistical modeling. Conversational AI uses predictive analytics to determine the next “best step” in the customer or employee journey. Aside from using AI and predictive analytics for responding to humans, it is also used for fraud detection, managing resources, and reducing risk.

Hospitality industry players such as restaurants and hotels are able to use predictive analytics to determine the number of guests on any given night, which allows them to maximize occupancy and ROI. Retailers are able to use predictive analytics to forecast inventory requirements, configure the store layout to maximize sales, and manage shipping. By analyzing past travel trends, airlines are able to more appropriately set ticket prices.

“Machine learning programs and projects have existed in organizations for a number of years now, advancing from statistics and analytics toward data science. Out of that proving ground emerged a future goal (or requirement) that companies will not only have a few ML models, but dozens, if not hundreds, operationalized and embedded into consumer experiences,” said Duffield. “This shift is leading to a new class of technology known as MLOps, which appears to be following the same maturity path as application development: Continuous Integration/Continuous Deployment, deployment automation, testing automation, etc. Leaders will build capabilities in this space and machine driven decisions will be automated, validated, tested and measured.”

How Does Conversational AI Work?
There are several components that enable conversational AI to have human-like conversations through voice or chat:
  • Automatic Speech Recognition (ASR)
  • Natural Language Understanding (NLU)
  • Dialog Management
  • Natural Language Generation (NLG)
  • Text to Speech (TTS)
Although AI enables applications to quickly make decisions based on actionable insights gathered from data, there are several steps involved in the process. The initial step occurs when the AI application receives the data from a human through either text or voice input. By using Automatic Speech Recognition (ASR), the AI application is able to understand spoken words and translate them into text.

Next, the AI app has to determine what that text means by using Natural Language Understanding (NLU). The next part of the process is when the AI app formulates a response to the text. This is accomplished using Dialog Management, which creates a response that is understandable by using Natural Language Generation (NLG). The response may be delivered as text, like an AI chatbot would do, or voice (using Text to Speech), such as Alexa would do.
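
Schematically, the pipeline can be written as a chain of functions; every function below is a hypothetical stand-in (no real ASR or NLU service is being called), and only the ordering of the stages is the point:

```python
# ASR -> NLU -> Dialog Management -> NLG -> TTS, as plain placeholder functions.
def asr(audio: bytes) -> str:
    """Automatic Speech Recognition: spoken audio -> text."""
    return "what time do you open tomorrow"

def nlu(text: str) -> dict:
    """Natural Language Understanding: text -> intent and slots."""
    return {"intent": "opening_hours", "slots": {"day": "tomorrow"}}

def dialog_manager(parsed: dict) -> dict:
    """Choose the next action from the parsed intent and stored context."""
    return {"action": "inform_hours", "hours": "9am-5pm"}

def nlg(action: dict) -> str:
    """Natural Language Generation: action -> human-readable reply."""
    return f"We open at {action['hours'].split('-')[0]} tomorrow."

def tts(text: str) -> bytes:
    """Text to Speech: reply text -> synthesized audio (placeholder)."""
    return text.encode()

reply_audio = tts(nlg(dialog_manager(nlu(asr(b"...")))))
```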

Finally, the AI app uses machine learning to accept corrections and learn from each experience, which enables it to produce better and more accurate responses in the future.

The Challenges of Conversational AI

Conversational AI applications rely on conversation data and are typically trained through the use of a Maximum Likelihood Estimation (MLE) objective and/or Reinforcement Learning (RL). Retraining is often required, even if there has only been a small change in the conversation. Data preparation and training can become an expensive endeavor. Additionally, conversation responses are based around business logic, much of which is industry specific and challenging to describe. Decoding such logic using text data alone as input is practically impossible at this point.
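
For reference, the MLE objective mentioned above usually amounts to next-token cross-entropy over recorded conversations; here is a minimal PyTorch sketch (shapes and data are illustrative only):

```python
# MLE training step for a response model: maximize the likelihood of the
# reference reply tokens, i.e. minimize cross-entropy against them.
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 100, 12, 4
logits = torch.randn(batch, seq_len, vocab_size, requires_grad=True)  # model output
targets = torch.randint(0, vocab_size, (batch, seq_len))              # reference replies

loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()           # gradients for an ordinary supervised update
print(float(loss))
```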

With voice-based conversational AI, there are many other challenges that come into play. When people are speaking to one another, voice itself is only one way that they are communicating. Fluctuations in tone, hesitation, and volume can be detected by AI and interpreted appropriately. Other non-verbal cues, such as facial expression, eye movements, and hand gestures, are impossible for AI to detect unless the medium is video. This increases the importance of voice interpretation immensely.

Other challenges include the varying degree of knowledge of each person communicating with the AI application. Children are limited in their level of knowledge and have to be spoken to in an age-appropriate manner. Adults with different levels of education or experience in a given industry also must be “spoken” to with a response that is likely to be understood. There are also differences when it comes to location, language, sentiment, etc. Feelings and sarcasm are also difficult for AI to interpret. Voice input has the added challenge of background noise and dialects to deal with. Another issue is that often, when one person is speaking, others nearby are talking simultaneously, which requires the AI application to separate overlapping voices from one another.

Additionally, complexity often becomes a pain point in the customer journey, and is a valid reason why the customer experience is less than exceptional. Tasks such as purchasing an item online take more time because there are so many options, as well as opportunities to compare other items before completing a purchase. Exceptional customer experiences can only occur when complex information is presented in a simplified, easy to use, uncomplicated manner. There is no “one size fits all” solution when it comes to conversational AI.

Final Thoughts

Conversational AI enables people to use natural language to communicate with machines. It’s being used in the call center, in chatbots for customer and employee queries, in kiosks, in automobiles, and in digital personal assistants, all of which are now able to have personalized, highly specific conversations that are all but indistinguishable from human conversations.

google.com



From: Frank Sully, 8/16/2021 5:49:31 PM

Shall we play a game? How video games transformed AI

This monthly podcast series looks at the people and stories behind game-changing ideas and innovations

economist.com



From: Frank Sully, 8/16/2021 6:48:43 PM

DeepMind Introduces PonderNet, A New AI Algorithm That Allows Artificial Neural Networks To Learn To “Think For A While” Before Answering

By Asif Razzaq

August 16, 2021



Source: arxiv.org

DeepMind introduces PonderNet, a new algorithm that allows artificial neural networks to learn to think for a while before answering. This improves their ability to generalize outside of their training distribution and to answer tough questions with more confidence than ever before.

The time required to solve a problem depends not just on the size of the input but also on its complexity. In standard neural networks, however, the amount of computation used is proportional to the size of the input rather than to the complexity of the problem. To address this issue, DeepMind, in its latest research, presents PonderNet, which builds on Adaptive Computation Time (ACT; Graves, 2016) and other adaptive networks.

PonderNet is fully differentiable and can leverage low-variance gradient estimates (unlike REINFORCE). It has unbiased gradient estimates, unlike ACT. It achieves this by reformulating the halting policy as a probabilistic model.
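
A rough sketch of that idea follows (heavily simplified from the paper; it omits PonderNet's KL regularizer on the halting distribution and its handling of the final step): each step emits a prediction and a halt probability, and the loss averages per-step losses weighted by the probability of halting at that step, so the whole thing stays differentiable.

```python
# Simplified pondering loop: weight each step's loss by the probability
# that the network halts at that step.
import torch
import torch.nn as nn

class PonderStep(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.predict = nn.Linear(dim, 1)   # per-step prediction
        self.halt = nn.Linear(dim, 1)      # per-step halting probability

    def forward(self, x, h):
        h = self.cell(x, h)
        return self.predict(h), torch.sigmoid(self.halt(h)), h

def ponder_loss(step, x, y, max_steps=10):
    h = torch.zeros(x.size(0), x.size(1))
    still_pondering = torch.ones(x.size(0))   # prob. of not having halted yet
    loss = 0.0
    for _ in range(max_steps):
        y_hat, lam, h = step(x, h)
        p_halt_now = still_pondering * lam.squeeze(-1)
        loss = loss + (p_halt_now * (y_hat.squeeze(-1) - y) ** 2).mean()
        still_pondering = still_pondering * (1 - lam.squeeze(-1))
    return loss

step = PonderStep(dim=16)
loss = ponder_loss(step, torch.randn(8, 16), torch.randn(8))
loss.backward()   # fully differentiable, as the post emphasizes
```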

PonderNet is a neural network algorithm that incentivizes exploration over pondering time to improve the accuracy of predictions. DeepMind researchers applied PonderNet to their parity task and demonstrated how it can increase computation when extrapolating beyond the data seen during training, achieving higher accuracy in complex domains such as question answering and multi-step reasoning.

Paper: arxiv.org

google.com



From: Frank Sully, 8/16/2021 9:25:33 PM

Inside Google’s DeepMind

Aug 14, 2021

8-minute video documentary




From: Frank Sully, 8/18/2021 5:23:44 PM

Waymo Is 99% of the Way to Self-Driving Cars. The Last 1% Is the Hardest

The world’s most famous autonomous car shop has lost its CEO and is still getting stymied by traffic cones. What’s taking so long?

By Gabrielle Coppola and Mark Bergen
August 17, 2021, 5:00 AM CDT
Updated on August 17, 2021, 11:36 AM CDT

Joel Johnson laughs nervously from the backseat when his self-driving taxi stops in the middle of a busy road in suburban Phoenix. The car, operated by autonomous vehicle pioneer Waymo, has encountered a row of traffic cones in a construction zone, and won’t move. “Go around, man,” Johnson says as he gestures to the drivers honking behind him.

After the vehicle has spent 14 mostly motionless minutes obstructing traffic, a Waymo technician tries to approach—but the car unexpectedly rolls forward, away from him. “It definitely seemed like a dangerous situation,” Johnson recalls.

Incidents like this one, which Johnson posted to his YouTube channel in May, are embarrassing for Waymo—a company that’s having its own problems moving forward. A unit of Alphabet Inc., Waymo hasn’t expanded its robo-taxi service beyond Phoenix after years of careful testing. The company has floated moves into other areas—trucking, logistics, personal vehicles—but the businesses are in early stages. And its production process for adding cars to its driverless fleet has been painfully slow.



The interior of a fully autonomous—and driverless—Waymo ride-hailing car.
Photographer: Hugh Mitton/Alamy
This spring, Waymo saw a mass exodus of top talent. That included its chief executive officer, chief financial officer, and the heads of trucking product, manufacturing, and automotive partnerships. People familiar with the departures say some executives felt frustrated about the sluggish pace of progress at the enterprise.

Despite years of research and billions of dollars invested, the technology behind self-driving cars still has flaws. Not long ago, a glorious future of autonomous vehicles from Waymo and its many competitors seemed close at hand. Now, “what people are realizing is that the work ahead is really hard,” says Tim Papandreou, a former employee and transit consultant.

Waymo, by most measures, is still the leader of the world’s autonomous vehicle effort. Development of its technology began at Google more than a decade ago, and the company hit a historic milestone last year when it started its completely driverless taxi program in Arizona. During the pandemic, many rivals gave up on self-driving (Uber Technologies Inc.) or sold themselves (Zoox, which was acquired by Amazon.com Inc.). Waymo kept going, raising $5.7 billion from outside investors since last summer, adding to the untold billions Alphabet has already spent.

Waymo points to its remarkable track record vs. those of its rivals. Since last fall, the company says it’s provided “tens of thousands” of rides without a driver present in Arizona. “We consider that to be a huge accomplishment,” a Waymo spokesperson said in a statement. “In fact, the absence of any other such fully autonomous commercial offering is a demonstration of how hard it is to achieve this feat.”

But the company’s remaining competitors have also started to hit milestones. Argo AI, backed by Ford Motor Co. and Volkswagen AG, will start charging for robot rides in Miami and Austin later this year—albeit with a human minder behind the wheel. Zoox and Cruise, which is funded by General Motors, Honda, and SoftBank, have begun testing autonomous vehicles without a safety driver on public roads in San Francisco. While none of these companies has yet turned a profit on self-driving tech, they’re all directing billions of dollars toward erasing Waymo’s early lead.

Waymo separated from Google’s research lab in 2016 to become the latest subsidiary of Alphabet, and went on a hiring spree, recruiting personnel to cut business deals with automakers, draft financial models, lobby state houses, and market its technology. At the time, many Waymonauts—as employees call themselves—believed the machinery was in place for fully driverless cars to hit public roads imminently.

In 2017, the year Waymo launched self-driving rides with a backup human driver in Phoenix, one person hired at the company was told its robot fleets would expand to nine cities within 18 months. Staff often discussed having solved “99% of the problem” of driverless cars. “We all assumed it was ready,” says another ex-Waymonaut. “We’d just flip a switch and turn it on.”

But it turns out that last 1% has been a killer.

Small disturbances like construction crews, bicyclists, left turns, and pedestrians remain headaches for computer drivers. Each city poses new, unique challenges, and right now, no driverless car from any company can gracefully handle rain, sleet, or snow. Until these last few details are worked out, widespread commercialization of fully autonomous vehicles is all but impossible.

“We got to the moon, and it’s like, now what?” says Mike Ramsey, a Gartner analyst in Detroit and longtime industry spectator. “We stick a flag in it, grab some rocks, but now what? We can’t do anything with this moon.”

At first, it appeared that Waymo would produce cars at a supercharged pace. In 2018, Waymo signed up to turn as many as 20,000 Jaguar SUVs into Waymo autonomous vehicles. Months later, it said it would expand its fleet of Chrysler Pacifica minivans to more than 60,000. Waymo planned to buy the cars and install what it called its “Driver”—a suite of cameras, sensors, and proprietary computer gear.

"There’s not a lot in assembly,” then-CEO John Krafcik, a former auto executive, declared at an event that year.

In reality, skilled disassembly is required. Engineers must take apart the cars and put them back together by hand. One misplaced wire can leave engineers puzzling for days over where the problem is, according to a person familiar with the operations who describes the system as cumbersome and prone to quality problems. Like others who spoke candidly about the company, the former employee asked not to be identified for fear of retaliation.

The painstaking nature of the process has left Waymo without a viable path to mass production, the person says. Waymo has slashed parts orders on the Chrysler minivan project and has had far fewer Jaguars delivered than initially expected, according to people familiar with the automakers’ plans.

The Waymo spokesperson says the company is not supply-constrained in Detroit, and that it’s on track to hit all its internal production targets with Jaguar, but declines to share details. The company also disputes that it’s fallen behind schedule on constructing its Chrysler vehicles, noting that these agreements are “fluid and subject to change.”

Waymo’s competitors in Detroit already have vast manufacturing capabilities. Argo and Cruise, for example, plan to build their driverless cars from the ground up. Insiders generally believe that Waymo is the leader on technology, but manufacturing capacity could give Detroiters the advantage when it comes to rolling out fleets, according to Ramsey, the Gartner analyst. “I don’t know what their current number is,” he says of Waymo’s production. “But it hasn’t moved much.”

In 2019, Waymo rented a warehouse in Detroit to be, as Krafcik said at the time, “the world’s first dedicated autonomous plant.” Michigan officials agreed to give the company an $8 million grant partly in exchange for creating at least 100 jobs in the state. As of last fall, Waymo had hired 22 people to work at the facility, according to state filings. The company says it’s exceeded the 100-person job-creation pledge in the state, and would not comment on the headcount of specific offices. Earlier this year, Waymo was trying to produce 5 to 10 vehicles a day at the factory, says one former employee. The company disputes this claim.

After years of publicly touting the wonders of self-driving, Waymo personnel started talking in recent years about managing people’s expectations of what their cars could do, and when. Several people who worked at Waymo describe parent company Alphabet as extremely cautious, particularly after an Uber self-driving test vehicle struck and killed a pedestrian in Arizona in 2018.

For example, Waymo’s ad hoc board shot down a splashy marketing pitch from Krafcik, according to three people familiar with the decision. In 2018, he wanted to stage a multicity demonstration of the company’s technology, with pop-up marketing installations showcasing what Waymo could do. Tesla Inc. had carried out something similar with its early models. But the company’s board—which consisted of Google’s founders Larry Page and Sergey Brin, along with Alphabet honchos and a few outside investors—worried about repeating the failures of Google Glass, the flopped augmented-reality spectacles, by introducing a product before it was ready. A Waymo spokesperson said that the company simply went in a different direction.

Krafcik left the company in April. The new co-CEOs are Tekedra Mawakana, formerly Waymo’s chief operating officer, and Dmitri Dolgov, who had been its chief technology officer. The pair met with backers and partners this spring as Waymo closed its financing round. According to one investor, the new chiefs were upbeat in a recent meeting, saying that with the pandemic fading the company was gearing up to make “huge headway” on its goals.

Meanwhile, in Phoenix, even after his traffic cone incident, YouTuber Joel Johnson was still enthusiastic about the technology. “It seems to handle pretty much everything that I try and throw at it,” he says. In other words, it works 99% of the time.

Story Link
