
Technology Stocks : Alphabet Inc. (Google)


From: Frank Sully, 9/1/2021 5:38:39 PM
1 Recommendation   of 15580
 
Google developing own CPUs for Chromebook laptops

US software giant ramps up hiring blitz for semiconductor and hardware ambitions



Google is joining the ranks of global tech companies racing to develop in-house chips. (Source photo by AP)

CHENG TING-FANG, Nikkei staff writer
September 1, 2021 12:00 JST

TAIPEI -- Google is developing its own central processors for its notebook and tablet computers, the latest sign that major tech players see in-house chip development as key to their competitiveness.

The U.S. internet giant plans to roll out the CPUs for laptops and tablets, which run on the company's Chrome operating system, in around 2023, three sources with knowledge of the matter told Nikkei Asia.

Google is also ramping up its efforts to build mobile processors for its Pixel smartphones and other devices after announcing it will use in-house processor chips for the first time in its upcoming Pixel 6 series, they said.

Google's growing focus on developing its own chips comes as global rivals pursue a similar strategy to differentiate their offerings. Amazon, Facebook, Microsoft, Tesla, Baidu and Alibaba Group Holding are all racing to build their own semiconductors to power their cloud services and electronic products.

Google was particularly inspired by Apple's success in developing its own key semiconductor components for iPhones as well as last year's announcement that it would replace Intel CPUs with its own offerings for Mac computers and laptops, two people familiar with Google's thinking told Nikkei Asia.



The new CPUs and the mobile processors that Google is developing are based on the chip blueprints of Arm, the SoftBank-controlled U.K. chip company whose intellectual property is used in more than 90% of the world's mobile devices.

Separately, the company has high hopes for the Pixel 6 range and has asked suppliers to prepare 50% more production capacity for the handsets compared with the pre-pandemic level in 2019, two people told Nikkei Asia. Google shipped more than 7 million Pixel phones in 2019, its highest figure ever, but shipped just 3.7 million phones the following year as COVID ravaged the world, according to research company IDC.

Google told several suppliers in recent meetings that it sees potential for massive growth opportunities in the global market because it is the only U.S. smartphone maker building handsets using the Android operating system.

Regarding chip development, experts say Google's strategy is a logical move but not without challenges.

"We found that all the tech titans are joining the foray to building their custom chips because in that way they could program their own features into those chips that could meet its specific needs," Eric Tseng, chief analyst with Isaiah Research, told Nikkei Asia. "In that case, these tech companies could easily adjust R&D workloads without being restricted by their suppliers and offer unique services or technologies. In an ideal scenario, using one's own chips also means better software and hardware integration."

However, building chips requires massive investment and long-term commitments, and all these new tech players building their own chips also need to fight for production capacity with existing top chip developers such as Intel, Nvidia and Qualcomm, Tseng said.

Peter Hanbury, a partner at consulting firm Bain & Co., told Nikkei Asia that the cost of designing a cutting-edge 5-nm chip is now around $500 million, compared to about $50 million to develop a chip using more mature production technologies, such as 28-nm tech. "Very few players have the skills or financial resources to design their own chips, so the typical players considering this path tend to be extremely large players, like the cloud service providers, or have very valuable applications for these specially designed chips."

Google started to build its own silicon -- dubbed tensor processing units (TPUs) -- to facilitate its workloads for artificial intelligence computing for its data center cloud servers in 2016. It unveiled the fourth generation of TPUs this May. It is hiring chip engineers around the world, including in Israel, India and Taiwan -- all key tech economies -- and at home in the U.S., according to supply chain executives, employees and the company's job postings. Google has already hired chip talent from its key suppliers including Intel, Qualcomm and Mediatek, according to sources and a Nikkei Asia analysis of LinkedIn profiles.

Google is one of the world's most important developers of operating systems. Most of the world's top smartphone makers, including Samsung, Xiaomi, Oppo and Vivo, use the Android OS for their handsets. Google has also licensed its Chrome OS to HP, Dell, Acer, AsusTek, Lenovo and Samsung to build Chromebooks, lightweight laptops mainly targeted toward the education market.

Google introduced Pixelbook and Pixel Slate, its own notebooks and tablets running Chrome OS, in 2017 and 2018, respectively, but annual shipments were less than half a million units, according to IDC data.

Global shipments of Chromebooks, meanwhile, nearly doubled last year thanks to the boom in remote learning spurred by the pandemic. Shipments continued to grow for the first half of 2021, though momentum has slowed sharply since July.

Google declined to comment beyond confirming its earlier announcement that it will use the Tensor mobile processors for its upcoming Pixel 6 handsets.

asia.nikkei.com



From: Frank Sully, 9/2/2021 3:24:41 PM
1 Recommendation   of 15580
 
Global AI And AI Chips Markets To Grow At CAGR Near 40%

See Message 33470268



From: Frank Sully, 9/2/2021 5:14:38 PM
   of 15580
 
In the future I will post only Google-related AI and AI-chip related material here. For info on other companies refer to the AI, Robotics and Automation board, linked below:

Subject 59856

Cheers,
Frank



From: Frank Sully, 9/2/2021 6:35:02 PM
   of 15580
 
Discovering Anomalous Data with Self-Supervised Learning

Thursday, September 2, 2021

Posted by Kihyuk Sohn and Chun-Liang Li, Research Scientists, Google Cloud

Anomaly detection (sometimes called outlier detection or out-of-distribution detection) is one of the most common machine learning applications across many domains, from defect detection in manufacturing to fraudulent transaction detection in finance. It is most often used when it is easy to collect a large amount of known-normal examples but where anomalous data is rare and difficult to find. As such, one-class classification, such as one-class support vector machine (OC-SVM) or support vector data description (SVDD), is particularly relevant to anomaly detection because it assumes the training data are all normal examples, and aims to identify whether an example belongs to the same distribution as the training data. Unfortunately, these classical algorithms do not benefit from the representation learning that makes machine learning so powerful. On the other hand, substantial progress has been made in learning visual representations from unlabeled data via self-supervised learning, including rotation prediction and contrastive learning. As such, combining one-class classifiers with these recent successes in deep representation learning is an under-explored opportunity for the detection of anomalous data.

In “Learning and Evaluating Representations for Deep One-class Classification”, presented at ICLR 2021, we outline a 2-stage framework that makes use of recent progress on self-supervised representation learning and classic one-class algorithms. The algorithm is simple to train and results in state-of-the-art performance on various benchmarks, including CIFAR, f-MNIST, Cat vs Dog and CelebA. We then follow up on this in “CutPaste: Self-Supervised Learning for Anomaly Detection and Localization”, presented at CVPR 2021, in which we propose a new representation learning algorithm under the same framework for a realistic industrial defect detection problem. The framework achieves a new state-of-the-art on the MVTec benchmark.

A Two-Stage Framework for Deep One-Class Classification
While end-to-end learning has demonstrated success in many machine learning problems, including deep learning algorithm designs, such an approach for deep one-class classifiers often suffers from degeneration, in which the model outputs the same results regardless of the input.

To combat this, we apply a two-stage framework. In the first stage, the model learns deep representations with self-supervision. In the second stage, we adopt one-class classification algorithms, such as OC-SVM or a kernel density estimator, using the learned representations from the first stage. This 2-stage algorithm is not only robust to degeneration, but also enables one to build more accurate one-class classifiers. Furthermore, the framework is not limited to specific representation learning and one-class classification algorithms — that is, one can easily plug in different algorithms, which is useful as more advanced approaches are developed.

A deep neural network is trained to generate the representations of input images via self-supervision. We then train one-class classifiers on the learned representations.
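The two-stage recipe is simple enough to sketch. The snippet below is a minimal illustration, not code from the paper: a random linear projection stands in for the frozen stage-1 encoder (which in the paper would be a deep network trained with rotation prediction or distribution-augmented contrastive learning), and scikit-learn's OneClassSVM plays the stage-2 role.

```python
# Minimal sketch of the two-stage framework described above (not Google's code).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
PROJECTION = rng.standard_normal((3072, 128))  # stand-in "encoder" weights

def encode(images: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: map flattened images (N, 3072) to representations (N, 128)."""
    return images @ PROJECTION

# Toy data: a known-normal training set and a test set of flattened images.
train_normal = rng.standard_normal((500, 3072))
test_images = rng.standard_normal((100, 3072))

# Stage 2: fit a classic one-class classifier (OC-SVM) on the representations.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(encode(train_normal))

# Negating score_samples gives an anomaly score: higher means "more anomalous".
anomaly_scores = -clf.score_samples(encode(test_images))
```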
Semantic Anomaly Detection
We test the efficacy of our 2-stage framework for anomaly detection by experimenting with two representative self-supervised representation learning algorithms, rotation prediction and contrastive learning.

Rotation prediction refers to a model’s ability to predict the rotated angles of an input image. Due to its promising performance in other computer vision applications, the end-to-end trained rotation prediction network has been widely adopted for one-class classification research. The existing approach typically reuses the built-in rotation prediction classifier for learning representations to conduct anomaly detection, which is suboptimal because those built-in classifiers are not trained for one-class classification.
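As a hedged illustration of the rotation-prediction pretext task, the sketch below shows one common way the inputs and four-way labels can be constructed; the helper name and array shapes are assumptions, not the paper's pipeline.

```python
# Sketch of how inputs and labels for rotation prediction can be built.
import numpy as np

def make_rotation_batch(images: np.ndarray):
    """images: (N, H, W, C) with H == W. Returns 4N rotated images and labels
    0..3, where label k means a rotation of k * 90 degrees."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(np.rot90(images, k=k, axes=(1, 2)))
        labels.append(np.full(len(images), k))
    return np.concatenate(rotated), np.concatenate(labels)

# A network trained to predict these labels learns representations that the
# second stage reuses; only the representations (not the built-in 4-way
# classifier head) are kept for one-class classification.
```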

In contrastive learning, a model learns to pull together representations from transformed versions of the same image, while pushing representations of different images away. During training, as images are drawn from the dataset, each is transformed twice with simple augmentations (e.g., random cropping or color changing). We minimize the distance of the representations from the same image to encourage consistency and maximize the distance between different images. However, usual contrastive learning converges to a solution where all the representations of normal examples are uniformly spread out on a sphere. This is problematic because most of the one-class algorithms determine the outliers by checking the proximity of a tested example to the normal training examples, but when all the normal examples are uniformly distributed in an entire space, outliers will always appear close to some normal examples.

To resolve this, we propose distribution augmentation (DA) for one-class contrastive learning. The idea is that instead of learning representations from the training data only, the model learns from the union of the training data plus augmented training examples, where the augmented examples are considered to be different from the original training data. We employ geometric transformations, such as rotation or horizontal flip, for distribution augmentation. With DA, the training data is no longer uniformly distributed in the representation space because some areas are occupied by the augmented data.

Left: Illustrated examples of perfect uniformity from the standard contrastive learning. Right: The reduced uniformity by the proposed distribution augmentation (DA), where the augmented data occupy the space to avoid the uniform distribution of the inlier examples (blue) throughout the whole sphere.
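The following sketch illustrates one way distribution augmentation could be implemented; the transform set and the helper name are illustrative assumptions rather than the authors' exact augmentation pipeline.

```python
# Sketch of distribution augmentation (DA) for one-class contrastive learning:
# geometric transforms of the training set are added as separate distributions.
import numpy as np

def distribution_augment(images: np.ndarray):
    """images: (N, H, W, C) with H == W. Returns the union of the original data
    and its rotated/flipped copies, plus an id for which distribution each
    example came from (0 = original training data)."""
    variants = [
        images,                              # 0: original training data
        np.rot90(images, k=1, axes=(1, 2)),  # 1: rotated 90 degrees
        np.rot90(images, k=2, axes=(1, 2)),  # 2: rotated 180 degrees
        images[:, :, ::-1, :],               # 3: horizontal flip
    ]
    dist_ids = [np.full(len(images), i) for i in range(len(variants))]
    return np.concatenate(variants), np.concatenate(dist_ids)

# The contrastive loss then treats augmented copies as different from the
# originals, so inlier representations no longer fill the sphere uniformly.
```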
We evaluate the performance of one-class classification in terms of the area under receiver operating characteristic curve (AUC) on the commonly used datasets in computer vision, including CIFAR10 and CIFAR-100, Fashion MNIST, and Cat vs Dog. Images from one class are given as inliers and those from remaining classes are given as outliers. For example, we see how well cat images are detected as anomalies when dog images are inliers.
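For concreteness, here is a minimal sketch of this one-vs-rest AUC protocol using scikit-learn's roc_auc_score; the labels and scores below are synthetic and purely illustrative.

```python
# Sketch of the one-vs-rest AUC evaluation protocol described above (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# is_outlier: 1 for test images from the held-out classes, 0 for the inlier class.
is_outlier = rng.integers(0, 2, size=1000)
# anomaly_scores: higher means "more anomalous"; here just noisy toy values.
anomaly_scores = is_outlier + 0.5 * rng.standard_normal(1000)

auc = 100 * roc_auc_score(is_outlier, anomaly_scores)  # reported on a 0-100 scale
print(f"image-level AUC: {auc:.1f}")
```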

Method                                   CIFAR-10    CIFAR-100   f-MNIST     Cat vs. Dog
Ruff et al. (2018)                       64.8        -           -           -
Golan and El-Yaniv (2018)                86.0        78.7        93.5        88.8
Bergman and Hoshen (2020)                88.2        -           94.1        -
Hendrycks et al. (2019)                  90.1        -           -           -
Huang et al. (2019)                      86.6        78.8        93.9        -
2-stage framework: rotation prediction   91.3±0.3    84.1±0.6    95.8±0.3    86.4±0.6
2-stage framework: contrastive (DA)      92.5±0.6    86.5±0.7    94.8±0.3    89.6±0.5
Performance comparison of one-class classification methods. Values are the mean AUCs and their standard deviation over 5 runs. AUC ranges from 0 to 100, where 100 is perfect detection.
Given the suboptimal built-in rotation prediction classifiers typically used for rotation prediction approaches, it’s notable that simply replacing the built-in rotation classifier used in the first stage for learning representations with a one-class classifier at the second stage of the proposed framework significantly boosts the performance, from 86 to 91.3 AUC. More generally, the 2-stage framework achieves state-of-the-art performance on all of the above benchmarks.

With classic OC-SVM, which learns the area boundary of representations of normal examples, the 2-stage framework results in higher performance than existing works as measured by image-level AUC.

Texture Anomaly Detection for Industrial Defect Detection
In many real-world applications of anomaly detection, the anomaly is often defined by localized defects instead of entirely different semantics (i.e., being different in general). For example, the detection of texture anomalies is useful for detecting various kinds of industrial defects.

The examples of semantic anomaly detection and defect detection. In semantic anomaly detection, the inlier and outlier are different in general, (e.g., one is a dog, the other a cat). In defect detection, the semantics for inlier and outlier are the same (e.g., they are both tiles), but the outlier has a local anomaly.
While learning representations with rotation prediction and distribution-augmented contrastive learning have demonstrated state-of-the-art performance on semantic anomaly detection, those algorithms do not perform well on texture anomaly detection. Instead, we explored different representation learning algorithms that better fit the application.

In our second paper, we propose a new self-supervised learning algorithm for texture anomaly detection. The overall anomaly detection follows the 2-stage framework, but the first stage, in which the model learns deep image representations, is specifically trained to predict whether the image is augmented via a simple CutPaste data augmentation. The idea of CutPaste augmentation is simple — a given image is augmented by randomly cutting a local patch and pasting it back to a different location of the same image. Learning to distinguish normal examples from CutPaste-augmented examples encourages representations to be sensitive to local irregularity of an image.

The illustration of learning representations by predicting CutPaste augmentations. Given an example, the CutPaste augmentation crops a local patch, then pastes it to a randomly selected area of the same image. We then train a binary classifier to distinguish the original image from the CutPaste-augmented image.
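A minimal sketch of the CutPaste augmentation follows; the patch-size fraction and array shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the CutPaste augmentation described above.
import numpy as np

def cutpaste(image: np.ndarray, rng: np.random.Generator,
             patch_frac: float = 0.15) -> np.ndarray:
    """image: (H, W, C). Cut a random rectangular patch and paste it back at a
    different random location of the same image."""
    h, w = image.shape[:2]
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    sy, sx = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)  # source
    dy, dx = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)  # destination
    out = image.copy()
    out[dy:dy + ph, dx:dx + pw] = image[sy:sy + ph, sx:sx + pw]
    return out

# Stage 1 then trains a binary classifier to tell original images from
# cutpaste(image) outputs, making the representations sensitive to local
# irregularities.
```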
We use MVTec, a real-world defect detection dataset with 15 object categories, to evaluate the approach above.

Method                              Image-level AUC
DOCC (Ruff et al., 2020)            87.9
U-Student (Bergmann et al., 2020)   92.5
Rotation Prediction                 86.3
Contrastive (DA)                    86.5
CutPaste                            95.2
Image-level anomaly detection performance (in AUC) on the MVTec benchmark.
Besides image-level anomaly detection, we use the CutPaste method to locate where the anomaly is, i.e., “patch-level” anomaly detection. We aggregate the patch anomaly scores via upsampling with Gaussian smoothing and visualize them in heatmaps that show where the anomaly is. Interestingly, this provides decently improved localization of anomalies. The below table shows the pixel-level AUC for localization evaluation.
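A rough sketch of that aggregation step, assuming a regular grid of patch scores and using SciPy's zoom and gaussian_filter (our reading of the description, not the paper's code):

```python
# Sketch of turning patch-level anomaly scores into a pixel-level heatmap via
# upsampling with Gaussian smoothing.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def patch_scores_to_heatmap(patch_scores: np.ndarray, image_hw=(256, 256),
                            sigma: float = 4.0) -> np.ndarray:
    """patch_scores: (gh, gw) grid with one anomaly score per image patch.
    Returns a smoothed heatmap at full image resolution."""
    gh, gw = patch_scores.shape
    upsampled = zoom(patch_scores, (image_hw[0] / gh, image_hw[1] / gw), order=1)
    return gaussian_filter(upsampled, sigma=sigma)
```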

Method                                Pixel-level AUC
Autoencoder (Bergmann et al., 2019)   86.0
FCDD (Ruff et al., 2020)              92.0
Rotation Prediction                   93.0
Contrastive (DA)                      90.4
CutPaste                              96.0
Pixel-level anomaly localization performance (in AUC) comparison between different algorithms on the MVTec benchmark.
Conclusion
In this work we introduce a novel 2-stage deep one-class classification framework and emphasize the importance of decoupling building classifiers from learning representations so that the classifier can be consistent with the target task, one-class classification. Moreover, this approach permits applications of various self-supervised representation learning methods, attaining state-of-the-art performance on various applications of visual one-class classification from semantic anomaly to texture defect detection. We are extending our efforts to build more realistic anomaly detection methods under the scenario where training data is truly unlabeled.

Acknowledgements
We gratefully acknowledge the contribution from other co-authors, including Jinsung Yoon, Minho Jin and Tomas Pfister. We release the code in our GitHub repository.

ai.googleblog.com



From: Glenn Petersen, 9/10/2021 11:53:40 AM
2 Recommendations   of 15580
 
AAPL related, but ultimately applicable to GOOGL:

Apple can no longer force developers to use in-app purchasing, judge rules in Epic Games case

PUBLISHED FRI, SEP 10 2021 11:36 AM EDT
Kif Leswing @KIFLESWING
CNBC.com

KEY POINTS

-- Judge Yvonne Gonzalez Rogers handed down a decision in a closely-watched trial between Apple and Epic Games on Friday.

-- Rogers issued an injunction that said that Apple will no longer be allowed to prohibit developers from providing links or other communications that direct users away from Apple in-app purchasing.

-- Apple won on 9 of 10 counts but will be forced to change its App Store policies and loosen its grip over in-app purchases.



Tim Cook, chief executive officer of Apple Inc., center, arrives at U.S. district court in Oakland, California, on Friday, May 21, 2021./ Nina Riggio | Bloomberg | Getty Images
---------------------------

Judge Yvonne Gonzalez Rogers handed down a decision in a closely-watched trial between Apple and Epic Games on Friday.

Rogers ordered an injunction that said that Apple will no longer be allowed to prohibit developers from providing links or other communications that direct users away from Apple in-app purchasing, of which it takes 15% to 30%. The injunction addresses a longstanding developer complaint.

The decision concludes the first part of the battle between the two companies over Apple’s App Store policies and whether they stifle competition. Apple won on 9 of 10 counts but will be forced to change its App Store policies and loosen its grip over in-app purchases.

The trial took place in Oakland, California in May, and included both company CEOs testifying in open court. People familiar with the trial previously told CNBC that both sides expected the decision to be appealed regardless of what it was.

Since the trial ended but before the decision was handed down, Apple has made several changes to mollify critics, some as part of settlements with other app developers, including relaxing some rules about emailing customers to encourage them to make off-app purchases and allowing some links in apps.

Epic Games is among the most prominent companies to challenge Apple’s control of its iPhone App Store, which has strict rules about what is allowed and what is not, and requires many software developers to use its in-app payment system, which takes between 15% and 30% of each transaction.

Epic’s most popular game is Fortnite, which makes money when players buy V-bucks, or the in-game currency to buy costumes and other cosmetic changes.

Epic wasn’t seeking money from Apple. Instead, it wanted to be allowed to install its own app store on iPhones, which would let it bypass Apple’s cut and impose its own fees on the games it distributed. Epic Games CEO Tim Sweeney had chafed against Apple’s in-app purchase rules as early as 2015, according to court filings and exhibits.



Apple CEO Tim Cook is cross examined by Gary Bornstein as he testifies on the stand during a weeks-long antitrust trial at federal court in Oakland, California, U.S. May 21, 2021 in this courtroom sketch. / Vicki Behringer | Reuters
--------------------------------------

But the public clash between the two companies started in earnest in August 2020, when Epic implemented a plan to challenge Apple called “Project Liberty,” according to court filings.

Epic Games updated Fortnite on its servers to reduce the price of its in-game currency by 20% if players bought directly from the company, bypassing Apple’s take, and violating Apple’s rules on steering users away from its in-app payments.

Apple removed Fortnite from the App Store, meaning that new users could not download it and that it would eventually stop working on iPhones because the app could not be updated. As it planned, Epic then filed a lawsuit that culminated in May’s trial.

At the trial, Apple CEO Tim Cook testified on one of the last days and faced pointed questioning from Judge Rogers over Apple’s restrictions on steering users to make purchases off-app.

“It doesn’t seem to me that you feel any pressure or competition to actually change the manner in which you act to address the concerns of developers,” Rogers said.

Epic Games also sued Google over its control of the Play Store for Android phones. That case has not yet gone to trial.

Epic Games v. Apple: Judge reaches decision (cnbc.com)



From: Frank Sully, 9/20/2021 9:13:10 PM
1 Recommendation   of 15580
 
DeepMind tells Google it has no idea how to make AI less toxic

To be fair, neither does any other lab



Opening the black box. Reducing the massive power consumption it takes to train deep learning models. Unlocking the secret to sentience. These are among the loftiest outstanding problems in artificial intelligence. Whoever has the talent and budget to solve them will be handsomely rewarded with gobs and gobs of money.

But there’s an even greater challenge stymieing the machine learning community, and it’s starting to make the world’s smartest developers look a bit silly. We can’t get the machines to stop being racist, xenophobic, bigoted, and misogynistic.

Nearly every big tech outfit and several billion-dollar non-profits are heavily invested in solving AI‘s toxicity problem. And, according to the latest study on the subject, we’re not really getting anywhere.

The prob: Text generators, such as OpenAI’s GPT-3, are toxic. Currently, OpenAI has to limit usage when it comes to GPT-3 because, without myriad filters in place, it’s almost certain to generate offensive text.

In essence, numerous researchers have learned that text generators trained on unmitigated datasets (such as those containing conversations from Reddit) tend towards bigotry.

It’s pretty easy to reckon why: because a massive percentage of human discourse on the internet is biased with bigotry towards minority groups.

Background: It didn’t seem like toxicity was going to be an insurmountable problem back when deep learning exploded in 2014.

We all remember that time Google’s AI mistook a turtle for a gun, right? That’s very unlikely to happen now. Computer vision has gotten much better in the interim.

But progress has been less forthcoming in the field of NLP (natural language processing).

Simply put, the only way to stop a system such as GPT-3 from spewing out toxic language is to block it from doing so. But this solution has its own problems.

What’s new: DeepMind, the creators of AlphaGo and a Google sister company under the Alphabet umbrella, recently conducted a study of state-of-the-art toxicity interventions for NLP agents.

The results were discouraging.

Per a preprint paper from the DeepMind research team:
  • We demonstrate that while basic intervention strategies can effectively optimize previously established automatic metrics on the REALTOXICITYPROMPTS dataset, this comes at the cost of reduced LM (language model) coverage for both texts about, and dialects of, marginalized groups.
  • Additionally, we find that human raters often disagree with high automatic toxicity scores after strong toxicity reduction interventions — highlighting further the nuances involved in careful evaluation of LM toxicity.
The researchers ran the intervention paradigms through their paces and compared their efficacy with that of human evaluators.

A group of paid study participants evaluated text generated by state-of-the-art text generators and rated its output for toxicity. When the researchers compared the human’s assessment to the machine’s, they found a large discrepancy.

AI may have a superhuman ability to generate toxic language but, like most bigots, it has no clue what the heck it’s talking about. Intervention techniques failed to accurately identify toxic output with the same accuracy as humans.

Quick take: This is a big deal. Text generators are poised to become ubiquitous in the business world. But if we can’t make them non-offensive, they can’t be deployed.

Right now, a text-generator that can’t tell the difference between a phrase such as “gay people exist” and “gay people shouldn’t exist,” isn’t very useful. Especially when the current solution to keeping it from generating text like the latter is to block it from using any language related to the LGBTQ+ community.

Blocking references to minorities as a method to solve toxic language is the NLP equivalent of a sign that says “for use by straight whites only.”

The scary part is that DeepMind, one of the world’s most talented AI labs, conducted this study and then forwarded the results to Jigsaw. That’s Google’s crack problem-solving team. It’s been unsuccessfully trying to solve this problem since 2016.

The near-future doesn’t look bright for NLP.

You can read the whole paper here.

Published September 20, 2021 - 9:58 pm UTC

google.com







From: Frank Sully, 9/22/2021 11:01:28 AM
   of 15580
 
NVIDIA CEO Jensen Huang Special Address | NVIDIA Cambridge-1 Inauguration




From: Glenn Petersen, 9/26/2021 11:38:35 AM
   of 15580
 
Google is slashing the amount it keeps from sales on its cloud marketplace as pressure mounts on app stores

PUBLISHED SUN, SEP 26 2021 8:35 AM EDT
Jordan Novet @JORDANNOVET
CNBC.com

KEY POINTS

-- Google is matching Microsoft’s revenue share terms for purchases of third-party software through cloud marketplaces.

-- The change could attract more cloud business, which could help Google further reduce its dependence on advertising.



Thomas Kurian, chief executive officer of cloud services at Google LLC, right, speaks as Alpna Doshi, group chief information officer of Philips, listens during the Google Cloud Next ’19 event in San Francisco, California, U.S., on Tuesday, April 9, 2019. The conference brings together industry experts to discuss the future of cloud computing. / Michael Short | Bloomberg | Getty Images
-------------------------

Google is reducing the amount of revenue it keeps when customers buy software from other vendors on its cloud marketplace, as the top tech companies face increasing pressure to lower their so-called take rates.

The Google Cloud Platform is cutting its percentage revenue share to 3% from 20%, according to a person familiar with the matter who asked not to be named in order to talk about internal policies.


It’s the cloud group’s latest effort to become more competitive since Thomas Kurian joined as CEO in 2019 after a career at Oracle. Google, which trails Amazon Web Services and Microsoft Azure in cloud infrastructure, is trying to attract independent software makers to sell their products on Google’s cloud.

“Our goal is to provide partners with the best platform and most competitive incentives in the industry,” a Google spokesperson told CNBC in an email. “We can confirm that a change to our Marketplace fee structure is in the works, and we’ll have more to share on this soon.”

Big Tech companies in recent months have been decreasing the amount of money they retain on their platforms, whether it’s for consumer apps or business products. Some of the pressure is related to competition, while regulatory and legal concerns are also mounting.

In July Google decreased the percentage it keeps from purchases through its Play Store, where consumers buy apps, to 15% from 30% for the first $1 million in revenue a developer earns each year.

Also this year, Apple provided the same reduction for app developers with under $1 million in annual sales. As part of a lawsuit filed by Epic Games, a judge in California ruled this month that Apple will no longer be allowed to prohibit developers from providing links or other communications that direct users away from Apple in-app purchasing.

Meanwhile, in August, Microsoft lowered the percentage of sales it keeps from game purchases from its Windows app store to 12% from 30%.

On Google’s cloud marketplace, customers can find products from prominent software companies, including Confluent, Elastic, MongoDB and Twilio. But it lacks products from companies such as Accenture, Equifax, FactSet, Freshworks, Hewlett Packard Enterprise and Xilinx, which all have listings in the AWS marketplace.

AWS, the market leader, charges a listing fee of about 5%, according to an estimate earlier this year from analysts at UBS. The AWS marketplace generates about $1 billion to $2 billion in annual revenue, they said. Amazon declined to comment.

Microsoft said in July that it had cut its rate from 20% to 3%.

“Our fees are only intended to offset our operational costs of invoicing and billing customers, and operating the marketplace,” Charlotte Yarkoni, chief operating officer for cloud and artificial intelligence at Microsoft, said in a statement. “We are not trying to take a share of our partners’ revenue. Our ecosystem is a channel for us to help partners sell their solutions, not the other way around, unlike other cloud vendors.”

Google has yet to turn its cloud platform into a profit engine for parent company Alphabet. In the second quarter, Google reported a $591 million operating loss from its cloud segment on $4.6 billion in revenue. Alphabet still counts on advertising for about 82% of revenue and substantially all of its profit.

Google lowers its cloud marketplace revenue share to 3% from 20% (cnbc.com)



From: Frank Sully, 9/29/2021 1:39:25 PM
   of 15580
 
...scientists at Google DeepMind have developed an artificial intelligence-based forecasting system which they claim can more accurately predict the likelihood of rain within the next two hours than existing systems.

Thanks to Glenn Petersen

Do I need a brolly? Google uses AI to try to improve two-hour rain forecasts

‘Precipitation nowcasting’ is attempt to predict weather more accurately in short term

Linda Geddes Science correspondent
The Guardian
Wed 29 Sep 2021 11.00 EDT



Google DeepMind scientists claim their system can more accurately predict the likelihood of rain within the next two hours. Photograph: Phil Westlake/News Images/REX/Shutterstock
------------------------------

Weather forecasts are notoriously bad at predicting the chances of impending rain – as anyone who has been drenched after leaving the house without an umbrella can testify.

Now, scientists at Google DeepMind have developed an artificial intelligence-based forecasting system which they claim can more accurately predict the likelihood of rain within the next two hours than existing systems.

Today’s weather forecasts are largely driven by powerful numerical weather prediction (NWP) systems, which use equations that describe the movement of fluids in the atmosphere to predict the likelihood of rain and other types of weather.

“These models are really amazing from six hours up to about two weeks in terms of weather prediction, but there is an area – especially around zero to two hours – in which the models perform particularly poorly,” said Suman Ravuri, a staff research scientist at DeepMind in London and co-lead of the project.

“Precipitation nowcasting” is an attempt to fill this blind spot. Dr Peter Dueben, coordinator of machine learning and AI activities at the European Centre for Medium-Range Weather Forecasts, who was not involved in the research, said: “In nowcasting, what we try to do is to take observations from now, and try to make predictions of how the weather is going to look in a couple of minutes to a couple of hours. Machine learning can help you to build a tool that is extremely fast.”

DeepMind was not the only group that was attempting to develop such tools, but it was currently leading the field, he added. Its technology draws on high-resolution radar data, which can track the amount of moisture in the air by repeatedly firing a beam into the lower atmosphere and measuring the relative speed of the signal, which is slowed by water vapour.

Drawing on conversations with Met Office meteorologists about the types of weather prediction tools that would be most useful, Ravuri and his colleagues used a machine learning approach called generative modelling to develop a tool that could make probabilistic predictions of medium to heavy rainfall for the next 90 minutes, based on the past 20 minutes of high-resolution radar data.
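For readers curious about the shape of the task, the sketch below encodes the described inputs and outputs (roughly 20 minutes of radar in, 90 minutes of probabilistic rainfall out); the 5-minute frame interval and the trivial persistence baseline are assumptions for illustration, not DeepMind's model or data pipeline.

```python
# Hedged sketch of the nowcasting task shape described in the article.
import numpy as np

PAST_MIN, FUTURE_MIN, STEP_MIN = 20, 90, 5
N_PAST, N_FUTURE = PAST_MIN // STEP_MIN, FUTURE_MIN // STEP_MIN  # 4 frames in, 18 out

def nowcast_samples(past_frames: np.ndarray, n_samples: int = 8) -> np.ndarray:
    """past_frames: (N_PAST, H, W) radar fields. A generative nowcasting model
    would return n_samples plausible futures of shape (n_samples, N_FUTURE, H, W);
    here the last observed frame is simply persisted as a trivial baseline."""
    last = past_frames[-1]
    return np.broadcast_to(last, (n_samples, N_FUTURE) + last.shape).copy()
```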

As well as affecting individuals, heavy rain can disrupt transport and energy supply networks and agriculture.

DeepMind’s tool was evaluated alongside two existing rain prediction tools by more than 50 Met Office meteorologists, who ranked it first for accuracy and usefulness in 88% of cases. The results are published in Nature.

The DeepMind senior staff scientist Shakir Mohamed said: “AI has the potential to aid us in answering some of the most complex scientific questions in environmental science, such as climate change.

“This trial shows that AI could be a powerful tool right now by enabling forecasters to spend less time trawling through ever growing piles of prediction data and instead better understand the implications of their forecasts.”

Niall Robinson, the head of partnerships and product innovation at the Met Office, said: “Extreme weather has catastrophic consequences including loss of life and, as the effects of climate change suggest, these types of events are set to become more common. As such, better short-term weather forecasts can help people stay safe and thrive. This research demonstrates the potential AI may offer as a powerful tool for improving our short-term forecasts and our understanding of how our weather patterns are evolving.”

Dueben added that it was encouraging to see a big tech company such as Google working with expert meteorologists to develop new forecasting tools: “You can build the perfect tool, but if it is not going to be used by the forecasters it is pointless.

“I think this combination of the collaboration between Google and the Met Office, the involvement of the forecasters, and the new generative modelling approach which provides a new way to represent the distinct weather situations and the certainty of those predictions, makes this a significant step forward.”

Do I need a brolly? Google uses AI to try to improve two-hour rain forecasts | UK weather | The Guardian



From: Frank Sully, 9/29/2021 1:46:50 PM
1 Recommendation   of 15580
 
Google is redesigning Search using A.I. technologies and new features

Sarah Perez

September 29, 2021



Google announced today it will be applying A.I. advancements, including a new technology called Multitask Unified Model (MUM) to improve Google Search. At the company’s Search On event, the company demonstrated new features, including those that leverage MUM, to better connect web searchers to the content they’re looking for, while also making web search feel more natural and intuitive.

One of the features being launched is called “Things to know,” which will focus on making it easier for people to understand new topics they’re searching for. This feature understands how people typically explore various topics and then shows web searchers the aspects of the topic people are most likely to look at first.

For example, Google explained, if you were searching for “acrylic painting,” it may suggest “Things to know” like how to get started with painting, step-by-step, or the different styles of acrylic painting, tips about acrylic painting, how to clean acrylic paint, and more. In this example, Google is able to identify over 350 different topics related to acrylic painting, it notes.

This feature will launch in the coming months, but Google notes it will also be expanded in the future by using MUM to help web users unlock even deeper insights into the topic beyond what they may have thought to look for — like “how to make acrylic paintings with household items.”

The company is also developing new ways to help web users both refine and broaden their searches without having to start over with a new query.

To continue the acrylic painting example, Google may offer to connect you to information about specific painting techniques, like puddle pouring, or art classes you could take. You could then zoom into one of those other topics in order to see a visually rich page of search results and ideas from across the web, including articles, images, videos, and more.



These pages are meant to better compete with Pinterest, it seems, as they’re also able to help people become inspired by searches — similar to how Pinterest’s image-heavy pinboard aims to turn people’s visual inspiration into action — like visiting a website or making an online purchase.

Google says the pages will be useful for searches where users are “looking for inspiration,” like “Halloween decorating ideas” or “indoor vertical garden ideas” or other ideas to try. This feature can be tried out today on mobile devices.

Google is also upgrading video search. Already, the company uses A.I. to identify key moments inside a video. Now, it will take things further with the launch of a feature that will identify the topics in a video — even if the topic isn’t explicitly mentioned in the video — then provide links that allow users to dig deeper and learn more.

That means when you’re watching a YouTube video, MUM will be used to understand what the video is about and make suggestions. In an example, a video about Macaroni penguins may point users to a range of related videos, like those that talk about how Macaroni penguins find their family members and navigate predators. MUM can identify these terms to search for, even if they’re not explicitly said in the video.

This feature will roll out in an initial version on YouTube Search in the weeks ahead, and will be updated to include more visual enhancements in the coming months, says Google.

This change could also help to drive increased search traffic to Google, by leveraging YouTube’s sizable reach. Many Gen Z users already search for online content differently than older generations, studies have found. They tend to use multiple social media channels, have a mobile-first mindset, and are engaged with video content. A “Think with Google” study, for instance, found that 85% of Gen Z teenagers would use YouTube regularly to find content, while 80% said YouTube videos had successfully taught them something. Other data had demonstrated that Gen Z prefers to learn about new ideas and products through video as well, not text, native ads, or other content formats.

For Google, this sort of addition may be necessary because the shift to mobile is impacting its search dominance. Many mobile shopping searches today start directly on Amazon. Plus, when iPhone users need to do something specific on their phone, they often turn to Siri, Spotlight, the App Store, or a native app to get help.

Google also today unveiled how it’s using MUM technology to improve visual searches using Google Lens.

techcrunch.com
