
jerryking : predictive_analytics   24

Opinion | How Artificial Intelligence Can Save Your Life
June 24, 2019 | The New York Times | By David Brooks.
Opinion Columnist

In his book “Deep Medicine,” which is about how A.I. is changing medicine across all fields, Eric Topol describes a study in which a learning algorithm was given medical records to predict who was likely to attempt suicide. It accurately predicted attempts nearly 80 percent of the time. By incorporating data on real-world interactions such as laughter and anger, an algorithm in a similar study was able to reach 93 percent accuracy.....
algorithms  artificial_intelligence  books  David_Brooks  depression  diagnostic  doctors  medical  mens'_health  mental_health  op-ed  pattern_recognition  predictive_analytics  tools  visual_cues 
june 2019 by jerryking
What Land Will Be Underwater in 20 Years? Figuring It Out Could Be Lucrative
Feb. 23, 2018 | The New York Times | By Brad Plumer

In Charleston, S.C., where the ports have been expanding to accommodate larger ships sailing through the newly widened Panama Canal, a real-estate developer named Xebec Realty recently went looking for land to build new warehouses and logistics centers.

But first, Xebec had a question: What were the odds that the sites it was considering might be underwater in 10 or 20 years?......Yet detailed information about the city’s climate risks proved surprisingly hard to find. Federal flood maps are based on historical data, and won’t tell you how sea-level rise could exacerbate flooding in the years ahead.....So Xebec turned to a Silicon Valley start-up called Jupiter, which offered to analyze local weather and hydrological data and combine it with climate model projections to assess the potential climate risks Xebec might face in Charleston over the next few decades from things like heavier rainfall, sea level rise or increased storm surge....the reliability of Jupiter's predictive analytics is uncertain....that said, “In economics, information has value if you would make a different decision based on that information,”...... Congress has generally underfunded initiatives such as those at the Federal Emergency Management Agency to incorporate climate change into its federal flood maps.......to get a full picture of flooding risk, you need expertise in weather, but also climate and hydrology and engineering and running complex models on the latest computer hardware,” ... “All of those specialized disciplines are usually heavily siloed within the public sector or the scientific community.”....Jupiter, which acknowledges the uncertainties in climate forecasting, will have to prove that a market exists....flooding and other disasters have led to record losses by insurers.....[Those] losses raised the stakes in terms of trying to get the best possible science on your side when you’re pricing risk,” said John Drzik, president of global risk at Marsh,
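
The article does not spell out Jupiter's methodology, but the general idea of combining a historical record with a climate projection can be sketched very simply. The snippet below is a minimal, hypothetical illustration: a synthetic record of annual peak water levels is shifted by an assumed sea-level-rise offset to estimate the chance a site elevation is exceeded over a planning horizon. The site elevation, the record, and the 0.3 m rise are all invented numbers, not Jupiter's model.

```python
# Hedged sketch: historical flood record + projected sea-level rise ->
# exceedance risk for a candidate site. All values are illustrative.
import random
from statistics import NormalDist, mean, stdev

random.seed(42)

# Hypothetical 50-year record of annual peak water levels (metres above datum).
historical_peaks = [random.gauss(2.0, 0.4) for _ in range(50)]

site_elevation_m = 3.0        # assumed elevation of the candidate warehouse pad
projected_rise_m = 0.3        # assumed sea-level rise over the planning horizon
horizon_years = 20

mu, sigma = mean(historical_peaks), stdev(historical_peaks)

# Annual probability that a peak (shifted by the projected rise) exceeds the
# site elevation, assuming normally distributed annual peaks.
p_annual = 1.0 - NormalDist(mu + projected_rise_m, sigma).cdf(site_elevation_m)

# Chance of at least one flood over the horizon, assuming independent years.
p_horizon = 1.0 - (1.0 - p_annual) ** horizon_years

print(f"Annual exceedance probability: {p_annual:.3f}")
print(f"Probability of at least one flood in {horizon_years} years: {p_horizon:.3f}")
```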
climate_change  weather  start_ups  data_driven  forecasting  hard_to_find  predictive_analytics  tools  Charleston  South_Carolina  uncertainty  sea-level_rise  floods  commercial_real_estate  adaptability  specificity  catastrophes  catastrophic_risk  unpredictability  coastal  extreme_weather_events  insurance  FEMA  cartography  floodplains  flood-risk  flood-risk_maps  mapping 
february 2018 by jerryking
Algos know more about us than we do about ourselves
NOVEMBER 24, 2017 | Financial Times | John Dizard.

When intelligence collectors and analysts take an interest in you, they usually start not by monitoring the content of your calls or messages, but by looking at the patterns of your communications. Who are you calling, how often and in what sequence? What topics do you comment on in social media?

This is called traffic analysis, and it can give a pretty good notion of what you and the people you know are thinking and what you are preparing to do. Traffic analysis started as a military intelligence methodology, and became systematic around the first world war. Without even knowing the content of encrypted messages, traffic analysts could map out an enemy “order of battle” or disposition of forces, and make inferences about commanders’ intentions.
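
A minimal sketch of the traffic-analysis idea: work only from who-contacts-whom metadata (never the content) and count connections to surface the most central participant. The call log and names below are invented for illustration; real traffic analysis is far richer than this.

```python
# Hedged sketch of traffic analysis: infer structure from communication
# metadata alone (who contacts whom, how often). Hypothetical call log.
from collections import Counter, defaultdict

call_log = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("alice", "bob"), ("eve", "bob"),
    ("alice", "dave"), ("alice", "carol"),
]

pair_counts = Counter(call_log)          # frequency of each ordered pair
degree = defaultdict(int)                # total contacts per participant
for caller, callee in call_log:
    degree[caller] += 1
    degree[callee] += 1

# The most-connected participant is a crude stand-in for a network "hub" --
# the kind of inference an analyst can draw without reading any content.
hub, hub_degree = max(degree.items(), key=lambda kv: kv[1])
print("Contact frequencies:", dict(pair_counts))
print(f"Most connected participant: {hub} ({hub_degree} contacts)")
```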

Traffic analysis techniques can also cut through the petabytes of redundant babble and chatter in the financial and political worlds. Even with state secrecy and the forests of non-disclosure agreements around “proprietary” investment or trading algorithms, crowds can be remarkably revealing in their open-source posts on social media.

Predata, a three-year-old New York and Washington-based predictive data analytics provider, has a Princeton-intensive crew of engineers and international affairs graduates working on early “signals” of market and political events. Predata trawls the open metadata for users of Twitter, Wikipedia, YouTube, Reddit and other social media, and analyses it to find indicators of future price moves or official actions.

I have been following their signals for a while and find them to be useful indicators. Predata started by creating political risk indicators, such as Iran-Saudi antagonism, Italian or Chilean labour unrest, or the relative enthusiasm for French political parties. Since the beginning of this year, they have been developing signals for financial and commodities markets.

The 1-9-90 rule
1 per cent of internet users initiate discussions or content, 9 per cent transmit content or participate occasionally and 90 per cent are consumers or ‘lurkers’

Take the example of the company’s BoJ signal. For this, Predata collects the metadata from 300 sources, such as Twitter users, contested Wikipedia edits or YouTube items created by Japanese monetary policy geeks. Of those, at any time perhaps 100 are important, and 8 to 10 turn out to be predictive....This is where you need some domain knowledge [domain expertise = industry expertise]. It turns out that Twitter is pretty important for monetary policy, along with the Japanese-language Wiki page for the Bank of Japan, or, say, a YouTube video of [BoJ governor] Haruhiko Kuroda’s cross-examination before a Diet parliamentary committee.

“Then you build a network of candidate discussions [JK: training beds] and look for the pattern those took before historical moves. The machine-learning algorithm goes back and picks the leads and lags between traffic and monetary policy events.” [Jk: Large data sets with known correct answers serve as a training bed and then new data serves as a test bed]
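
A minimal illustration of that leads-and-lags idea: cross-correlate a daily metadata-traffic series against a series of event dates at different offsets and keep the offset with the strongest relationship. The synthetic data and the simple correlation measure below are stand-ins, not Predata's machine-learning pipeline.

```python
# Hedged sketch: find the lead time at which a "traffic" series best
# anticipates policy events. Synthetic data; not Predata's algorithm.
import numpy as np

rng = np.random.default_rng(0)
days = 365
events = np.zeros(days)
events[rng.choice(days, size=6, replace=False)] = 1.0   # 6 hypothetical policy events

# Traffic that tends to spike ~5 days before each event, plus background noise.
traffic = rng.poisson(3, size=days).astype(float)
for t in np.flatnonzero(events):
    if t >= 5:
        traffic[t - 5] += 20

def lagged_corr(x, y, lag):
    """Correlation between x and y occurring `lag` days later."""
    return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])

best_lag = max(range(1, 31), key=lambda lag: lagged_corr(traffic, events, lag))
print(f"Traffic is most predictive {best_lag} days ahead "
      f"(corr = {lagged_corr(traffic, events, best_lag):.2f})")
```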

Typically, Predata’s algos seem to be able to signal changes in policy or big price moves [jk: inflection points] somewhere between 2 days and 2 weeks in advance. Unlike some academic Twitter scholars, Predata does not do systematic sentiment analysis of tweets or Wikipedia edits. “We only look for how many people there are in the conversation and comments, and how many people disagreed with each other. We call the latter the coefficient of contestation,” Mr Shinn says.
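
The "coefficient of contestation" is described only loosely here (how many people are in the conversation, and how many disagree). A minimal, hypothetical reading of that definition; the thread counts and the ratio below are assumptions, not Predata's formula.

```python
# Hedged sketch of a "coefficient of contestation": share of interactions
# in a discussion that are disagreements. Thread data is invented.
threads = [
    {"topic": "BoJ yield policy", "participants": 12, "replies": 40, "disagreements": 22},
    {"topic": "Diet testimony",   "participants": 5,  "replies": 9,  "disagreements": 1},
]

for t in threads:
    contestation = t["disagreements"] / t["replies"]
    print(f"{t['topic']}: {t['participants']} participants, "
          f"contestation = {contestation:.2f}")
```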

The lead time for Twitter, Wiki or other social media signals varies from one market to another. Foreign exchange markets typically move within days, bond yields within a few days to a week, and commodities prices within a week to two weeks. “If nothing happens within 30 days,” says Mr Lee, “then we say we are wrong.”
algorithms  alternative_data  Bank_of_Japan  commodities  economics  economic_data  financial_markets  industry_expertise  inflection_points  intelligence_analysts  lead_time  machine_learning  massive_data_sets  metadata  non-traditional  Predata  predictive_analytics  political_risk  signals  social_media  spycraft  traffic_analysis  training_beds  Twitter  unconventional 
november 2017 by jerryking
Novartis’s new chief sets sights on ‘productivity revolution’
SEPTEMBER 25, 2017 | Financial Times | Sarah Neville and Ralph Atkins.

The incoming chief executive of Novartis, Vas Narasimhan, has vowed to slash drug development costs, eyeing savings of up to 25 per cent on multibillion-dollar clinical trials as part of a “productivity revolution” at the Swiss drugmaker.

The time and cost of taking a medicine from discovery to market has long been seen as the biggest drag on the pharmaceutical industry’s performance, with the process typically taking up to 14 years and costing at least $2.5bn.

In his first interview as CEO-designate, Dr Narasimhan says analysts have estimated between 10 and 25 per cent could be cut from the cost of trials if digital technology were used to carry them out more efficiently. The company has 200 drug development projects under way and is running 500 trials, so “that will have a big effect if we can do it at scale”.......Dr Narasimhan plans to partner with, or acquire, artificial intelligence and data analytics companies, to supplement Novartis’s strong but “scattered” data science capability.....“I really think of our future as a medicines and data science company, centred on innovation and access.”

He must now decide where Novartis has the capability “to really create unique value . . . and where is the adjacency too far?”.....Does he need the cash pile that would be generated by selling off these parts of the business to realise his big data vision? He says: “Right now, on data science, I feel like it’s much more about building a culture and a talent base . . . ...Novartis has “a huge database of prior clinical trials and we know exactly where we have been successful in terms of centres around the world recruiting certain types of patients, and we’re able to now use advanced analytics to help us better predict where to go . . . to find specific types of patients.

“We’re finding that we’re able to significantly reduce the amount of time that it takes to execute a clinical trial and that’s huge . . . You could take huge cost out.”...Dr Narasimhan cites one inspiration as a visit to Disney World with his young children where he saw how efficiently people were moved around the park, constantly monitored by “an army of [Massachusetts Institute of Technology-]trained data scientists”.
He has now harnessed similar technology to overhaul the way Novartis conducts its global drug trials. His clinical operations teams no longer rely on Excel spreadsheets and PowerPoint slides, but instead “bring up a screen that has a predictive algorithm that in real time is recalculating what is the likelihood our trials enrol, what is the quality of our clinical trials”.

“For our industry I think this is pretty far ahead,” he adds.
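
The enrollment-likelihood screen described above is not specified in detail; a minimal sketch of the general idea is to score trial sites from historical recruitment performance. The features, synthetic data, and the logistic-regression choice below are assumptions, not Novartis's system.

```python
# Hedged sketch: score candidate trial sites on the likelihood of hitting
# their enrollment target, learned from past site performance. Synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_sites = 200

X = np.column_stack([
    rng.gamma(2.0, 2.0, n_sites),          # past enrollment rate (patients/month)
    rng.integers(50, 5000, n_sites),       # eligible-patient pool nearby
    rng.integers(0, 10, n_sites),          # competing trials nearby
])
# Hypothetical label: did the site hit its target last time?
hit_target = (0.5 * X[:, 0] + X[:, 1] / 2000 - 0.3 * X[:, 2]
              + rng.normal(0, 1, n_sites)) > 1.5

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, hit_target)

candidates = np.array([[4.0, 1200, 1],     # strong history, decent pool
                       [1.0, 300, 6]])     # weak history, crowded market
print("Enrollment likelihood:", model.predict_proba(candidates)[:, 1].round(2))
```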

More broadly, he is realistic about the likely attrition rate. “We will fail at many of these experiments, but if we hit on a couple of big ones that are transformative, I think you can see a step change in productivity.”
adjacencies  algorithms  analytics  artificial_intelligence  attrition_rates  CEOs  data_driven  data_scientists  drug_development  failure  Indian-Americans  kill_rates  massive_data_sets  multiple_targets  Novartis  pharmaceutical_industry  predictive_analytics  productivity  productivity_payoffs  product_development  real-time  scaling  spreadsheets  Vas_Narasimhan 
november 2017 by jerryking
GE, Cisco flex major muscle in trend toward 'Industrial Internet' - The Globe and Mail
DAVID MILSTEAD
The Globe and Mail
Published Friday, Jun. 05, 2015

What GE did, says William Blair & Co. analyst Nicholas Heymann, is write software to collect data from its equipment – from locomotives to jet engines – and develop algorithms that help its customers make better plans, like a railway predicting where to add capacity based on port traffic, or where an airline should develop a hub for travel in 2020....Cisco, the global leader in the routers that allow computer networks to communicate, has spent $1-billion setting up six global “Internet of Everything” data centres and committed $100-million to an innovation fund. It’s promoting app development in developer communities and is working to create technical standards for the industry. It’s deployed Internet of Things offerings at several major customers, including Shell and Harley-Davidson,
sensors  Industrial_Internet  GE  Cisco  algorithms  predictive_analytics 
june 2015 by jerryking
The Sensor-Rich, Data-Scooping Future - NYTimes.com
APRIL 26, 2015 | NYT | By QUENTIN HARDY.

Sensor-rich lights, eventually to be found in offices and homes, come from a company that will sell knowledge of behavior as much as physical objects....The Internet will be almost fused with the physical world. The way Google now looks at online clicks to figure out which ad to put in front of you next will become the way companies gain once-hidden insights into the patterns of nature and society.

G.E., Google and others expect that knowing and manipulating these patterns is the heart of a new era of global efficiency, centered on machines that learn and predict what is likely to happen next.

“The core thing Google is doing is machine learning,” says Eric Schmidt....The great data science companies of our sensor-packed world will have experts in arcane reaches of statistics, computer science, networking, visualization and database systems, among other fields. Graduates in those areas are already in high demand.

Nor is data analysis just a question of computing skills; data access is also critically important. As a general rule, the larger and richer a data set a company has, the better its predictions become. ....an emerging area of computer analysis known as “deep learning” will blow away older fields.

While both Facebook and Google have snapped up deep-learning specialists, Mr. Howard said, “they have far too much invested in traditional computing paradigms. They are the equivalent of Kodak in photography.” Echoing Mr. Chui’s point about specialization, he said he thought the new methods demanded understanding of specific fields to work well.

It is of course possible that both things are true: Big companies like Google and Amazon will have lots of commodity data analysis, and specialists will find niches. That means for most of us, the answer to the future will be in knowing how to ask the right kinds of questions.
sensors  GE  GE_Capital  Quentin_Hardy  data  data_driven  data_scientists  massive_data_sets  machine_learning  automated_reasoning  predictions  predictive_analytics  predictive_modeling  layer_mastery  core_competencies  Enlitic  deep_learning  niches  patterns  analog  insights  latent  hidden  questions  Google  Amazon  aftermath  physical_world  specialization  consumer_behavior  cyberphysical  arcane_knowledge  artificial_intelligence  test_beds 
april 2015 by jerryking
Amazon to Sell Predictions in Cloud Race Against Google and Microsoft - NYTimes.com
By QUENTIN HARDY APRIL 9, 2015

Amazon Web Services announced that it was selling to the public the same kind of software it uses to figure out what products Amazon puts in front of a shopper, when to stage a sale or who to target with an email offer.

The techniques, called machine learning, are applicable for technology development, finance, bioscience or pretty much anything else that is getting counted and stored online these days. In other words, almost everything.
Quentin_Hardy  Amazon  Google  machine_learning  cloud_computing  AWS  Microsoft  Azure  predictions  predictive_analytics  predictive_modeling  automated_reasoning 
april 2015 by jerryking
The Data Companies Wish They Had About Customers - WSJ
March 23, 2014 | WSJ | by Max Taves.

We asked companies what data they wish they had—and how they would use it. Here's what they said....
(A) Dining----Graze.com has a huge appetite for data. Every hour, the mail-order snack business digests 15,000 user ratings about its foods, which it uses to better understand what its customers like or dislike and to predict what else they might like to try...more data could help him understand customers' tastes even better. Among the information he wants most is data about customers' dietary habits, such as what they buy at grocery stores, as well as better information about what they look at on Graze's own site. And because the dietary needs of children change rapidly, he'd like to know if his customers have children and, if so, their ages.
(B) Energy-----Energy consumption is among its customers' main concerns, says Savant CEO William Lynch. For instance, the company offers a product giving homeowners the real-time ability to see things like how many kilowatts it takes to heat the hot tub in January. Because of privacy concerns, Savant doesn't collect homeowners' energy data. But if the company knew more about customers' energy use, it could help create customized plans to conserve energy. "We could make recommendations on how to set up your thermostat to save a lot of money,
(C) Banking-----the Bank of the West would like "predictive life-event data" about its customers—like graduation, vacation or retirement plans—to create products more relevant to their financial needs...At this point, collecting that breadth of data is a logistical and regulatory challenge, requiring very different sources both inside and outside the bank.
(D) Appliances-----Whirlpool Corp.has a vast reach in American households—but wants to know more about its customers and how they actually use its products. Real-time use data could not only help shape the future designs of Whirlpool products, but also help the company predict when they're likely to fail.
(E) Healthcare----Explorys creates software for health-care companies to store, access and make sense of their data. It holds a huge trove of clinical, financial and operational information—but would like access to data about patients at home, such as their current blood-sugar and oxygen levels, weight, heart rates and respiratory health. Having access to that information could help providers predict things like hospitalizations, missed appointments and readmissions and proactively reach out to patients,
(F) Healthcare----By analyzing patient data, Carolinas HealthCare System of Charlotte, N.C., can predict readmission rates with 80% accuracy,
(G) Law----Law firms that specialize in defense work are typically reactive; however, some are working towards becoming more proactive, coveting an ability to predict lawsuits—and prevent them. How? By analyzing reams of contracts and looking for common traits and language that often lead to problems.
(H) Defense---BAE Systems PLC invests heavily in protecting itself from cyberattacks. But it says better data from its suppliers could help improve its defenses...if its suppliers get cyberattacked, its own hardware and software could be compromised. But "those suppliers are smaller businesses with lesser investments in their security," ...A lack of trust among suppliers, even those that aren't direct competitors, means only a small percentage of them disclose the data showing the cyberattacks on their systems. Sharing that data, he says, would strengthen the security of every product BAE makes. [BAE is expressing recognition of its vulnerability to network risk].
data  data_driven  massive_data_sets  Graze  banking  cyber_security  BAE  law_firms  Whirlpool  genomics  social_data  appliances  sense-making  predictive_analytics  dark_data  insights  customer_insights  real-time  design  failure  cyberattacks  hiring-a-product-to-do-a-specific-job  network_risk  shifting_tastes  self-protection  distrust  supply_chains 
november 2014 by jerryking
They’re Tracking When You Turn Off the Lights - WSJ
By ELIZABETH DWOSKIN
Oct. 20, 2014

Tech companies have used the technologies and techniques collectively known as big data to make business decisions and shape their customers’ experience. Now researchers are bringing big data into the public sphere, aiming to improve quality of life, save money, and understand cities in ways that weren’t possible only a few years ago....Municipal sensor networks offer big opportunities, but they also carry risks. In turning personal habits into digital contrails, the technology may tempt authorities to misuse it. While academics aim to promote privacy and transparency, some worry that the benefits of big data could be lost if the public grows wary of being monitored... Anthony Townsend, author of the book “Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia.”...The goal isn’t to sell products or spy on people, the academics say, but to bolster quality of life and knowledge of how cities function
cities  massive_data_sets  sensors  urban  privacy  smart_cities  predictive_analytics  books  quality_of_life  customer_experience  open_data  community_collaboration  white_hats 
october 2014 by jerryking
Forget the CV, data decide careers - FT.com
July 9, 2014 | FT |By Tim Smedley.

The human touch of job interviews is under threat from technology, writes Tim Smedley, but can new techniques be applied to top-level recruitment?

"I no longer look at somebody's CV to determine if we will interview them or not," declares Teri Morse, who oversees the recruitment of 30,000 people each year at Xerox Services. Instead, her team analyses personal data to determine the fate of job candidates.

She is not alone. "Big data" and complex algorithms are increasingly taking decisions out of the hands of individual interviewers - a trend that has far-reaching consequences for job seekers and recruiters alike.

The company whose name has become a synonym for photocopy has turned into one that helps others outsource everyday business processes, from accounting to human resources. It recently teamed up with Evolv, which uses data sets of past behaviour to predict everything from salesmanship to loyalty.

For Xerox this means putting prospective candidates for the company's 55,000 call-centre positions through a screening test that covers a wide range of questions. Evolv then lays separate data it has mined on what causes employees to leave their call-centre jobs over the candidates' responses to predict which of them will stick around and which will further exacerbate the already high churn rate call centres tend to suffer.

The results are surprising. Some are quirky: employees who are members of one or two social networks were found to stay in their job for longer than those who belonged to four or more social networks (Xerox recruitment drives at gaming conventions were subsequently cancelled). Some findings, however, were much more fundamental: prior work experience in a similar role was not found to be a predictor of success.

"It actually opens up doors for people who would never have gotten to interview based on their CV," says Ms Morse. Some managers initially questioned why new recruits were appearing without any prior relevant experience. As time went on, attrition rates in some call centres fell by 20 per cent and managers no longer quibbled. "I don't know why this works," admits Ms Morse, "I just know it works."

Organisations have long held large amounts of data. From financial accounts to staff time sheets, the movement from paper to computer made it easier to understand and analyse. As computing power increased exponentially, so did data storage. The floppy disk of the 1990s could store barely more than one megabyte of data; today a 16 gigabyte USB flash drive costs less than a fiver ($8).

It is simple, then, to see how recruiters arrive at a point where crunching data could replace the human touch of job interviews. Research by NewVantage Partners, the technology consultants, found that 85 per cent of Fortune 1000 executives in 2013 had a big data initiative planned or in progress, with almost half using big data operationally.

HR services provider Ceridian is one of many companies hoping to tap into the potential of big data for employers. "From an HR and recruitment perspective, big data enables you to analyse volumes of data that in the past were hard to access and understand," explains David Woodward, chief product and innovation officer at Ceridian UK.

This includes "applying the data you hold about your employees and how they've performed, to see the causal links between the characteristics of the hire that you took in versus those that stayed with you and became successful employees. Drawing those links can better inform your decisions in the hiring process."

Data sets need not rely on internal data, however. The greatest source of big data is the internet, which is easy for both FTSE 100 and smaller companies to access.

"Social media data now gives us the ability to 'listen' to the business," says Zahir Ladhani, vice-president at IBM Smarter Workforce. "You can look at what customers are saying about your business, what employees are saying, and what you yourself are saying - cull all that data together and you can understand the impact.

"Most recruitment organisations now use social media and job-site data," says Mr Ladhani. "We looked at an organisation which had very specialised, very hard to find skill sets. When we analysed the data of the top performers in that job family, we found out that they all hung out at a very unique, niche social media site. Once we tapped into that database, boom!"

Ceridian, too, has worked with companies to "effectively scan the internet to see what jobs are being posted through the various job boards, in what parts of the country," says Mr Woodward. "If you're looking to open a particular facility in a part of the country, for example, you'll be able to see whether there's already a high demand for particular types of skills."

Experts appear split on whether the specialisation required for executive recruitment lends itself to big data.

"I hire 30,000 call-centre people on an annual basis - we don't hire that many executives," says Ms Morse, adding "there's not enough volume". However Mr Ladhani disagrees, believing that over time the data set an organisation holds on senior management hires would become statistically valid.

As more companies start to analyse their employee data to make hiring decisions, could recruitment finally become more of a science than an art?

"The potential is clearly much greater now than ever before to crunch very large volumes of data and draw conclusions from that which can make better decisions," says Mr Woodward. "The methods and computing power being used in weather forecasting 10 years ago are now available to us all . . . who knows where this may go."

It is a trend worth considering - to get your next job, perfecting your CV could well be less important than having carefully considered the footprint you leave in cyberspace.

Case study: Demographic drilling-down helps LV= recast recruitment ads

Kevin Hough, head of recruiting at insurance firm LV=, was a pioneer of big data before he had heard the term.

A year ago, the question of where best to target the firm's recruitment advertising prompted an innovative answer. LV= looked up the postcodes at which its current staff lived and organised the findings by the employee's level of seniority, explains Mr Hough. "Using software called Geo-Maps, which works similarly to Google Maps, we could zoom in and out of clusters of our people to see where they are willing to travel from to get to work."

Next, the insurer looked at the locations from which candidates were applying and compared those with the postcodes of current staff. It also looked at the locations and interests of its followers on social media sites, such as Facebook and LinkedIn. The analysis included their interests, stated sexual orientation, ethnicity and gender.

This allowed the firm to create a profile of its typical, successful candidate, also taking into consideration their age and location.
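
A minimal sketch of the postcode comparison described above: areas that supply plenty of current staff but few applicants are likely gaps in the reach of recruitment advertising. The postcode areas and counts below are invented, and the 0.5 threshold is an arbitrary illustrative cut-off.

```python
# Hedged sketch of an LV=-style gap analysis: compare where staff live with
# where applicants come from. All postcodes and counts are hypothetical.
from collections import Counter

staff_postcode_areas = ["BH", "BH", "BH", "SO", "SO", "BS", "BS", "BS", "DT"]
applicant_postcode_areas = ["BH", "BH", "SO", "BS"]

staff = Counter(staff_postcode_areas)
applicants = Counter(applicant_postcode_areas)

print("Area  staff  applicants  applicants per staff member")
for area, n_staff in staff.most_common():
    ratio = applicants[area] / n_staff
    flag = "  <- possible advertising gap" if ratio < 0.5 else ""
    print(f"{area:4}  {n_staff:5}  {applicants[area]:10}  {ratio:.2f}{flag}")
```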

"What was really interesting was the reach some of our advertising was having and, more importantly, some of the gaps," Mr Hough says.

The analysis, which took little investment or expertise, has allowed LV= to redesign its recruitment advertising.

"Sometimes, with all the clever systems that people have in organisations, you can be blinded to the simple, raw data that is there," says Mr Hough.

Next, LV= will add performance review data, taking the analysis to a higher level. He explains that this piece of work will ask which of the people recruited a year before are still there.

"It will help shape not only how we attract people, but will even start to shape some of the roles themselves," he says.

massive_data_sets  Xerox  Evolv  analytics  predictive_analytics  job_boards  data  data_driven  hiring  Managing_Your_Career  unstructured_data  hard_to_find 
july 2014 by jerryking
A 25-Question Twitter Quiz to Predict Retweets - NYTimes.com
JULY 1, 2014 | NYT | Sendhil Mullainathan.

how “smart” algorithms are created from big data: Large data sets with known correct answers serve as a training bed and then new data serves as a test bed — not too differently from how we might learn what our co-workers find funny....one of the miracles of big data: Algorithms find information in unexpected places, uncovering “signal” in places we thought contained only “noise.”... the Achilles’ heel of prediction algorithms: being good at prediction often does not mean being better at creation. (1) One barrier is the oldest of statistical problems: Correlation is not causation. (2) An inherent paradox lies in predicting what is interesting. Rarity and novelty often contribute to interestingness — or at the least to drawing attention. But once an algorithm finds those things that draw attention and starts exploiting them, their value erodes. (3) Finally, and perhaps most perversely, some of the most predictive variables are circular....The new big-data tools, amazing as they are, are not magic. Like every great invention before them — whether antibiotics, electricity or even the computer itself — they have boundaries in which they excel and beyond which they can do little.
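
The training-bed/test-bed idea maps directly onto a standard supervised-learning workflow: fit on labelled historical tweets, then check on held-out tweets the model never saw. A minimal sketch with synthetic features; the column's 25 quiz questions are not reproduced here and the feature choices are assumptions.

```python
# Hedged sketch of the training-bed / test-bed idea for retweet prediction.
# Features and data are synthetic stand-ins, not the column's quiz.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_tweets = 2000

X = np.column_stack([
    rng.normal(6, 2, n_tweets),            # log follower count
    rng.integers(0, 2, n_tweets),          # contains a link?
    rng.integers(0, 24, n_tweets),         # hour of day posted
])
retweeted = (0.4 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, n_tweets)) > 3.0

# Training bed (known correct answers) vs test bed (new data).
X_train, X_test, y_train, y_test = train_test_split(
    X, retweeted, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```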
predictive_analytics  massive_data_sets  limitations  algorithms  Twitter  analytics  data  data_driven  Albert_Gore  Achilles’_heel  boundary_conditions  noise  signals  paradoxes  correlations  causality  counterintuitive  training_beds  test_beds  rarity  novelty  interestingness  hard_to_find 
july 2014 by jerryking
To hire without using ads or recruiters - genius or folly? - The Globe and Mail
Aug. 28, 2013 | The Globe and Mail | SUSAN SMITH.

That’s the challenge faced by Wojciech Gryc, 27, who started Canopy Labs a year and a half ago in Toronto. The company makes software for businesses that want to track their customers’ preferences using data analytics....The product compiles information from e-mail, e-commerce sites, social media, voice mail and call centres to help predict how likely people are to remain customers, how much they are likely to spend and which marketing messages they are likely to respond to.
hiring  Toronto  start_ups  predictive_analytics  data_scientists  recruiting  DIY  fallacies_follies 
october 2013 by jerryking
Traffic accidents: predict, then prevent
March 19, 2013 | G&M | NICK ROCKEL.
Edmonton cut crash deaths and injuries by nearly 40 per cent by crunching data ranging from road design to license plates....Beyond looking for existing patterns to improve traffic safety, the city's Office of Traffic Safety (OTS) has started peering into the future, with the use of predictive analytics.

Last year, its analytics team collaborated with the University of Alberta's departments of Civil and Environmental Engineering, and Earth and Atmospheric Sciences, to build a computer model that shows, based on snow conditions, when to expect a higher number of collisions.

This weather model predicts collision trends seven days in advance with about 90-per-cent accuracy, Mr. Shimko says. As conditions turn hazardous, the city's Traffic Management Centre can use this information as part of its digital sign messaging.
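
The article does not describe the model's form; a minimal sketch of the general idea is to relate daily collision counts to snow-condition features and then predict for a forecast day. The features, synthetic data, and Poisson-regression choice below are assumptions, not the University of Alberta / OTS model.

```python
# Hedged sketch of a weather-based collision model. Synthetic data only.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(11)
n_days = 500

X = np.column_stack([
    rng.gamma(1.0, 2.0, n_days),           # snowfall (cm)
    rng.normal(-5, 8, n_days),             # mean temperature (deg C)
    rng.integers(0, 2, n_days),            # icy-roads flag
])
# Hypothetical collision counts rising with snowfall and icy roads.
expected = np.exp(2.0 + 0.08 * X[:, 0] - 0.01 * X[:, 1] + 0.4 * X[:, 2])
collisions = rng.poisson(expected)

model = PoissonRegressor(max_iter=300).fit(X, collisions)

forecast_day = np.array([[12.0, -12.0, 1]])   # heavy snow, very cold, icy
print("Expected collisions:", round(float(model.predict(forecast_day)[0]), 1))
```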

As well, OTS works with the Edmonton Police Service on several initiatives to reduce speeding and other traffic violations.

Last year it began collecting and analyzing data on every licence plate detected via intersection safety devices, photo speed enforcement and other automated means, and giving high-risk drivers' plates to the police. "Then the police can target their resources based on those high-risk drivers," who typically have more injury collisions, Mr. Shimko says. "Some of the initial results are quite promising."

Edmonton  predictive_analytics  analytics  data  prevention  engineering  data_driven  massive_data_sets  weather  traffic_congestion  OTS  collisions 
march 2013 by jerryking
How to decode what lies ahead
January 22, 2013 | CHRIS ATCHISON.
Companies are using vast data stores to predict everything from consumer trends to maintenance needs, to gain competitive advantage
predictive_analytics  CAA  data_driven  metrics  massive_data_sets 
january 2013 by jerryking
Jeff Hawkins Develops a Brainy Big Data Company - NYTimes.com
November 28, 2012 | NYT | By QUENTIN HARDY

Jeff Hawkins, who helped develop the PalmPilot, an early and successful mobile device, is a co-founder of Numenta, a predictive software company....Numenta’s product, called Grok, is a cloud-based service that works much the same way. Grok takes steady feeds of data from things like thermostats, Web clicks, or machinery. From initially observing the data flow, it begins making guesses about what will happen next. The more data, the more accurate the predictions become.
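
Grok's internals are not described in the excerpt; a minimal sketch of the general pattern it points at is a model that keeps updating itself from a continuous feed and predicts the next reading. Below, a simple exponentially weighted online estimate stands in for the real learning system, and the sensor stream is invented.

```python
# Hedged sketch of learning from a streaming feed: update an online estimate
# as data arrives and use it to predict the next reading. Not Grok's algorithm.
import random

random.seed(5)

def thermostat_feed(n):
    """Hypothetical sensor stream: a slow daily cycle plus noise."""
    for t in range(n):
        yield 20 + 3 * ((t % 24) / 24) + random.gauss(0, 0.3)

alpha = 0.3                 # adaptation rate of the online estimate
estimate = None
abs_errors = []

for reading in thermostat_feed(500):
    if estimate is not None:
        # The estimate built from past data is the prediction for this reading.
        abs_errors.append(abs(reading - estimate))
    # Online update: fold the new reading into the running estimate.
    estimate = reading if estimate is None else alpha * reading + (1 - alpha) * estimate

print(f"Mean absolute prediction error: {sum(abs_errors)/len(abs_errors):.2f}")
```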
massive_data_sets  Grok  pattern_recognition  start_ups  streaming  aftermath  cloud_computing  predictions  predictive_analytics  Quentin_Hardy 
november 2012 by jerryking
A Shout-Out for Segmentation Data - BusinessWeek
March 15, 2011 | BusinessWeek | By G. Michael Maddock and Raphael Louis Vitón.

Quit yawning and start seizing on the wealth within segmentation data. Every department should demand to see this information. A simple, three-part formula:

Step 1. Define success. Get as specific as possible.
Step 2. Define the characteristics you want your segment to have.
Step 3. "Simply" find what predicts/correlates with these variables.

Having decided whom you want to go after, find the variables that will lead you to these people.

Asking Lots of Questions

Having identified this market, you go out and ask the potential customers within it as many questions as you can think of: how much they weigh, what snacks they eat, whether they have kids or a pet. Then you sort through the data and look for commonalities (Step 3).
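
Step 3 amounts to a correlation screen over survey responses. A minimal sketch with invented survey fields and an invented success measure (monthly spend); the relationships in the data are assumptions made purely for illustration.

```python
# Hedged sketch of Step 3: rank candidate survey variables by how strongly
# they correlate with the success measure defined in Step 1. Invented data.
import numpy as np

rng = np.random.default_rng(9)
n_respondents = 500

survey = {
    "snacks_per_week": rng.poisson(5, n_respondents).astype(float),
    "has_kids": rng.integers(0, 2, n_respondents).astype(float),
    "has_pet": rng.integers(0, 2, n_respondents).astype(float),
    "weight_kg": rng.normal(75, 12, n_respondents),
}
# Step 1's success measure: monthly spend (invented relationship below).
spend = (5 * survey["snacks_per_week"] + 12 * survey["has_kids"]
         + rng.normal(0, 10, n_respondents))

# Step 3: rank variables by absolute correlation with the success measure.
ranked = sorted(
    ((name, float(np.corrcoef(values, spend)[0, 1])) for name, values in survey.items()),
    key=lambda nv: abs(nv[1]), reverse=True,
)
for name, corr in ranked:
    print(f"{name:18} r = {corr:+.2f}")
```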
segmentation  market_segmentation  market_research  questions  JCK  sorting  correlations  predictive_analytics  ethnography  think_threes 
march 2011 by jerryking
