Epictetus: “Remember that it is we who torment, we who make difficulties for ourselves – that is, our opinions do. What, for instance, does it mean to be insulted? Stand by a rock and insult it, and what have you accomplished? If someone responds to insult like a rock, what has the abuser gained with his invective?”
Curt Long
100 Best Jobs
Very Interesting…
From: https://money.usnews.com/careers/best-jobs/rankings/the-100-best-jobs
What You’ll Need to Know In 2020 That You Don’t Know Now
By the year 2020, you will have to learn to drive a more automated car. You’ll get behind the wheel of a smart car that avoids fender benders by braking before you even see danger looming. At a much later date, you will slip into a bucket seat as if at the movies— snacks, reading material, and sodas at the ready— sit back, relax, program the car, and over the freeways to grandmother’s house you’ll go. Like the toaster and coffeemaker back home, the car’s sensors will monitor the activity and destinations of other cars on the road. “Going my way?” your vehicle will bleep in autospeak. “Indeed,” responds the living room on wheels in the left lane. And the two will hitch up and rocket toward their common goal together. This technology will conserve fuel and may save lives, but the pleasure of driving as you know it will be gone. That’s something you should know.
But perhaps by now you’ve realized that for every convenience technology bestows upon us, it chips away at something else. All of us, great souls as well as lost ones, must in time wrestle with this notion. If you are the poet Blake, in metered rhyme you decry the Satanic mills; if you are Kaczynski, you take to the hills and spit death by snail mail. Most of us simply acknowledge the trade-offs and move on.
Each time we do this, though, we march farther away from a world we can touch and comprehend in our bones toward one that we pray will work better. Consider: In the year 2020, you’ll identify yourself, gain access to homes and businesses, and board aircraft after a laser has measured the shape of your irises. But the price will be loss of privacy. A record of your transactions, your daily comings and goings, will be just a keyboard tap away from others.
Booting up your home PC has already become a public act. Meander the Web today, and almost every move you make is cataloged in service to the gods of commerce. They know what you’re buying. What you listen to. Where you chat.
By 2020 you’ll need to know how to clean up that electronic trail day in and day out. “Say you were searching for information on hats,” theorizes Jaron Lanier, computer scientist, musical composer, and virtual reality pioneer, “and you saw a link about hats, but when you got to it, it was actually a weird pornography site about hat fetishes. Then it turns out there’s a record that you visited this site, and now you’re getting bombarded with offers from people with hat fetishes. Furthermore, your friends are being contacted in case they have hat fetishes. All of a sudden you’re the hat fetish person in your social circle, and you have to go in and undo it.”
To throw the hounds off your scent, Lanier says, you could spend the afternoon downloading the Great Books or posing as a do-gooder in search of charities deserving of your drachmas. In time, you’ll be wielding electronica for the same reasons medieval crusaders took up sword and lance: to ward off intruders. Rooting out destructive viruses and spam in your equipment will become old hat, as will the regular checks you’ll be performing on your groceries and yourself. Tomorrow’s Kaczynskis will be able to concoct harmful viruses and insinuate them into the food supply, or perhaps release pathogens in public places. You’ll need to be ready for them. Daily computer checkups of your blood, saliva, or bodily waste will be effortless, the medical equivalent of checking your stock portfolio. “Real-time monitoring,” says James Weiland, assistant professor of ophthalmology at Johns Hopkins, “will tell you in the morning what vitamin your body is low on and what to have for breakfast.”
With all this new information, you’ll stand a better chance of living well beyond your biblical allotment of threescore and ten. More than 200,000 centenarians will inhabit the United States in 2020— why shouldn’t you be one of them?
To reach that age you’ll need to know enough to make more complicated medical choices: Do I want to jettison a limb and wait five years to regrow another? Shall I allow a phalanx of nanobots to scrape the plaque out of my arteries or opt to replace the vessels altogether? “Amateurs may be fooling around with black-market genetic manipulation,” says Marvin Minsky, one of the founders of the Artificial Intelligence Lab at MIT, “maybe extending their lives by lengthening their own telomeres, the ends of chromosomes believed to control life span. Or they might, in fact, be growing new features in their brain.”
By the year 2020, science will understand the Creator’s software well enough to tell you a great deal about the genetic hand dealt you and those you love. Science may even help you decide if you should quit loving them. These days it’s not unheard of for one partner to investigate the other’s background or assets before marrying.
In the future you’ll need to access your betrothed’s genetic map, see what diseases he or she is likely to contract, assess the appearance and health of your children, and perhaps even size up your love’s mental health. Of course, this swings both ways. In this world, you will be forced to ask: Do I want to know if I’m earmarked for heart disease or breast cancer? Do I want my potential spouse to know? If I know this, and my doctor knows, does it mean that my insurance carrier must know? If this last one scares you, it should. It could mean the end of health care as you know it.
This is just the beginning. Once we know the future, we’re going to be tempted to rewrite the software. Clearly, it would be an act of kindness to reach into that fragile, permeable, four- or eight-cell being and rid it of the disease that cut short the life of its great-grandfather. But why wait for conception? Why not design your kid, toes up, out of whole cloth: the blue-eye gene, the blond-hair gene, the excel-at-lacrosse gene. Ban such tinkering, and citizens will merely scurry underground in order to conceive the perfect child.
If we can tear ourselves away from such selfish goals long enough to look around, we will have to face the fact that technology favors some and eclipses others. Bill Robinson, who spent 30 years as an electrical engineer with Canada’s Nortel Networks, has been thinking about this issue recently. “We spend our time and effort creating exciting new communications technologies,” he writes, “yet half the world does not have access to a telephone. We use the Internet to order the latest novel, yet many people in the world don’t have access to books. We are now discussing embedded processors to connect our refrigerators to bathroom scales and the grocery store, yet many children in the world go to bed hungry at night.”
This grisly reality will be harder to hide from when our planet swells to 8 billion people in 2020. For Lanier, the most heartbreaking scenario is festering in the third world, where, he believes, the current generation of children— lacking food, lacking skills, lacking aid, lacking education— will be lost in the next techno-revolution. “What is going to happen to all these people as they start to age, say, 20 years from now?” he wonders. “You’re going to have to somehow live while you watch a billion people starve, which is going to be a new human experience. How will we do that?”
Good question. And just one of many difficult questions waiting. How can I choose between two genetic scripts for a child I have yet to know? How much of myself should I reveal on the Web? How will I cope with all these machines when they break down, including the self-replicating nanopests that may be residing in my flesh? In our zeal to be happy little technologists, we’ll turn, much as we do today, to the Web for answers. And we’ll perfect the art of being disappointed.
If any medium ever resembled the human unconscious, the Web is it: a place of hidden wonders, stray inane thoughts, peaks of brilliance, valleys of perversity. And no apparent governor. Type your query, hit return, and voilà!— 10,000 hits. Good luck shaking them down.
Even in 2020 you will always need to know if the facts you’ve dredged up are accurate and truthful. With so many sources doling out information, you will need to know: What is he selling, and why is he selling it? Most unsettling is the fact that these precious touchstones are not permanent. They never will find their way to the library stacks. Instead we are moving closer to Orwell’s nightmare: the truth ceaselessly modified, altered, edited, or altogether obliterated. Here today, gone tomorrow, with nothing but a bewildering ERROR 404 FILE NOT FOUND left in its place.
By then, you will no longer be a child of the 21st century. If anything, you’ll be an elder, your mind and body augmented, your chromosomes refreshed, flexible computers woven into the four corners of your garments. On the one hand, your workload will multiply as you bat away each glitch resulting from the increased number of gadgets in your life. On the other, you will be forced to take on moral questions no human has ever faced. When will you find time to do that? How will you contemplate when everything is speeding up and time for reflection is practically nonexistent?
That’s you in 20 years. Like the machine that inspired your age, you will be constantly scanning, processing, sifting, searching for a code to guide you through. And yet the key, the compass, the answer, was once offered in a temple at Delphi. What will you need to know in 2020? Yourself.
— reporting by Glenn Garelik
Web Resources: For more information about key people and topics discussed in the article, see Jaron Lanier’s Web site at www.well.com/user/jaron, the National Human Genome Research Institute at www.nhgri.nih.gov, and the Whitehead Institute for Genomic Research at www-genome.wi.mit.edu.
Hackers use free tools in new APT campaign against industrial sector firms
Researchers have recently detected an advanced persistent threat (APT) campaign that targets critical infrastructure equipment manufacturers by using industry-sector-themed spear-phishing emails and a combination of free tools.
This tactic fits into the “living off the land” trend of cyber espionage actors reducing their reliance on custom and unique malware programs that could be attributed to them, in favor of dual-use tools that are publicly available.
TOP 12 AI ETHICS RESEARCH PAPERS INTRODUCED IN 2019
Posted by Mariya Yao | Dec 5, 2019
As the importance of ethical considerations in AI applications is being recognized not only by ethicists and researchers but also by industry tech leaders, AI ethics research is moving from general definitions of fairness and bias to more in-depth analysis. The research papers introduced in 2019 define comprehensive terminology for communicating about ML fairness, go from general AI principles to specific tensions that arise when implementing AI in practice, explain the reasons behind frustrating decisions made by AI algorithms, and more.
To give you an overview of the important work done in this research area last year, we have summarized 12 research papers covering different aspects of AI ethics.
If you’d like to skip around, here are the papers we featured:
- Controlling Polarization in Personalization: An Algorithmic Framework
- Learning Existing Social Conventions via Observationally Augmented Self-Play
- Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products
- The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions
- Problem Formulation and Fairness
- A Framework for Understanding Unintended Consequences of Machine Learning
- Fairwashing: the Risk of Rationalization
- What’s in a Name? Reducing Bias in Bios without Access to Protected Attributes
- Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But Do Not Remove Them
- Street–Level Algorithms: A Theory at the Gaps Between Policy and Decisions
- Average Individual Fairness: Algorithms, Generalization and Experiments
- Energy and Policy Considerations for Deep Learning in NLP
12 IMPORTANT AI ETHICS RESEARCH PAPERS OF 2019
1. CONTROLLING POLARIZATION IN PERSONALIZATION: AN ALGORITHMIC FRAMEWORK, BY L. ELISA CELIS, SAYASH KAPOOR, FARNOOD SALEHI, NISHEETH VISHNOI
ORIGINAL ABSTRACT
Personalization is pervasive in the online space as it leads to higher efficiency for the user and higher revenue for the platform by individualizing the most relevant content for each user. However, recent studies suggest that such personalization can learn and propagate systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms that are constrained to combat bias and the resulting echo-chamber effect. We propose a versatile framework that allows for the possibility to reduce polarization in personalized systems by allowing the user to constrain the distribution from which content is selected. We then present a scalable algorithm with provable guarantees that satisfies the given constraints on the types of the content that can be displayed to a user, but – subject to these constraints – will continue to learn and personalize the content in order to maximize utility. We illustrate this framework on a curated dataset of online news articles that are conservative or liberal, show that it can control polarization, and examine the trade-off between decreasing polarization and the resulting loss to revenue. We further exhibit the flexibility and scalability of our approach by framing the problem in terms of the more general diverse content selection problem and test it empirically on both a News dataset and the MovieLens dataset.
OUR SUMMARY
Social media feeds, advertising, and search results are increasingly personalized based on user preferences, which increases user engagement and platform revenue. However, because people’s existing biases and opinions are reinforced and they are rarely exposed to opposing views, personalization produces an “echo chamber” or “filter bubble” effect that deepens social fragmentation. The proposed algorithm tackles this problem by placing constraints on the content that can be sampled. The experiments confirm that this approach is flexible, scalable, and effective in controlling polarization.
WHAT’S THE CORE IDEA OF THIS PAPER?
- Content is often classified into groups based on various attributes. Existing algorithms use a multi-armed bandit model to select content and receive rewards (e.g. clicks or purchases).
- This can lead to over-specialization, where results are narrowed to a small subset of groups.
- The proposed Constrained-ε-Greedy algorithm constrains the probability distribution from which content is sampled at each time step in the sampling process. These constraints limit the total weight that can be given to any single group, which prevents this over-specialization.
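To make the constrained sampling step concrete, here is a minimal Python sketch of the idea: start from an ε-greedy distribution over content items and cap the total probability mass any single group can receive. This only illustrates the mechanism described above; it is not the authors’ Constrained-ε-Greedy implementation, and the group labels, cap parameter, and iterative rescaling are simplifications.

```python
import random
from collections import defaultdict

def constrained_selection(est_reward, group_of, group_cap, epsilon=0.1):
    """Pick one content item while capping the probability mass of any group.

    est_reward: dict item -> current reward estimate
    group_of:   dict item -> group label (e.g. "liberal" / "conservative")
    group_cap:  maximum total probability allowed for a single group
    """
    items = list(est_reward)
    # Start from an epsilon-greedy distribution over items.
    best = max(items, key=est_reward.get)
    probs = {i: epsilon / len(items) for i in items}
    probs[best] += 1.0 - epsilon

    # Cap each group's total mass and renormalize (toy projection step;
    # assumes the caps are jointly feasible).
    for _ in range(50):
        mass = defaultdict(float)
        for i, p in probs.items():
            mass[group_of[i]] += p
        over = {g: m for g, m in mass.items() if m > group_cap + 1e-9}
        if not over:
            break
        for i in items:
            g = group_of[i]
            if g in over:
                probs[i] *= group_cap / over[g]  # scale down over-weighted group
        total = sum(probs.values())
        probs = {i: p / total for i, p in probs.items()}

    # Sample one item from the constrained distribution.
    r, acc = random.random(), 0.0
    for i in items:
        acc += probs[i]
        if r <= acc:
            return i
    return items[-1]
```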
WHAT’S THE KEY ACHIEVEMENT?
- The researchers’ modifications of the bandit algorithm improve the regret bound (how far the algorithm’s rewards fall short of the theoretical optimum) relative to the state of the art.
- The algorithm is scalable and converges quickly to the theoretical optimum, even for the tightest constraints on the arm values selected.
- This optimum is within a factor of 2 of the unconstrained version.
WHAT DOES THE AI COMMUNITY THINK?
- The paper received the Best Paper award at ACM FAT 2019, one of the key conferences in AI ethics.
WHAT ARE FUTURE RESEARCH AREAS?
- Testing the algorithm in the field and measuring user satisfaction given diversified feeds.
- Applying the proposed approach to content which changes over time.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- This approach could be used to satisfy corporate social responsibility requirements, by reducing bias and unfairness and dampening the socially divisive echo-chamber effect.
2. LEARNING EXISTING SOCIAL CONVENTIONS VIA OBSERVATIONALLY AUGMENTED SELF-PLAY, BY ADAM LERER AND ALEXANDER PEYSAKHOVICH
ORIGINAL ABSTRACT
In order for artificial agents to coordinate effectively with people, they must act consistently with existing conventions (e.g. how to navigate in traffic, which language to speak, or how to coordinate with teammates). A group’s conventions can be viewed as a choice of equilibrium in a coordination game. We consider the problem of an agent learning a policy for a coordination game in a simulated environment and then using this policy when it enters an existing group. When there are multiple possible conventions we show that learning a policy via multi-agent reinforcement learning (MARL) is likely to find policies which achieve high payoffs at training time but fail to coordinate with the real group into which the agent enters. We assume access to a small number of samples of behavior from the true convention and show that we can augment the MARL objective to help it find policies consistent with the real group’s convention. In three environments from the literature – traffic, communication, and team coordination – we observe that augmenting MARL with a small amount of imitation learning greatly increases the probability that the strategy found by MARL fits well with the existing social convention. We show that this works even in an environment where standard training methods very rarely find the true convention of the agent’s partners.
OUR SUMMARY
The Facebook AI research team addresses the problem of AI agents acting in line with existing conventions. Learning a policy via multi-agent reinforcement learning (MARL) results in agents that achieve high payoffs at training time but fail to coordinate with the real group. The researchers suggest solving this problem by augmenting the MARL objective with a small sample of observed behavior from the group. The experiments in three test settings (traffic, communication, and team coordination) demonstrate that this approach greatly increased the probability of the agent finding a strategy that fits with the existing group’s conventions.
WHAT’S THE CORE IDEA OF THIS PAPER?
- Without any input from an existing group, a new agent will learn policies that work in isolation but do not necessarily fit with the group’s conventions.
- To solve this problem, the authors propose a novel observationally augmented self-play (OSP) method, where the agent is trained with a joint MARL and behavioral cloning objective. In particular, the researchers suggest providing the agent with a small number of observations of existing social behavior (i.e., samples of (state, action) pairs from the test environment).
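As an illustration of what such an augmented objective might look like, the sketch below adds a behavioral cloning term, computed on a small batch of observed (state, action) pairs, to a standard policy gradient loss in PyTorch. The function and argument names are hypothetical and the simple weighted sum is a simplification of the paper’s OSP objective.

```python
import torch
import torch.nn.functional as F

def osp_loss(policy, rollout_states, rollout_actions, rollout_returns,
             observed_states, observed_actions, imitation_weight=0.5):
    """Combined objective: self-play policy gradient plus behavioral cloning
    on a small batch of observed (state, action) pairs from the target group.

    policy: torch.nn.Module mapping a batch of states to action logits.
    """
    # REINFORCE-style self-play term (maximize return-weighted log-probability).
    logits = policy(rollout_states)
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, rollout_actions.unsqueeze(1)).squeeze(1)
    rl_loss = -(chosen * rollout_returns).mean()

    # Behavioral cloning term on observations of the existing convention.
    obs_logits = policy(observed_states)
    bc_loss = F.cross_entropy(obs_logits, observed_actions)

    return rl_loss + imitation_weight * bc_loss
```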
WHAT’S THE KEY ACHIEVEMENT?
- The experiments on several multi-agent situations with multiple conventions (a traffic game, a particle environment combining navigation and communication, and a Stag Hunt game) show that OSP can learn relevant conventions with a small amount of observational data.
- Moreover, with this method, the agent can learn conventions that are very unlikely to be learned using MARL alone.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was awarded the Best Paper Award at AAAI-AIES 2019, one of the leading conferences in the AI Ethics research area.
WHAT ARE FUTURE RESEARCH AREAS?
- Exploring alternative algorithms for constructing agents that can learn social conventions.
- Investigating the possibility of fine-tuning the OSP training strategies during test time.
- Considering problems where agents have incentives that are partly misaligned, and thus need to coordinate on a convention in addition to solving the social dilemma.
- Extending the work into more complex environments, including interaction with humans.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- This work is a stepping-stone towards developing AI agents that can teach themselves to cooperate with humans. This has positive implications for chatbots, customer support agents and many other AI applications.
3. ACTIONABLE AUDITING: INVESTIGATING THE IMPACT OF PUBLICLY NAMING BIASED PERFORMANCE RESULTS OF COMMERCIAL AI PRODUCTS, BY INIOLUWA DEBORAH RAJI AND JOY BUOLAMWINI
ORIGINAL ABSTRACT
Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits, as scholarship on the impact of algorithmic audits on increasing algorithmic fairness and transparency in commercial systems is nascent. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender and skin type performance disparities in commercial facial analysis models. This paper 1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, 2) presents new performance metrics from targeted companies IBM, Microsoft and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, 3) provides performance results on PPB by non-target companies Amazon and Kairos and, 4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new API versions. All targets reduced accuracy disparities between males and females and darker and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup, that underwent a 17.7% – 30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72% to 8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively.
OUR SUMMARY
In this paper, Raji and Buolamwini investigate how the publicly available performance evaluations for commercial AI products impact the performance of the respective machine learning systems in future releases. In particular, they review how the Gender Shades study (Buolamwini and Gebru, 2018) affected the performance of the targeted facial analysis systems (Face++, Microsoft, IBM) as well as systems not covered in the study (non-targeted systems: Amazon and Kairos). The researchers observed a significant reduction in overall error for the targeted systems, especially with regard to the darker-skinned female subgroup, which is the most challenging for existing face analysis systems. The results of this research demonstrate that, if prioritized, the disparities in performance between different subgroups can be significantly minimized in a reasonable amount of time.
WHAT’S THE CORE IDEA OF THIS PAPER?
- Evaluating updated API releases of three companies targeted in the Gender Shades study by strictly following the methodology of that study.
- Investigating differences in the pre-audit and post-audit performance of the systems (overall and across different subgroups).
- Evaluating the performance of two non-targeted face analysis systems to investigate if publicly available auditing results have an impact on similar AI systems not included in the audit.
- Exploring differences in company responses to the audit results.
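At its core, the audit methodology compares per-subgroup error rates before and after disclosure. A minimal sketch of that computation, with made-up record fields and numbers, might look like this:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Per-subgroup error rate from a list of audit records.
    Each record is a dict with (hypothetical) keys: 'subgroup', 'true', 'pred'."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subgroup"]] += 1
        errors[r["subgroup"]] += int(r["pred"] != r["true"])
    return {g: errors[g] / totals[g] for g in totals}

# Toy pre-/post-audit comparison for a single API (numbers are invented):
pre_audit = [
    {"subgroup": "darker_female", "true": "F", "pred": "M"},
    {"subgroup": "darker_female", "true": "F", "pred": "F"},
    {"subgroup": "lighter_male", "true": "M", "pred": "M"},
]
post_audit = [
    {"subgroup": "darker_female", "true": "F", "pred": "F"},
    {"subgroup": "darker_female", "true": "F", "pred": "F"},
    {"subgroup": "lighter_male", "true": "M", "pred": "M"},
]
pre, post = subgroup_error_rates(pre_audit), subgroup_error_rates(post_audit)
gap_reduction = {g: pre[g] - post[g] for g in pre}  # how much each error rate fell
```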
WHAT’S THE KEY ACHIEVEMENT?
- Demonstrating that, when reacting to publicly available performance evaluations, companies were able to significantly reduce the error rates of their models, especially for the most challenging intersectional subgroup of darker-skinned females.
- Revealing that API updates have been mostly data-driven, which implies that significant improvements have been achieved through training data diversification.
- Showing that the performance of the non-targeted companies is closer to the pre-audit performance of targeted companies than to their post-audit performance, which may imply that the systems not mentioned in the study have probably not been revised since the publication of the auditing results.
WHAT DOES THE AI COMMUNITY THINK?
- The paper received the Best Student Paper award at AAAI-AIES 2019, one of the leading conferences in the AI Ethics research area.
WHAT ARE FUTURE RESEARCH AREAS?
- Considering the confidence scores of the face analysis systems, to get a complete view of their real-world performance.
- Evaluating these systems on another balanced dataset or using metrics such as balanced error to account for class imbalances in existing benchmarks.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- This case study demonstrates how an external audit can be beneficial to the performance of commercial AI products: the targeted companies were able not only to significantly reduce the error gap between the best-performing and the worst-performing subgroups but also to improve the overall performance of the system with improvements observed across all subgroups.
4. THE ROLE AND LIMITS OF PRINCIPLES IN AI ETHICS: TOWARDS A FOCUS ON TENSIONS, BY JESS WHITTLESTONE, RUNE NYRUP, ANNA ALEXANDROVA, AND STEPHEN CAVE
ORIGINAL ABSTRACT
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognizing these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.
OUR SUMMARY
The research team from the University of Cambridge points out that AI ethics is currently based on principles that are quite broad and unspecific. It is recognized that AI should be applied to the common good, shouldn’t harm people, and should respect their privacy. But how do you implement this in practice? To answer this question, the researchers recommend focusing on tensions that arise while applying AI in the real world, and discuss how these tensions should be resolved in specific cases. The paper lists four key tensions and provides some general guidelines for resolving them.
WHAT’S THE CORE IDEA OF THIS PAPER?
- There is an agreement that AI technologies should follow some specific ethics principles, including AI not being used to harm people or undermine their rights, as well as AI technologies respecting such values as fairness, privacy, and autonomy.
- However, these principles have a number of significant limitations:
- They are too general.
- They often come into conflict in practice.
- Different groups may understand the principles differently.
- The authors suggest that focusing on tensions instead of principles brings several important advantages:
- bridging the gap between principles and practice;
- acknowledging differences in values;
- highlighting areas where new solutions are needed;
- identifying ambiguities and knowledge gaps.
- Next, the research team introduces four tensions that they find central to the current applications of AI:
- Using data for service improvement and efficiency vs. respecting the privacy and autonomy of individuals.
- Increasing the accuracy of decisions and predictions vs. ensuring fairness and equal treatment.
- Enjoying the benefits of personalization in the digital world vs. enhancing solidarity and citizenship.
- Making people’s lives more convenient with automation vs. promoting self-actualization and dignity.
WHAT’S THE KEY ACHIEVEMENT?
- Explaining why focusing on tensions is important for further development of the AI ethics area.
- Introducing four key tensions in applying AI.
- Providing general guidelines for resolving the tensions.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was presented at AAAI-AIES 2019, one of the leading conferences in the AI Ethics research area.
WHAT ARE FUTURE RESEARCH AREAS?
- Identifying further tensions.
- Exploring the ways to address the existing tensions between AI goals and values.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- The suggested approach may help in guiding the ethical application of AI systems in the real world.
5. PROBLEM FORMULATION AND FAIRNESS, BY SAMIR PASSI AND SOLON BAROCAS
ORIGINAL ABSTRACT
Formulating data science problems is an uncertain and difficult process. It requires various forms of discretionary work to translate high-level objectives or strategic goals into tractable problems, necessitating, among other things, the identification of appropriate target variables and proxies. While these choices are rarely self-evident, normative assessments of data science projects often take them for granted, even though different translations can raise profoundly different ethical concerns. Whether we consider a data science project fair often has as much to do with the formulation of the problem as any property of the resulting model. Building on six months of ethnographic fieldwork with a corporate data science team – and channeling ideas from sociology and history of science, critical data studies, and early writing on knowledge discovery in databases – we describe the complex set of actors and activities involved in problem formulation. Our research demonstrates that the specification and operationalization of the problem are always negotiated and elastic, and rarely worked out with explicit normative considerations in mind. In so doing, we show that careful accounts of everyday data science work can help us better understand how and why data science problems are posed in certain ways – and why specific formulations prevail in practice, even in the face of what might seem like normatively preferable alternatives. We conclude by discussing the implications of our findings, arguing that effective normative interventions will require attending to the practical work of problem formulation.
OUR SUMMARY
The researchers from Cornell University investigate the issue of problem formulation in data science and its implications for the fairness of data science projects. Specifically, they point out that translating a business objective into a problem formulated in terms of a target variable is a highly uncertain and challenging process. Problem formulation is driven by numerous factors, including the available data as well as financial and time constraints, while ethical considerations are rarely addressed. Thus, to ensure greater fairness in data science projects, it is important to investigate in depth the iterative work of problem formulation. The research team illustrates its claims with a case study from a multi-billion-dollar US-based e-commerce organization.
WHAT’S THE CORE IDEA OF THIS PAPER?
- Problem formulation in data science projects is a negotiated translation. Specifically, translation between high-level goals and tractable machine learning problems does not have a given outcome – it is elastic.
- Different problem formulations give rise to different ethical concerns.
- Translation of strategic goals into tractable problems is always imperfect as it always requires some assumptions about the world to be modeled. However, it is important to consider the consequences of different translations.
- An in-depth analysis of the problem formulation process may help us understand why data science problems are posed in certain ways even when more ethical alternatives seem to be available.
WHAT’S THE KEY ACHIEVEMENT?
- Demonstrating the elasticity of the problem formulation process and its importance for the fairness of data science projects.
- Illustrating the uncertainty and difficulty of the problem formulation process with a case study.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was presented at ACM FAT 2019, one of the key conferences in AI ethics.
WHAT ARE FUTURE RESEARCH AREAS?
- The authors of this paper suggest the following questions for investigation and intervention:
- Which goals are set and why?
- How are goals transformed into tractable problems?
- How and why do certain problem formulations succeed?
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- Following the findings of this paper, companies may avoid the implementation of data science projects with undesired consequences by discussing the ethical implications of their systems at the stage of problem formulation.
6. A FRAMEWORK FOR UNDERSTANDING UNINTENDED CONSEQUENCES OF MACHINE LEARNING, BY HARINI SURESH AND JOHN V. GUTTAG
ORIGINAL ABSTRACT
As machine learning increasingly affects people and society, it is important that we strive for a comprehensive and unified understanding of how and why unwanted consequences arise. For instance, downstream harms to particular groups are often blamed on “biased data,” but this concept encompasses too many issues to be useful in developing solutions. In this paper, we provide a framework that partitions sources of downstream harm in machine learning into five distinct categories spanning the data generation and machine learning pipeline. We describe how these issues arise, how they are relevant to particular applications, and how they motivate different solutions. In doing so, we aim to facilitate the development of solutions that stem from an understanding of application-specific populations and data generation processes, rather than relying on general claims about what may or may not be “fair.”
OUR SUMMARY
Machine learning applications often result in unwanted consequences that people commonly attribute to “biased data”. The MIT research team draws our attention to the fact that this concept encompasses lots of different issues. Moreover, the data is not the only source of unfair outcomes – the ML pipeline also includes some choices and practices that can lead to unwanted effects. Thus, the researchers introduce a framework that partitions sources of downstream harm into five distinct categories. This framework provides a comprehensive terminology for communicating about ML fairness and facilitates solutions that come from a clear understanding of the source problem instead of relying on general terms, like “fair” or “biased”.
WHAT’S THE CORE IDEA OF THIS PAPER?
- There are five sources of bias in machine learning:
- Historical bias arises when the world, as it is, is biased (e.g., male-dominated image search results for the word “CEO” simply reflect that 95% of Fortune 500 CEOs are men).
- Representation bias occurs when some groups of the population are underrepresented in the training dataset. For example, models trained on ImageNet, where 45% of images come from the US and only 1% of images represent China, perform poorly on images depicting Asia.
- Measurement bias arises when there are issues with choosing or measuring the particular features of interest. The issues may come from varying granularity or quality of data across groups or oversimplification of the classification task. For example, the success of a student is often measured by a GPA score, which ignores many important indicators of success.
- Aggregation bias occurs when a one-size-fits-all model is used for groups that have different conditional distributions. For example, studies suggest that HbA1c levels, which are used for diagnosing diabetes, differ in a complex way across ethnicities and genders. Thus, a single model is not likely to be the best fit for predicting diabetes for every group in the population.
- Evaluation bias arises when evaluation and/or benchmark datasets are not representative of the target population. Such datasets encourage the development of models that only perform well on a subset of data. For example, facial recognition benchmarks used to have a very small fraction of images with dark-skinned female faces, which resulted in commercial facial recognition systems performing very badly on this subset of the population.
- Solutions for mitigating a bias need to be tailored to the specific source of the bias. For example, in the case of representation bias, we need to add more samples from the underrepresented group, while aggregation bias might be addressed with multi-task learning.
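As a concrete, deliberately crude illustration of one mitigation mentioned above, the sketch below oversamples underrepresented groups until group counts are roughly balanced. The field name and strategy are illustrative, not a recipe from the paper.

```python
import random
from collections import Counter

def oversample_minority_groups(examples, group_key="group", seed=0):
    """Duplicate examples from underrepresented groups until every group is
    roughly as frequent as the largest one (a crude mitigation for
    representation bias; the 'group' field name is illustrative)."""
    rng = random.Random(seed)
    counts = Counter(ex[group_key] for ex in examples)
    target = max(counts.values())
    balanced = list(examples)
    for group, n in counts.items():
        pool = [ex for ex in examples if ex[group_key] == group]
        balanced.extend(rng.choice(pool) for _ in range(target - n))
    rng.shuffle(balanced)
    return balanced
```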
WHAT’S THE KEY ACHIEVEMENT?
- Providing a consolidated and comprehensive terminology for understanding and communicating about ML fairness.
- Facilitating solutions that arise from a clear understanding of the source of downstream harm.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- The introduced framework can serve as a guide for data scientists and ML engineers when designing fair ML systems.
7. FAIRWASHING: THE RISK OF RATIONALIZATION, BY ULRICH AÏVODJI, HIROMI ARAI, OLIVIER FORTINEAU, SÉBASTIEN GAMBS, SATOSHI HARA, ALAIN TAPP
ORIGINAL ABSTRACT
Black-box explanation is the problem of explaining how a machine learning model – whose internal logic is hidden to the auditor and generally complex – produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair black-box model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably less unfair at the same time.
OUR SUMMARY
Society requires AI systems to be ethically aligned, which implies fair decisions and explainable results. In this study, the researchers point out a possible pitfall behind this requirement: the risk of fairwashing, where malicious decision-makers give fake explanations for their unfair decisions. To demonstrate that this risk is real, the authors introduce LaundryML, an algorithm that systematically generates such fake explanations. The experiments confirm that the algorithm can generate explanations that look faithful and rationalize the unfair decisions of the black-box model.
WHAT’S THE CORE IDEA OF THIS PAPER?
- There is a risk of malicious entities promoting the false perception that a machine learning model respects some ethical principles, while in reality its results are heavily biased.
- To show that this risk is not imaginary, the authors introduce LaundryML, an algorithm that systematically generates fake explanations for an unfair black-box model:
- In the first step, the algorithm generates many explanations using a model enumeration technique.
- Next, one of these explanations is selected based on fairness metrics such as demographic parity (i.e., the algorithm picks the explanation that is the most faithful to the model with the demographic parity score within certain limits).
- The two versions of LaundryML introduced in the paper can rationalize both the model explanation and the outcome explanation.
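To make the rationalization step concrete, here is a simplified sketch of the selection logic: among candidate interpretable surrogates, keep those whose demographic parity gap is below a threshold and return the one most faithful to the black box. This is not the LaundryML enumeration algorithm itself; it only illustrates the trade-off between fidelity and apparent fairness that the attack exploits, and all names are hypothetical.

```python
def pick_fairwashed_explanation(candidates, X, sensitive, blackbox_pred,
                                parity_limit=0.05):
    """Choose, among candidate interpretable surrogates, the one most faithful
    to the black box while *appearing* fair under demographic parity.

    candidates:    list of callables mapping a feature dict to 0/1
    X:             list of feature dicts
    sensitive:     list of 0/1 group labels aligned with X
    blackbox_pred: list of the black-box model's 0/1 decisions on X
    """
    def demographic_parity_gap(preds):
        g0 = [p for p, s in zip(preds, sensitive) if s == 0]
        g1 = [p for p, s in zip(preds, sensitive) if s == 1]
        return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

    def fidelity(preds):
        return sum(p == b for p, b in zip(preds, blackbox_pred)) / len(preds)

    best, best_fid = None, -1.0
    for rule_list in candidates:
        preds = [rule_list(x) for x in X]
        if demographic_parity_gap(preds) <= parity_limit and fidelity(preds) > best_fid:
            best, best_fid = rule_list, fidelity(preds)
    return best
```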
WHAT’S THE KEY ACHIEVEMENT?
- Pointing out the risk of fairwashing in machine learning.
- Providing concrete evidence for this risk by introducing an algorithm that can create faithful and yet fake explanations that hide the real unfairness of the black-box model.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was presented at ICML 2019, one of the leading conferences in machine learning.
WHAT ARE FUTURE RESEARCH AREAS?
- Investigating the general social implications of fairwashing.
- Developing techniques that can detect fairwashing by estimating whether an explanation is likely to be a rationalization.
WHERE CAN YOU GET IMPLEMENTATION CODE?
- The implementation code of LaundryML is available on GitHub.
8. WHAT’S IN A NAME? REDUCING BIAS IN BIOS WITHOUT ACCESS TO PROTECTED ATTRIBUTES, BY ALEXEY ROMANOV, MARIA DE-ARTEAGA, HANNA WALLACH, JENNIFER CHAYES, CHRISTIAN BORGS, ALEXANDRA CHOULDECHOVA, SAHIN GEYIK, KRISHNARAM KENTHAPADI, ANNA RUMSHISKY, ADAM TAUMAN KALAI
ORIGINAL ABSTRACT
There is a growing body of work that proposes methods for mitigating bias in machine learning systems. These methods typically rely on access to protected attributes such as race, gender, or age. However, this raises two significant challenges: (1) protected attributes may not be available or it may not be legal to use them, and (2) it is often desirable to simultaneously consider multiple protected attributes, as well as their intersections. In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual’s true occupation and a word embedding of their name. This method leverages the societal biases that are encoded in word embeddings, eliminating the need for access to protected attributes. Crucially, it only requires access to individuals’ names at training time and not at deployment time. We evaluate two variations of our proposed method using a large-scale dataset of online biographies. We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier’s overall true positive rate.
OUR SUMMARY
The authors introduce a novel approach to mitigating bias in online recruiting and automated hiring without access to protected attributes such as gender, age, and race. In particular, they suggest leveraging only the person’s name and then discouraging an occupation classifier from learning a correlation between the predicted probability of an individual’s occupation and a word embedding of their name. The experiments confirm the effectiveness of the proposed approach in reducing race and gender bias.
WHAT’S THE CORE IDEA OF THIS PAPER?
- The traditional methods for mitigating bias in machine learning typically rely on access to protected attributes (e.g., race, age, gender). However, these attributes may not be available or may not be legal to use, even for mitigating the bias.
- To avoid reliance on protected attributes, the researchers suggest using only individuals’ names and leveraging the societal biases encoded in word embeddings.
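A hedged sketch of the decorrelation idea follows: the usual cross-entropy loss is augmented with a penalty on the covariance between the predicted probability of the true occupation and each coordinate of the name embedding. The paper’s actual loss differs (it proposes cluster-based and covariance-based variants); the names and weighting here are illustrative.

```python
import torch
import torch.nn.functional as F

def debiased_occupation_loss(logits, labels, name_embeddings, penalty_weight=1.0):
    """Cross-entropy plus a penalty on the covariance between the predicted
    probability of the true occupation and each coordinate of the name
    embedding (a sketch of the decorrelation idea, not the paper's exact loss).

    logits:          (batch, num_occupations)
    labels:          (batch,) true occupation ids
    name_embeddings: (batch, dim) pretrained word embeddings of first names
    """
    ce = F.cross_entropy(logits, labels)

    probs = F.softmax(logits, dim=-1)
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)      # (batch,)

    # Covariance between p_true and every embedding dimension.
    p_centered = p_true - p_true.mean()
    e_centered = name_embeddings - name_embeddings.mean(dim=0)
    cov = (e_centered * p_centered.unsqueeze(1)).mean(dim=0)      # (dim,)

    return ce + penalty_weight * cov.pow(2).sum()
```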
WHAT’S THE KEY ACHIEVEMENT?
- The proposed method is simple and powerful:
- it is applicable when protected attributes are not available;
- it eliminates the need to specify which biases are to be mitigated;
- it allows simultaneous mitigation of multiple biases, including those that relate to group intersections.
- The evaluation on several datasets demonstrates that this approach significantly reduces race and gender biases with almost no reduction in the classifier’s overall true positive rate.
WHAT DOES THE AI COMMUNITY THINK?
- The paper received the Best Thematic Paper award at NAACL-HLT 2019, one of the leading conferences in natural language processing.
WHAT ARE FUTURE RESEARCH AREAS?
- Experimenting with the proposed method in other languages (beyond English).
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- Even though the authors focus on mitigating biases in recruiting, the introduced approach can be applied in any domain where people’s names are available at training time.
9. LIPSTICK ON A PIG: DEBIASING METHODS COVER UP SYSTEMATIC GENDER BIASES IN WORD EMBEDDINGS BUT DO NOT REMOVE THEM, BY HILA GONEN AND YOAV GOLDBERG
ORIGINAL ABSTRACT
Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. This phenomenon is pervasive and consistent across different word embedding models, causing serious concern. Several recent works tackle this problem, and propose methods for significantly reducing this gender bias in word embeddings, demonstrating convincing results. However, we argue that this removal is superficial. While the bias is indeed substantially reduced according to the provided bias definition, the actual effect is mostly hiding the bias, not removing it. The gender bias information is still reflected in the distances between “gender-neutralized” words in the debiased embeddings, and can be recovered from them. We present a series of experiments to support this claim, for two debiasing methods. We conclude that existing bias removal techniques are insufficient, and should not be trusted for providing gender-neutral modeling.
OUR SUMMARY
It has been demonstrated many times that word embeddings used in NLP reflect the gender biases present in society. To address this problem, several research papers suggest reducing gender bias by zeroing the projection of all gender-neutral words onto a predefined gender direction. The authors of the current paper claim that such debiasing approaches only hide the bias but don’t remove it: even though word embeddings change in relation to the gender direction, they keep their previous similarities, and biased words remain grouped together. This claim is supported by a series of experiments.
WHAT’S THE CORE IDEA OF THIS PAPER?
- Word embeddings reflect gender biases present in society.
- Existing debiasing methods rely on the same bias definition, where the gender bias of a particular word is defined by a projection of this word onto the “gender direction”. Thus, according to this definition, if a certain word embedding is equally close to male and female gender-specific words, it is not biased.
- The idea of this paper is that bias is much more profound and systematic and cannot be removed by simply zeroing the projection of a word embedding onto the “gender direction”.
- The authors demonstrate that even after debiasing word embeddings based on the above definition, most words that had a specific bias before debiasing are still grouped together, implying that the spatial geometry of word embeddings stays largely the same.
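For concreteness, the sketch below shows the kind of debiasing operation under discussion (zeroing the projection onto a gender direction) together with a nearest-neighbor check one could run to see whether previously biased words still cluster together. Variable names are illustrative, and this is not the authors’ experimental code.

```python
import numpy as np

def remove_gender_direction(vectors, gender_direction):
    """'Debias' embeddings by zeroing their projection on the gender direction
    (the bias definition the paper argues is too narrow)."""
    d = gender_direction / np.linalg.norm(gender_direction)
    return vectors - np.outer(vectors @ d, d)

def nearest_neighbors(vectors, words, query, k=5):
    """Check whether a previously gender-biased word still sits next to other
    biased words after debiasing, as the paper's experiments show it does."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    q = V[words.index(query)]
    sims = V @ q
    order = np.argsort(-sims)
    return [words[i] for i in order[1:k + 1]]  # skip the query word itself
```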
WHAT’S THE KEY ACHIEVEMENT?
- Demonstrating that popular debiasing methods don’t remove the gender bias from word embeddings:
- male- and female-biased words still cluster together;
- it is easy to predict the implicit gender of words based on their vectors alone.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was presented at NAACL-HLT 2019, one of the leading conferences in natural language processing.
WHAT ARE FUTURE RESEARCH AREAS?
- Further exploring debiasing methods that would entirely eliminate the gender bias from word embeddings.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- Teams that deploy potentially gender-biased AI systems can use the experiments from this paper to make sure that their systems are free from this bias.
WHERE CAN YOU GET IMPLEMENTATION CODE?
- The code for the experiments described in the paper is available on GitHub.
10. STREET–LEVEL ALGORITHMS: A THEORY AT THE GAPS BETWEEN POLICY AND DECISIONS, BY ALI ALKHATIB AND MICHAEL BERNSTEIN
ORIGINAL ABSTRACT
Errors and biases are earning algorithms increasingly malignant reputations in society. A central challenge is that algorithms must bridge the gap between high-level policy and on-the-ground decisions, making inferences in novel situations where the policy or training data do not readily apply. In this paper, we draw on the theory of street-level bureaucracies, how human bureaucrats such as police and judges interpret policy to make on-the-ground decisions. We present by analogy a theory of street-level algorithms, the algorithms that bridge the gaps between policy and decisions about people in a socio-technical system. We argue that unlike street-level bureaucrats, who reflexively refine their decision criteria as they reason through a novel situation, street-level algorithms at best refine their criteria only after the decision is made. This loop-and-a-half delay results in illogical decisions when handling new or extenuating circumstances. This theory suggests designs for street-level algorithms that draw on historical design patterns for street-level bureaucracies, including mechanisms for self–policing and recourse in the case of error.
OUR SUMMARY
Compared to humans, algorithmic systems seem to be more prone to errors that are very frustrating for the people affected. To understand why this might be the case, it is necessary to realize that policies are usually implemented by street-level bureaucrats like police officers and judges, who make important decisions by interpreting a given policy for both familiar and new situations. Similarly, algorithms that directly interact with and make decisions about people can be referred to as street-level algorithms. The Stanford research team claims that street-level algorithms make frustrating decisions more often than street-level bureaucrats because humans, when encountering a new or marginal case, can refine their decision boundaries before making the decision, while algorithms can refine these boundaries only after the decision is made and the system has received feedback or additional training data.
WHAT’S THE CORE IDEA OF THIS PAPER?
- Algorithms that directly interact with people and make decisions are more error-prone than humans in the same position.
- The reason is that:
- Street-level bureaucrats, like judges, teachers, or police officers, can reflexively refine their decision criteria when facing a novel or marginal case.
- Street-level algorithms are not that flexible and at best can refine decision criteria only after the decision is made and they have received some feedback or new training data.
- Thus, the designs of street-level algorithms should consider this problem and include the mechanisms for self-policing and recourse in case of a wrong decision.
WHAT’S THE KEY ACHIEVEMENT?
- Introducing a valid explanation for algorithmic systems making frustrating decisions more often than humans.
- Suggesting design implications that can address the issue of algorithms being inadaptive to novel cases. These include:
- providing the user with information that can help understand whether a system has made a mistake (e.g. if YouTube denies monetization for a certain video, it can show the user other videos that the system finds similar to the denied one – this can help the user to understand if the video was misclassified);
- self-policing (e.g., building oversight into algorithms);
- recourse and appeals in case of errors.
WHAT DOES THE AI COMMUNITY THINK?
- The paper received the Best Paper Award at CHI 2019, the premier conference in human-computer interaction.
WHAT ARE FUTURE RESEARCH AREAS?
- Progressing towards systems that can better consider the needs of stakeholders.
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- Designing machine learning systems that include oversight components and allow for appeals and recourse in case of wrong decisions.
11. AVERAGE INDIVIDUAL FAIRNESS: ALGORITHMS, GENERALIZATION AND EXPERIMENTS, BY MICHAEL KEARNS, AARON ROTH, AND SAEED SHARIFI-MALVAJERDI
ORIGINAL ABSTRACT
We propose a new family of fairness definitions for classification problems that combine some of the best properties of both statistical and individual notions of fairness. We posit not only a distribution over individuals, but also a distribution over (or collection of) classification tasks. We then ask that standard statistics (such as error or false positive/negative rates) be (approximately) equalized across individuals, where the rate is defined as an expectation over the classification tasks. Because we are no longer averaging over coarse groups (such as race or gender), this is a semantically meaningful individual-level constraint. Given a sample of individuals and classification problems, we design an oracle-efficient algorithm (i.e. one that is given access to any standard, fairness-free learning heuristic) for the fair empirical risk minimization task. We also show that given sufficiently many samples, the ERM solution generalizes in two directions: both to new individuals, and to new classification tasks, drawn from their corresponding distributions. Finally we implement our algorithm and empirically verify its effectiveness.
OUR SUMMARY
The researchers from the University of Pennsylvania suggest combining statistical and individual notions of fairness to generate a new family of fairness definitions for classification problems. First, they assume that each individual is subject to decisions made by many classification systems. Then, they require that error rates (or false positive/false negative rates) be approximately equalized across all individuals, where each individual’s rate is an expectation over the classification tasks. Finally, to satisfy this guarantee, they derive a new oracle-efficient algorithm for learning Average Individual Fairness, called AIF-Learn. The algorithm solves the fair empirical risk minimization task, with the solution generalizing to both new individuals and new classification tasks. The empirical evaluation verifies the effectiveness of the introduced algorithm.
WHAT’S THE CORE IDEA OF THIS PAPER?
- The authors show that existing fairness definitions can be divided into two groups:
- Statistical fairness definitions that can be easily enforced on arbitrary data distributions but do not have strong semantics.
- Individual fairness definitions that have very strong individual-level semantics but require strong realizability assumptions.
- The paper introduces an alternative definition of individual fairness that does not require assumptions to be imposed on the data generating process:
- In the suggested setting, individuals are subject to many classification tasks over a given period of time (e.g., users are exposed to multiple targeted ads when using a particular platform).
- This setting is modeled by assuming that in addition to the unknown distribution over individuals, there is also an unknown distribution over classification problems.
- The model is aimed at ensuring that the error rates or false positive/negative rates are equalized across all individuals.
- This fairness definition is implemented with an oracle-efficient algorithm, called AIF-Learn.
- The algorithm assumes the existence of “oracles”, implemented with a heuristic that can solve weighted classification problems in the absence of fairness constraints.
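The quantity being constrained is easy to state in code. The sketch below is not the AIF-Learn algorithm itself; it only computes each individual’s error rate averaged over tasks and the largest deviation from the mean, i.e., the fairness violation the algorithm drives toward zero, under assumed array shapes.

```python
import numpy as np

def individual_error_rates(predictions, labels):
    """predictions, labels: 0/1 arrays of shape (num_tasks, num_individuals).
    Each individual's rate is an average over classification tasks, which is
    the quantity the fairness constraint equalizes."""
    return (predictions != labels).mean(axis=0)      # (num_individuals,)

def aif_violation(predictions, labels):
    """Largest gap between any individual's task-averaged error and the mean
    error, i.e. how far a solution is from average individual fairness."""
    rates = individual_error_rates(predictions, labels)
    return np.abs(rates - rates.mean()).max()
```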
WHAT’S THE KEY ACHIEVEMENT?
- The guarantees of the AIF-Learn algorithm hold both in-sample and also out of sample, implying its generalizability to new individuals and classification tasks.
- The empirical evaluation of the AIF-Learn algorithm demonstrates that it:
- shows strong convergence properties suggested by theory;
- outperforms the random predictions in terms of both average errors and individual errors.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was accepted for oral presentation at NeurIPS 2019, the leading conference in artificial intelligence.
WHAT ARE THE POSSIBLE BUSINESS APPLICATIONS?
- The introduced approach can improve the fairness of AI classification systems across industries and applications.
12. ENERGY AND POLICY CONSIDERATIONS FOR DEEP LEARNING IN NLP, BY EMMA STRUBELL, ANANYA GANESH, ANDREW MCCALLUM
ORIGINAL ABSTRACT
Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
OUR SUMMARY
In this paper, the researchers from the University of Massachusetts Amherst draw the attention of the research community to the huge amount of energy consumed when training large neural networks. The authors focus specifically on the latest NLP models and estimate CO2 emissions from training such models as well as the corresponding cloud computing costs. For instance, training one model on a GPU, with tuning and experimentation, results in CO2 emissions comparable to the two-year carbon footprint of an average American. Furthermore, the researchers use the case study of developing a state-of-the-art NLP model to show that the relevant cloud computing costs may amount to $103K–$350K, sums that are rarely available to academic researchers.
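For a rough sense of how such estimates are built, here is a back-of-the-envelope sketch along the lines the paper describes (power draw × training time × datacenter overhead × grid carbon intensity). The GPU count, hours, and rates below are placeholder assumptions for illustration, not the paper's measurements.

```python
# Back-of-the-envelope training-cost estimate (placeholder numbers, not the
# figures from Strubell et al.): energy = power draw x hours x PUE,
# emissions = energy x grid carbon intensity.
gpu_count = 8
avg_power_per_gpu_watts = 250     # assumed average draw per GPU
training_hours = 24 * 7           # one week of training (assumed)
pue = 1.58                        # assumed datacenter power usage effectiveness
co2_per_kwh_lbs = 0.954           # assumed grid carbon intensity (lbs CO2e per kWh)
price_per_kwh_usd = 0.12          # assumed electricity price

energy_kwh = gpu_count * avg_power_per_gpu_watts * training_hours * pue / 1000
co2_lbs = energy_kwh * co2_per_kwh_lbs
electricity_cost = energy_kwh * price_per_kwh_usd

print(f"{energy_kwh:.0f} kWh, ~{co2_lbs:.0f} lbs CO2e, ~${electricity_cost:.0f} in electricity")
```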
WHAT’S THE CORE IDEA OF THIS PAPER?
- Modern language models achieve considerable gains in accuracy across NLP tasks, but this comes at the cost of huge computational demands and energy consumption.
- The energy used for powering GPUs or TPUs during weeks or months of model training results in considerable carbon emissions.
- The huge cloud computing costs required for training state-of-the-art models are often unattainable for academic researchers.
- To overcome these challenges, the authors suggest:
- Reporting training time and sensitivity to hyperparameters in research papers to allow subsequent consumers to assess whether they have access to the required computational resources.
- Providing academic researchers with equitable access to computation resources by investing in shared computing centers.
- Prioritizing computationally efficient hardware and algorithms.
WHAT’S THE KEY ACHIEVEMENT?
- Drawing attention to the substantial amount of energy consumption associated with training the latest NLP models.
- Suggesting actionable recommendations for reducing the financial costs and environmental impact of training machine learning models.
WHAT DOES THE AI COMMUNITY THINK?
- The paper was presented at ACL 2019, one of the leading conferences in natural language processing.
WHAT ARE FUTURE RESEARCH AREAS?
- Exploring ways to reduce energy consumption when developing state-of-the-art models (e.g., by using a Bayesian search instead of grid search).
WHAT ARE POSSIBLE BUSINESS APPLICATIONS?
- Prioritizing energy-efficient hardware and algorithms when developing AI systems.
We want to give special thanks to Rachel Thomas, director at USF Center for Applied Data Ethics, and Timnit Gebru, research scientist at Google AI, for generously offering their expertise in curating the most important AI ethics research presented in 2019.
IT may have an age problem
Despite record-high job openings and difficulty recruiting talent, information technology is taking a pass on older workers. The sector notoriously skews young, with employees aged 22 to 44 comprising 61% of IT compared to 49% of the overall U.S. workforce. Employers say they’re hesitant to onboard older workers due to skill discrepancies and costs, says The Wall Street Journal, but with 80% of employers also citing recruiting tech talent as one of their biggest business challenges, companies may be finding it harder to overlook such candidates.
“Not sure why this is happening. As an older IT talent, I usually stay informed and use new technology, which was typically developed from other (older) technologies that I have experience with. Hence, if I were hiring, I would definitely weigh the strengths of both boomers and millennials and have both at my side (I am Gen X).”
Soul Dog
Which Breed of Dog Guards Your Soul?
You Got:
German Shepherd
Your guardian needs to be a dog that is smart, attentive, easily trained, and not too high-maintenance. Luckily for you, the German shepherd guards your soul. With their high level of intelligence, you’ll never make a bad decision. You will always be guided to make decisions that benefit everyone around you.
Keyword Density Checker Tool
I used this tool on my webpage, https://curtloong.com, and got the following:
“Content matters, not only from an end-user perspective, but also from a search engine perspective. The words used on a webpage, including what type (keywords or stop words), how they are used (alone or within phrases) and where they are used (link text or non-link body text), can all influence the value of the page in search. Keyword density is the percentage of occurrences of your keywords relative to the rest of the text on your webpage. It is important for your main keywords to have the correct keyword density to rank well in search engines. This Keyword Density Checker Tool helps webmasters analyse the keyword density of their webpages by displaying the most important keywords from the site. This is a very simple tool to use. Enter the webpage URL, press the ‘Check’ button, and the keyword density check will be done automatically.” (A short sketch of the arithmetic behind this follows the table below.)
Keyword | Occurrence | Density |
---|---|---|
years | 8 | 0.9% |
short | 8 | 0.9% |
public | 7 | 0.8% |
experience | 7 | 0.8% |
environmental | 6 | 0.7% |
policy | 6 | 0.7% |
firms | 6 | 0.7% |
website | 5 | 0.6% |
management | 5 | 0.6% |
development | 5 | 0.6% |
market | 5 | 0.6% |
meaning | 4 | 0.5% |
taller | 4 | 0.5% |
degree | 4 | 0.5% |
search | 4 | 0.5% |
learning | 4 | 0.5% |
administration | 4 | 0.5% |
design | 4 | 0.5% |
marketing | 3 | 0.3% |
enjoy | 3 | 0.3% |
service | 3 | 0.3% |
affairs | 3 | 0.3% |
privacy | 3 | 0.3% |
online | 3 | 0.3% |
including | 3 | 0.3% |
contact | 3 | 0.3% |
tampa | 2 | 0.2% |
nineteen | 2 | 0.2% |
shortened | 2 | 0.2% |
discover | 2 | 0.2% |
login | 2 | 0.2% |
college | 2 | 0.2% |
curtlong | 2 | 0.2% |
applications | 2 | 0.2% |
servers | 2 | 0.2% |
languages | 2 | 0.2% |
working | 2 | 0.2% |
necessary | 2 | 0.2% |
business | 2 | 0.2% |
comment | 2 | 0.2% |
cookies | 2 | 0.2% |
large | 2 | 0.2% |
dictionary | 2 | 0.2% |
interesting | 2 | 0.2% |
means | 2 | 0.2% |
shorttallernus | 2 | 0.2% |
software | 2 | 0.2% |
websites | 2 | 0.2% |
origin | 2 | 0.2% |
seventeenth | 2 | 0.2% |
digital | 2 | 0.2% |
today | 2 | 0.2% |
different | 2 | 0.2% |
makes | 2 | 0.2% |
gaelic | 2 | 0.2% |
program | 2 | 0.2% |
engineering | 2 | 0.2% |
federal | 2 | 0.2% |
learned | 2 | 0.2% |
usual | 2 | 0.2% |
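For what it's worth, the arithmetic behind a table like the one above is simple. Here is a quick sketch (my own illustration, not the tool's actual code): count each word's occurrences and divide by the total word count. The min_len filter is just a crude stand-in for stop-word removal.

```python
import re
from collections import Counter

def keyword_density(text, top_n=10, min_len=4):
    """Occurrences and density (% of total words) for the most frequent words."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if len(w) >= min_len)  # skip very short words
    total = len(words)
    return [(w, n, round(100 * n / total, 1)) for w, n in counts.most_common(top_n)]

sample = "web design matters because good web design keeps visitors on the page"
for word, occurrences, density in keyword_density(sample, top_n=5):
    print(f"{word} | {occurrences} | {density}%")
```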
About Webpage Speed Test Tool
Speed is a crucial part of running a successful website and should always be a priority for site managers. In addition to providing a lag-free and responsive user experience, a fast-loading site has a direct impact on how the site performs overall. Faster-loading websites benefit from better user engagement, higher conversion rates, better SEO rankings and much more. A slow website, or even a delay when loading a page, can cause you to lose visitors and therefore potential customers. Improving website speed is crucial, but it can be a daunting process that involves many moving parts, from on-site optimization to network and accessibility configuration. Testing and optimizing page speed is important. Make use of this Webpage Speed Test Tool to analyze the performance of your website.
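The tool goes much further than this, but as a crude first-order check you can time a page fetch yourself. Here is a minimal sketch using Python's requests library; the URL is only an example, and this measures the HTML download alone, not full rendering.

```python
import requests

def time_page_fetch(url):
    """Very rough page-speed check: time to download the HTML only
    (no images, scripts, or rendering, so real load times will be higher)."""
    response = requests.get(url, timeout=30)
    return response.elapsed.total_seconds(), len(response.content)

seconds, size_bytes = time_page_fetch("https://example.com")
print(f"fetched {size_bytes} bytes in {seconds:.2f} s")
```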
Long Tail Keywords for Web Design
I found a tool that generates long-tail keywords from a seed term. I entered “web design” and got thousands of long-tail keywords. I cropped the list down, and the following is what I got:
affordable web design tampa |
affordable web design tampa fl |
best tampa web design company |
best web design companies tampa |
responsive web design xd |
responsive web design zoom |
self employed web designer salary |
self employed web designer salary uk |
small business web design tampa |
tampa bay web design firm |
tampa pc web design |
tampa web design services |
tampa web page design company |
web design & development salary |
web design 1920 x 1080 |
web design accessibility |
web design adobe |
web design agency |
web design agency near me |
web design agreement |
web design analyst salary |
web design and development |
web design and development companies |
web design and development degree |
web design and development jobs |
web design and development salary |
web design and hosting |
web design and marketing |
web design app |
web design applications |
web design art |
web design articles |
web design articles 2019 |
web design assistant salary |
web design atlanta |
web design average salary |
web design awards |
web design bachelor degree |
web design bachelor degree salary |
web design background |
web design basics |
web design beginner |
web design benefits |
web design benefits salary |
web design best practices |
web design best practices 2019 |
web design best practices checklist |
web design blog |
web design bls |
web design books |
web design bootcamp |
web design brass paparazzi |
web design breadcrumbs |
web design business |
web design business card |
web design business for sale |
web design business names |
web design business plan |
web design buttons |
web design career |
web design career salary |
web design case study |
web design certificate |
web design certificate online |
web design classes |
web design classes free |
web design classes tampa |
web design client questionnaire |
web design coding |
web design colleges |
web design color schemes |
web design companies in tampa fl |
web design company |
web design company names |
web design company near me |
web design company tampa |
web design concepts |
web design conferences 2019 |
web design consultant salary |
web design contract |
web design contract pdf |
web design cost |
web design course salary |
web design courses |
web design courses online |
web design dallas tx |
web design definition |
web design degree |
web design degree florida |
web design degree near me |
web design degree online |
web design degree programs |
web design degree salary |
web design deliverables |
web design depot |
web design description |
web design developer salary |
web design development |
web design dictionary |
web design dimensions |
web design dimensions 2019 |
web design discord |
web design diy |
web design document |
web design dreamweaver |
web design easy |
web design ecommerce |
web design editor |
web design education |
web design elements |
web design em |
web design email pitch |
web design email template |
web design engineer |
web design engineer salary |
web design entry level |
web design entry level salary |
web design entry salary |
web design essentials |
web design estimate |
web design estimate salary |
web design estimate template |
web design events |
web design examples |
web design examples 2019 |
web design exercises |
web design experience |
web design eyebrow |
web design facts |
web design fees |
web design final exam |
web design firms |
web design firms near me |
web design flyer |
web design footer |
web design for beginners |
web design for business |
web design for developers |
web design for dummies |
web design for kids |
web design for small businesses |
web design forms |
web design framework |
web design free |
web design free courses |
web design freebies |
web design freelance |
web design freelance rates |
web design gallery |
web design games |
web design generator |
web design georgia |
web design gif |
web design gigs |
web design glossary |
web design goals |
web design godaddy |
web design golden ratio |
web design google |
web design gradients |
web design graduate programs |
web design graphics |
web design grid |
web design grid layout |
web design grid system |
web design group |
web design guide |
web design gutter |
web design hamburger |
web design hamburger menu |
web design hashtags |
web design header |
web design help |
web design helper |
web design hero |
web design hero image |
web design hierarchy |
web design high school |
web design high school course |
web design history |
web design homepage |
web design hourly rate |
web design hourly rate 2019 |
web design how much to charge |
web design how to |
web design html |
web design html and css |
web design html template |
web design icon |
web design ide |
web design ideas |
web design ideas 2019 |
web design illustration |
web design images |
web design in 4 minutes |
web design in xcode |
web design industry |
web design infographic |
web design information |
web design inspiration |
web design inspiration 2019 |
web design inspiration sites |
web design intake form |
web design internships |
web design interview questions |
web design introductory 5th edition answers |
web design invoice |
web design ipad pro |
web design iphone x |
web design is my passion |
web design jargon |
web design javascript |
web design job boards |
web design job descriptions |
web design job market |
web design job openings |
web design job outlook |
web design job titles |
web design jobs |
web design jobs chicago |
web design jobs entry level |
web design jobs from home |
web design jobs in tampa |
web design jobs in tampa florida |
web design jobs los angeles |
web design jobs near me |
web design jobs nyc |
web design jobs online |
web design jobs remote |
web design jobs salary |
web design jobs tampa |
web design jobs tampa bay |
web design jobs tampa fl |
web design jokes |
web design kahoot |
web design kalamazoo |
web design kalispell |
web design kansas city mo |
web design karachi |
web design keene nh |
web design kent |
web design kenya |
web design kerala |
web design key terms |
web design keywords |
web design keywords list |
web design khan academy |
web design killeen tx |
web design kingston ny |
web design kissimmee |
web design kit |
web design knowledge test |
web design knoxville |
web design kya hai |
web design lakeland fl |
web design landing pages |
web design languages |
web design languages 2019 |
web design laptop |
web design layout |
web design layout inspiration |
web design layout tools |
web design lead generation |
web design learn |
web design ledger |
web design lesson plans |
web design lessons |
web design library |
web design lincoln ne |
web design lists |
web design llc |
web design logo |
web design logo ideas |
web design los angeles |
web design mac os x |
web design mac os x software |
web design magazines |
web design major |
web design manager |
web design marketing |
web design masters |
web design meaning |
web design medium |
web design meme |
web design menu |
web design methodology |
web design minimalist |
web design minneapolis |
web design mission statement |
web design mistakes to avoid |
web design mockup |
web design mockup free |
web design mockup tool |
web design modal |
web design mood board |
web design museum |
web design naics |
web design names |
web design naming conventions |
web design nashville |
web design navigation |
web design navigation bar |
web design navigation inspiration |
web design nc |
web design near me |
web design nearby |
web design needed |
web design networking |
web design new zealand |
web design news |
web design newsletter |
web design niches |
web design ninja |
web design no coding |
web design northern virginia |
web design notebook |
web design ny |
web design ohio |
web design omaha |
web design on ipad |
web design on ipad pro |
web design on mac |
web design on wordpress |
web design online |
web design online certificate |
web design online class |
web design online course |
web design online degree |
web design online school |
web design options |
web design or graphic design |
web design or web development |
web design orlando |
web design os x |
web design outline |
web design outlook |
web design outsource |
web design overview |
web design packages |
web design patterns |
web design pay |
web design pdf |
web design photoshop |
web design platforms |
web design playground |
web design podcast |
web design portfolio |
web design practice |
web design pricing |
web design pricing guide |
web design principles |
web design process |
web design programs |
web design project ideas |
web design project management |
web design projects |
web design proposal |
web design proposal template |
web design qa |
web design qa checklist |
web design qualifications |
web design question |
web design questionnaire |
web design questionnaire form |
web design questionnaire pdf |
web design questionnaire template |
web design questions and answers |
web design questions for clients |
web design quick links |
web design quiz |
web design quiz questions |
web design quizlet |
web design quotation |
web design quote example |
web design quote form |
web design quote generator |
web design quote sample |
web design quotes |
web design raleigh |
web design rankings |
web design rate sheet |
web design rates |
web design reddit |
web design remote |
web design remote jobs |
web design requirements |
web design research |
web design reseller |
web design resources |
web design responsive |
web design resume |
web design resume samples |
web design resume template |
web design retainer |
web design reviews |
web design roles |
web design rubric |
web design rules |
web design salary |
web design salary 2019 |
web design salary college |
web design sarasota |
web design school |
web design school online |
web design school tampa |
web design seattle |
web design services |
web design services near me |
web design sites |
web design skills |
web design software |
web design software free |
web design software os x |
web design software x5 |
web design st petersburg fl |
web design standards |
web design standards 2019 |
web design stara zagora |
web design statistics |
web design studio |
web design style guide |
web design styles |
web design systems |
web design tabs |
web design tampa |
web design tampa bay |
web design tampa fl |
web design tampa florida |
web design techniques |
web design templates |
web design terms |
web design testimonials |
web design textbook |
web design theory |
web design timeline |
web design tips |
web design tools |
web design tools free |
web design topics |
web design trade school |
web design training |
web design trends |
web design trends 2019 |
web design trends 2020 |
web design tutorial |
web design ucf |
web design udacity |
web design uf |
web design ui |
web design ui best practices |
web design ui kit |
web design ui ux |
web design unit 4 lesson 4 |
web design unit 4 lesson 5 |
web design university |
web design upwork |
web design usability |
web design user experience |
web design using html |
web design using python |
web design using sketch |
web design using wix |
web design using wordpress |
web design using xcode |
web design using xd |
web design ux |
web design ux best practices |
web design va |
web design vancouver |
web design vectors |
web design video |
web design video background |
web design villa park |
web design virtual assistant |
web design visalia |
web design vocab |
web design vocabulary |
web design vocational school |
web design volunteer |
web design vs app design |
web design vs coding |
web design vs graphic design |
web design vs ui design |
web design vs ux design |
web design vs web development |
web design vs web development reddit |
web design vs web programming |
web design wallpaper |
web design web development |
web design website template |
web design websites |
web design weekly |
web design white space |
web design wikipedia |
web design wireframe |
web design wireframe tool |
web design with html css javascript and jquery |
web design with python |
web design without coding |
web design without degree |
web design wix |
web design wordpress |
web design wordpress theme |
web design words |
web design work |
web design workflow |
web design workshop |
web design x5 |
web design xampp |
web design xara |
web design xd |
web design xhtml |
web design xml |
web design xml tutorial |
web design yahoo |
web design yakima |
web design yardley pa |
web design yearly salary |
web design yellow |
web design yelp |
web design yeovil |
web design yerevan |
web design yoga |
web design yogyakarta |
web design york |
web design york pa |
web design yorke peninsula |
web design you |
web design your way |
web design yourself |
web design youtube channels |
web design youtube tutorial |
web design yuma az |
web design z index |
web design z layout |
web design zagreb |
web design základy |
web design zambia |
web design zanzibar |
web design zaplata |
web design zen |
web design zeplin |
web design zertifikat |
web design zimbabwe |
web design zirakpur |
web design znacenje |
web design zoom |
web design zoom effect |
web design zug |
web design zurich |
web designer 3 years experience salary |
web designer 5 years experience salary |
web designer annual salary in india |
web designer average salary canada |
web designer average salary in india |
web designer average salary uk |
web designer basic salary |
web designer beginning salary |
web designer career salary in india |
web designer certificate salary |
web designer developer salary dubai |
web designer employee salary in india |
web designer front end developer salary |
web designer job description |
web designer near me |
web designer salary |
web designer salary 2019 |
web designer salary ahmedabad |
web designer salary alberta |
web designer salary amsterdam |
web designer salary associate degree |
web designer salary atlanta |
web designer salary austin tx |
web designer salary australia |
web designer salary average |
web designer salary bachelor’s degree |
web designer salary baltimore |
web designer salary bangalore |
web designer salary bangkok |
web designer salary bay area |
web designer salary bc |
web designer salary belfast |
web designer salary belgium |
web designer salary berlin |
web designer salary berlin germany |
web designer salary bls |
web designer salary boston |
web designer salary by state |
web designer salary calgary |
web designer salary california |
web designer salary canada |
web designer salary cape town |
web designer salary cebu |
web designer salary charlotte nc |
web designer salary chicago |
web designer salary colorado |
web designer salary colorado springs |
web designer salary columbus ohio |
web designer salary dallas |
web designer salary dc |
web designer salary denver |
web designer salary details |
web designer salary dubai |
web designer salary dublin |
web designer salary entry level |
web designer salary europe |
web designer salary experience |
web designer salary florida |
web designer salary freelance |
web designer salary in abroad |
web designer salary in america |
web designer salary in bahrain |
web designer salary in bangladesh |
web designer salary in bhopal |
web designer salary in bhubaneswar |
web designer salary in canada |
web designer salary in canada per month |
web designer salary in chandigarh |
web designer salary in chennai |
web designer salary in delhi |
web designer salary in delhi ncr |
web designer salary in denmark |
web designer salary in dubai quora |
web designer salary in ecuador |
web designer salary in egypt |
web designer salary in england |
web designer salary in india |
web designer salary indianapolis |
web designer salary job description |
web designer salary los angeles |
web designer salary miami |
web designer salary nyc |
web designer salary per hour |
web designer salary per month |
web designer salary per year |
web designer salary reddit |
web designer salary san diego |
web designer salary san francisco |
web designer salary san jose |
web designer salary seattle |
web designer salary south africa |
web designer salary tampa |
web designer salary uk |
web designer tampa fl |
web designers in tampa |
web developers salary canada |
web developers salary in europe |
webex design |
After Google
The following is Chapter 5, ‘The 10 Laws of the Cryptocosm’, from Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (Regnery Publishing) by George Gilder
Google’s security foibles, its “aggregate and advertise” model, its avoidance of price signals, its silos of customer data, and its visions of machine mind are unlikely to survive the root-and-branch revolution of distributed peer-to-peer technology, which I call the “cryptocosm.”
Today, all around us, scores of thousands of engineers and entrepreneurs are contriving a new system of the world that transcends the limits and illusions of the Google realm.
In the Google era, the prime rule of the Internet is “Communications first.” That means everything is free to be copied, moved, and mutated. While most of us welcome “free” on the understanding that it means “no charge,” what we really want is to get what we ordered rather than what the authority chooses to provide. In practice, “free” means insecure, amorphous, unmoored, and changeable from the top. This communications-first principle served us well for many years.
The Internet is a giant asynchronous replicator that communicates by copying. Regulating all property rights in the information economy are the copy-master kings, chiefly at Google.
In this system, security is a function of the network, applied from the top, rather than a property of the device and its owner. So everything rises to the top, the Googleplex, which achieves its speed and efficiency by treating its users as if they were making random choices. That’s the essence of the mathematical model behind their search engine. You are a random function of Google.
But you are not random; you are a unique genetic entity that cannot be factored back into an egg and a sperm. You are unbreakably encrypted by biology. These asymmetrical natural codes are the ruling model and metaphor for enduring security. You start by defining not the goal but the ground state. Before you build the function or the structure, you build the foundation. It is the ultimate non-random reality. The ground state is you.
1. Utterly different from Google’s rule of communications first is the law of the Cryptocosm. The first rule is the barn-door law: “Security first.” Security is not a procedure or a mechanism; it is an architecture. Its keys and doors, walls and channels, roofs and windows define property and privacy at the device level. They determine who can go where and do what. Security cannot be retrofitted, patched, or improvised from above.
For you, security means not some average level of surveillance at the network level but the safety of your own identity, your own device, and your own property. You occupy and control a specific time and space. You cannot be blended or averaged. Just as you are part of a biological ledger, inscribed through time in DNA codes and irreversible by outside power, your properties and transactions compose an immutable ledger. Just as you are bound in time, every entry in the cryptocosmic ledger is timestamped.
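Gilder does not give code, but the timestamped, immutable ledger he alludes to is easy to sketch: each entry records a timestamp and the hash of the entry before it, so past entries cannot be quietly rewritten. This is my own toy illustration, not anything from the book.

```python
import hashlib
import json
import time

def add_entry(ledger, data):
    """Append a timestamped entry whose hash covers the previous entry,
    so altering history invalidates every later hash (a toy sketch only)."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
add_entry(ledger, "alice pays bob 5")
add_entry(ledger, "bob pays carol 2")
print(ledger[1]["prev_hash"] == ledger[0]["hash"])  # True: the entries are chained
```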
2. The second rule of the cryptocosm derives from the first: “Centralization is not safe.” Secure positions are decentralized ones, as human minds and DNA code are decentralized. Darwin’s mistake, and Google’s today, is to imagine that identity is a blend rather than a code—that machines can be a singularity, but human beings are random outcomes.
Centralization tells thieves what digital assets are most valuable and where they are. It solves their most difficult problems. Unless power and information are distributed throughout the system peer to peer, they are vulnerable to manipulation and theft from the blenders at the top.
3. The third rule is “Safety last.” Unless the architecture achieves its desired goals, safety and security are irrelevant. Security is a crucial asset of a functional system. Requiring the system to be safe at every step of construction results in a kludge: a machine too complex to use.
4. The fourth rule is “Nothing is free.” This rule is fundamental to human dignity and worth. Capitalism requires companies to serve their customers and to accept their proof of work, which is money.
5. The fifth rule is “Time is the final measure of cost.” Time is what remains scarce when all else becomes abundant: the speed of light and the span of life. The scarcity of time trumps an abundance of money.
6. The sixth rule: “Stable money endows humans with dignity and control.” Stable money reflects the scarcity of time. Without stable money, an economy is governed only by time and power.
7. The seventh rule is the “asymmetry law,” reproducing biological asymmetry. A message coded by a public key can be decrypted only by the private key, but the private key cannot be calculated from the public key. Asymmetric codes that are prohibitively difficult to break but easy to verify give power to the people. By contrast, symmetrical encryption gives power to the owners of the most costly computers.
8. The eighth rule is “Private keys rule.” They are what is secure. They cannot be blended or changed from on top any more than your DNA can be changed or blended from above.
9. The ninth rule is “Private keys are held by individual human beings, not by governments or Google.” Private keys enforce property rights and identities. In a challenge-response interaction, the challenger takes the public key and encrypts a message. The responder proves his identity by decrypting the message, amending it, and returning it encrypted anew with his private key. This process is a digital signature. By decrypting the new message with a public key, the final recipient is assured that the sender is who he says he is. The document has been digitally signed.
Ownership of private keys distributes power. The owner of a private key (id) can always respond to a challenge by proving ownership of the identity of a public address and the contents of a public ledger. Thus, in response to government claims and charges, the owner of the private key can prove his work and his record. By signing with a private key, the owner can always prove title to an item of property defined by a public key on a digital ledger.
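The challenge-response exchange in rule nine is easy to try with an off-the-shelf library. Here is a minimal sketch using Ed25519 signatures from Python's cryptography package; it uses signing rather than the encrypt-then-decrypt framing in the text, but the point is the same: only the holder of the private key can produce a signature that the public key verifies.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The responder holds the private key; only the public key is ever shared.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The challenger sends a fresh challenge; the responder signs it with the private key.
challenge = b"prove you control this address: nonce 48151623"
signature = private_key.sign(challenge)

# Anyone with the public key can verify the signature, but no one can forge it
# without the private key -- the asymmetry the chapter describes.
try:
    public_key.verify(signature, challenge)
    print("signature valid: responder holds the private key")
except InvalidSignature:
    print("signature invalid")
```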
10. The tenth rule is “Behind every private key and its public key is the human interpreter.” A focus on individual human beings makes meaningful security.
How will your experience of the world change when these ten rules define the new system?
Google is hierarchical. Life after Google will be heterarchical. Google is top-down. Life after Google will be bottom-up. Google rules by the insecurity of all the lower layers in the stack. A porous stack enables the money and power to be sucked up to the top. In life after Google, a secure ground state in the individual human being, registered and timestamped in a digital ledger, will prevent this suction of hierarchical power.
Whereas Google now controls your information and uses it free of charge, you will be master of your own information and charge for it freely. Try the Brave browser of Brendan Eich, formerly of Mozilla and the author of JavaScript. It gives you power over your data and enables you to charge for them.
Whereas Google envisages an era of machine dominance through artificial intelligence, you will rule your machines, and they will serve you as intelligent, willing slaves. You will be the “oracle” that programs your life and dictates to your tools.
Whereas Google’s “free world” tries to escape the laws of scarcity and the webs of price, you will live in a world brimming with information on the real costs and most efficient availabilities of what you want and need. The proof of your work will trump the claims of top-down speed and hierarchical power. The crude imperatives of “free” will give way to the calibrated voluntary exchanges of free markets and micropayments.
Whereas the Google world strains you through sieves of diversity and runs you through blenders of conformity, the new world will subsist on the foundation realities of individual uniqueness and choice. Whereas the Google world is stifling entrepreneurs’ access to the public markets through initial public offerings, which are down 90 percent in two decades, the new world will offer an array of new paths to enterprise. From initial coin offerings and token issues to crowdfunded projects, new financial devices are already empowering a new generation of entrepreneurs. The queues of abject “unicorns”—privately held start-ups worth a billion dollars or more—outside the merger and acquisition offices of Google and its rivals will be dispersed, replaced by herds of “gazelles” headed for public markets at last.
Whereas Google attempts to capture your eyeballs with ubiquitous advertisements, you will see advertisements at your own volition, when you want them, and you will be paid for your time and attention. Again, Brave is the leader of this movement.
Money is not a magic wand but a measuring stick, not wealth but a gauge of it. Whereas money in the Google era is fodder for a five-trillion-dollar-a-day currency exchange—that’s seventy-five times the amount of the world’s trade in goods and services—you will command unmediated money that measures value rather than manipulates it. Whereas the Google world is layered with middlemen and trusted third parties, you will deal directly with others around the globe with scant fees or delays.
Emerging is a peer-to-peer swarm of new forms of direct transactions beyond national borders and new forms of Uber and Airbnb beyond corporate gouges. Whereas the Google world confines you to one place and time and life, the new world will open up new dimensions and options of new life and experience where the only judge is the sovereign you.
Does the promise that human dignity will once again take its place on the Internet and that human beings will be masters of the cryptocosm sound too good to be true?
If these principles are enigmatic today, to explain their sources and ultimate success, we must, as Caltech’s Carver Mead tells us, “listen to the technology and find out what it is telling us.”
Add Story to Your Products
Inject stories into whatever you do or sell. In digital marketing, facts tell, but stories sell.
Mr. Pickles 360 Video (Funny)
This is my first video with my 360-degree camera: my best friend (Mr. Pickles), my wife, and me in the park. I have gotten better with this device and can make one for you too.
Why Is the Human Brain so Efficient?
How massive parallelism lifts the brain’s performance above that of AI.
The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units—neurons and transistors, respectively—that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory.1
Which has more problem-solving power—the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks—ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one’s lips—let alone conceptualization and creativity.
So why is the computer good at certain tasks whereas the brain is better at others? Comparing the computer and the brain has been instructive to both computer engineers and neuroscientists. This comparison started at the dawn of the modern computer era, in a small but profound book entitled The Computer and the Brain, by John von Neumann, a polymath who in the 1940s pioneered the design of a computer architecture that is still the basis of most modern computers today.2 Let’s look at some of these comparisons in numbers (Table 1).
The computer also has huge advantages over the brain in the precision of basic operations. The computer can represent quantities (numbers) with any desired precision according to the bits (binary digits, or 0s and 1s) assigned to each number. For instance, a 32-bit number has a precision of 1 in 2³², or about 1 in 4.2 billion. Empirical evidence suggests that most quantities in the nervous system (for instance, the firing frequency of neurons, which is often used to represent the intensity of stimuli) have variability of a few percent due to biological noise, or a precision of 1 in 100 at best, which is millionsfold worse than a computer.5
The calculations performed by the brain, however, are neither slow nor imprecise. For example, a professional tennis player can follow the trajectory of a tennis ball after it is served at a speed as high as 160 miles per hour, move to the optimal spot on the court, position his or her arm, and swing the racket to return the ball in the opponent’s court, all within a few hundred milliseconds. Moreover, the brain can accomplish all these tasks (with the help of the body it controls) with power consumption about tenfold less than a personal computer. How does the brain achieve that? An important difference between the computer and the brain is the mode by which information is processed within each system. Computer tasks are performed largely in serial steps. This can be seen by the way engineers program computers by creating a sequential flow of instructions. For this sequential cascade of operations, high precision is necessary at each step, as errors accumulate and amplify in successive steps. The brain also uses serial steps for information processing. In the tennis return example, information flows from the eye to the brain and then to the spinal cord to control muscle contraction in the legs, trunk, arms, and wrist.
But the brain also employs massively parallel processing, taking advantage of the large number of neurons and large number of connections each neuron makes. For instance, the moving tennis ball activates many cells in the retina called photoreceptors, whose job is to convert light into electrical signals. These signals are then transmitted to many different kinds of neurons in the retina in parallel. By the time signals originating in the photoreceptor cells have passed through two to three synaptic connections in the retina, information regarding the location, direction, and speed of the ball has been extracted by parallel neuronal circuits and is transmitted in parallel to the brain. Likewise, the motor cortex (part of the cerebral cortex that is responsible for volitional motor control) sends commands in parallel to control muscle contraction in the legs, the trunk, the arms, and the wrist, such that the body and the arms are simultaneously well positioned to receive the incoming ball.
This massively parallel strategy is possible because each neuron collects inputs from and sends output to many other neurons—on the order of 1,000 on average for both input and output for a mammalian neuron. (By contrast, each transistor has only three nodes for input and output all together.) Information from a single neuron can be delivered to many parallel downstream pathways. At the same time, many neurons that process the same information can pool their inputs to the same downstream neuron. This latter property is particularly useful for enhancing the precision of information processing. For example, information represented by an individual neuron may be noisy (say, with a precision of 1 in 100). By taking the average of input from 100 neurons carrying the same information, the common downstream partner neuron can represent the information with much higher precision (about 1 in 1,000 in this case).6
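Footnote 6 spells out the math; here is a quick numerical check of that square-root-of-n effect (my own illustration, not part of Luo's essay).

```python
import numpy as np

rng = np.random.default_rng(1)

true_signal = 0.5    # the value each neuron is "trying" to report
noise_sd = 0.01      # per-neuron noise, i.e. precision of about 1 in 100
n_neurons = 100
n_trials = 10_000

# Each trial: 100 noisy neurons report the same signal; the downstream
# neuron averages them.
reports = true_signal + noise_sd * rng.standard_normal((n_trials, n_neurons))
pooled = reports.mean(axis=1)

print(f"single-neuron sd ~ {reports[:, 0].std():.4f}")  # about 0.01
print(f"pooled sd        ~ {pooled.std():.4f}")          # about 0.001 = 0.01 / sqrt(100)
```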
The computer and the brain also have similarities and differences in the signaling mode of their elementary units. The transistor employs digital signaling, which uses discrete values (0s and 1s) to represent information. The spike in neuronal axons is also a digital signal since the neuron either fires or does not fire a spike at any given time, and when it fires, all spikes are approximately the same size and shape; this property contributes to reliable long-distance spike propagation. However, neurons also utilize analog signaling, which uses continuous values to represent information. Some neurons (like most neurons in our retina) are nonspiking, and their output is transmitted by graded electrical signals (which, unlike spikes, can vary continuously in size) that can transmit more information than can spikes. The receiving end of neurons (reception typically occurs in the dendrites) also uses analog signaling to integrate up to thousands of inputs, enabling the dendrites to perform complex computations.7
Another salient property of the brain, which is clearly at play in the return of service example from tennis, is that the connection strengths between neurons can be modified in response to activity and experience—a process that is widely believed by neuroscientists to be the basis for learning and memory. Repetitive training enables the neuronal circuits to become better configured for the tasks being performed, resulting in greatly improved speed and precision.
Over the past decades, engineers have taken inspiration from the brain to improve computer design. The principles of parallel processing and use-dependent modification of connection strength have both been incorporated into modern computers. For example, increased parallelism, such as the use of multiple processors (cores) in a single computer, is a current trend in computer design. As another example, “deep learning” in the discipline of machine learning and artificial intelligence, which has enjoyed great success in recent years and accounts for rapid advances in object and speech recognition in computers and mobile devices, was inspired by findings about the mammalian visual system.8 As in the mammalian visual system, deep learning employs multiple layers to represent increasingly abstract features (e.g., of a visual object or speech), and the weights of connections between different layers are adjusted through learning rather than designed by engineers. These recent advances have expanded the repertoire of tasks the computer is capable of performing. Still, the brain has greater flexibility, generalizability, and learning capability than the state-of-the-art computer. As neuroscientists uncover more secrets about the brain (increasingly aided by the use of computers), engineers can take more inspiration from the working of the brain to further improve the architecture and performance of computers. Whichever emerges as the winner for particular tasks, these interdisciplinary cross-fertilizations will undoubtedly advance both neuroscience and computer engineering.
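As a toy illustration of “multiple layers whose connection weights are adjusted through learning” (nothing like the visual system in detail, just the bare mechanism), here is a tiny two-layer network trained on a synthetic problem with plain numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic task (XOR), two layers, weights adjusted by gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer: 2 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer: 8 hidden units -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)    # hidden layer: learned feature detectors
    p = sigmoid(h @ W2 + b2)    # output layer: prediction
    # Backpropagate the error and adjust every connection weight a little.
    dp = p - y
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0] after training
```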
Liqun Luo is a professor in the School of Humanities and Sciences, and professor, by courtesy, of neurobiology, at Stanford University.
The author wishes to thank Ethan Richman and Jing Xiong for critiques and David Linden for expert editing.
By Liqun Luo, as published in Think Tank: Forty Scientists Explore the Biological Roots of Human Experience, edited by David J. Linden, and published by Yale University Press.
Footnotes
1. This essay was adapted from a section in the introductory chapter of Luo, L. Principles of Neurobiology (Garland Science, New York, NY, 2015), with permission.
2. von Neumann, J. The Computer and the Brain (Yale University Press, New Haven, CT, 2012), 3rd ed.
3. Patterson, D.A. & Hennessy, J.L. Computer Organization and Design (Elsevier, Amsterdam, 2012), 4th ed.
4. The assumption here is that arithmetic operations must convert inputs into outputs, so the speed is limited by basic operations of neuronal communication such as action potentials and synaptic transmission. There are exceptions to these limitations. For example, nonspiking neurons with electrical synapses (connections between neurons without the use of chemical neurotransmitters) can in principle transmit information faster than the approximately one millisecond limit; so can events occurring locally in dendrites.
5. Noise can reflect the fact that many neurobiological processes, such as neurotransmitter release, are probabilistic. For example, the same neuron may not produce identical spike patterns in response to identical stimuli in repeated trials.
6. Suppose that the standard deviation (σ) for each input approximates the noise (it reflects how wide the distribution is, in the same units as the mean). For the average of n independent inputs, the expected standard deviation of the mean is σ_mean = σ/√n. In our example, σ = 0.01 and n = 100; thus σ_mean = 0.001.
7. For example, dendrites can act as coincidence detectors to sum near-synchronous excitatory input from many different upstream neurons. They can also subtract inhibitory input from excitatory input. The presence of voltage-gated ion channels in certain dendrites enables them to exhibit “nonlinear” properties, such as amplification of electrical signals beyond simple addition.
8. LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Top Photographers
Producing creative, fresh projects is the key to standing out. Unique side projects are the best place to innovate, but balancing commercially and creatively lucrative work is tricky. So, this article looks at how to make side projects work and why they’re worthwhile, drawing on lessons learned from our development of the UX Companion app.
Explore the World
On her way she met a copy. The copy warned the Little Blind Text, that where it came from it would have been rewritten a thousand times and everything that was left from its origin would be the word “and” and the Little Blind Text should turn around and return to its own, safe country.
Create Blog Layout
As Vintage decided to take a closer look at the fast-paced New York web design realm in person, we got to know some of the most diverse and exceptionally captivating personalities.
LANDING PAGES CHEAT SHEET
- What’s the objective of the landing page? Defining this lets you be clear and concise about what you want and need to say. Is the objective opt-ins, sales, or downloads? Decide, and keep to one objective per landing page.
- Define your customers’ needs. Why do they need to take action on this landing page? Define their problem, pain, or desire. Once this is clear, the language you use will almost start to write itself.
- Write a headline that hits the customer right in the pain point. The headline needs to be attention-grabbing. It either needs to pose a question, answer one, or exclaim a desirable situation. This is the time to brag. People may only look at the headline briefly before crossing the page off. Make it count.
- Create enticing sales copy that complements the headline. If a customer gets past the headline, you need to ensure the sales copy is on point. Do this by packing it full of the benefits of taking action on the landing page. What problem will it solve? What can they achieve?
- Make the call-to-action button clear and visible. Whether this is an opt-in, buy-now, or download button, it should be big, bold, and clear. The customer should not need to go looking for this button. Using landing page software will ensure that the landing page design is optimized for conversions.
- Make it visual. Adding relevant photos that visually back up the benefits will only assist in persuading customers. Likewise, adding video gives you the opportunity to impart more information far more quickly than in text form. Keeping customers on your landing page for as long as possible will increase conversions.
- Make the form relevant. Next to the big call-to-action button you will be requesting certain information. Do not ask for information that is unnecessary. The more you ask for, the less likely a person is to click the big button and convert. So if you are merely list building, do not ask for phone numbers and the like. Make sure the customer believes the information requested is essential.
- Remove all links. Having any links that may take people away from the landing page, regardless of the reason, is a big no. No links to social media, the homepage, or testimonials. Once people move away, will they return? Highly unlikely.
- Give something away for free. A free offer will always increase opt-ins.
- Test, test, test. Not all landing pages will work on any given audience. Change the copy, target different pain points, experiment with video, use different colours. The options are endless. In time you will perfect the page for the intended audience. Always split test every campaign and keep the landing page that converts best.
Webpage Speed Test Tool
Page Speed Score
Page Code Analysis
Page Optimization Suggestions