The following information is used for educational purposes only.
Returning to the culture of work
The scandalous number of disability pensions paid in some provinces faithfully reflects the rise of clientelism and a shadowy welfare dependency
OCTOBER 27, 2016
Reports of serious irregularities in the granting of disability pensions in various provinces of the country, especially in the North, clearly show the damage that populism and political clientelism can inflict on a society, particularly in quasi-feudal districts such as Santiago del Estero, Formosa, or Chaco.
It is no coincidence that provinces with an oligarchy that exploits weak institutions to perpetuate itself in power, such as Santiago del Estero and Formosa, vie with Chaco for the top places in the poverty rankings and the bottom places in educational quality and health infrastructure. Not coincidentally, these are also provinces where the growing penetration of drug trafficking has turned them into landing strips for drug-laden aircraft.
Official data from the national Ministry of Social Development, the agency responsible for processing these subsidies, show that Argentina currently pays more than one million non-contributory disability pensions, reaching on average 6.9% of the economically active population (EAP), although this percentage climbs scandalously in some provinces. Regrettably, the national agency keeps no records of how these benefits have evolved over time. It does acknowledge the urgent need to overhaul the eligibility requirements, along with the evaluation and award criteria, and to conduct a complete review of cases.
It is worth noting that in the city of Buenos Aires disability pensions reach only 1% of the total population, while in provinces such as Buenos Aires and Santa Fe they barely exceed 2%.
It can only astonish, then, that 35% of the economically active population of Santiago del Estero (11% of all inhabitants) receives a disability pension; that in Formosa the figure reaches almost 32% (10% of the total population); and in Chaco close to 31% (9.8% of all inhabitants). These figures are mirrored in the rates of labor inactivity: 36% of the residents of Santiago del Estero, 43% of those of Formosa, and 45% of those of Chaco are not seeking employment.
In violation of the relevant regulations, there are entire families in which several members have enjoyed these pensions for more than a decade. We must painfully acknowledge that many children and young people have never seen their parents go out to work. The culture of the subsidy has replaced the culture of work, and no one wants to lose a pension by accepting a contract or a job. The informality of the occasional odd job has sadly come to satisfy every expectation; it traps people and all but banishes any dream of social mobility.
Both the former governor of Chaco and former national chief of cabinet, Jorge Capitanich, and the current provincial governor, Domingo Peppo, defended themselves in 2015 against cases of infant mortality in the province by shamelessly boasting of having cut poverty rates from 42% to 8% over the previous eight years.
We have repeatedly denounced that provincial oligarchies of a feudal stripe need the poor, and people plainly dependent on state salaries, subsidies, or handouts that secure their loyalty at the polls, in order to perpetuate themselves in power and keep feeding clientelism and, on occasion, a marked nepotism. Needless to say, discretionary allocation hurts those who genuinely need help, since in many cases they will never see the funds that political convenience diverts to other ends. The feeble reaction of the political leadership to these issues, which so deeply affect the future of coming generations, is lamentable.
Real change would mean putting an end to these ill-gotten subsidies and deepening the investigations, in order to shed light on the matter and duly prosecute the officials and political figures who may have fostered these pernicious irregularities.
Many years of systematic attacks on the healthy culture of work have gone by, with work replaced by a discretionally administered welfare dependency that can only lead any society to failure. It is urgent, then, to work to recover this vital tool for the dignity of individuals and the development of peoples.
Source: www.lanacion.com.ar
Sunday, October 23, 2016
TECH/GralInt-TED talks-Zeynep Tufekci: Machine intelligence makes human morals more important
Filmed June 2016 at TEDSummit
Zeynep Tufekci: Machine intelligence makes human morals more important
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns — and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
Transcript:
So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room.
"Can who tell if you're lying? And why are we whispering?"
The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist.
(Laughter)
And I was still a teenager. So I whisper-shouted back to him, "Yes, the computer can tell if you're lying."
(Laughter)
Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.
(Laughter)
Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.
Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking computation questions that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
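The contrast Tufekci draws between explicit instructions and learned, probabilistic answers can be sketched in a few lines. This is a toy illustration with invented data, not any real system: a hand-written rule versus a crude word-frequency model that only ever says "probably."

```python
from collections import Counter

# Traditional programming: the programmer spells out the exact condition.
def rule_based_spam(message):
    return "free money" in message.lower()

# Machine learning, sketched as a naive word-frequency model: no rule is
# ever written down; the behavior is whatever falls out of the data.
def train(examples):
    # examples: list of (message, is_spam) pairs
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def predict(model, message):
    spam_words, ham_words = model
    words = message.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    total = spam_score + ham_score
    # Not a yes/no answer -- a probability: "probably more like spam."
    return spam_score / total if total else 0.5

training_data = [
    ("win free money now", True),
    ("free prize inside", True),
    ("meeting moved to noon", False),
    ("lunch at noon today", False),
]
model = train(training_data)
print(predict(model, "free money prize"))   # 1.0: probably spam
print(predict(model, "meeting at noon"))    # 0.0: probably not
```

The rule-based function answers under a single-answer logic; the learned one returns a graded score whose reasoning was never stated by anyone.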
Now, the upside is: this method is really powerful. The head of Google's AI systems called it, "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.
"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails.
(Laughter)
She stared at me and she said, "I don't want to hear another word about this." And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.
(Laughter)
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to steadily but stealthily shutting people with a higher risk of depression out of the job market. Is this the kind of society we want to build, without even knowing we've done it, because we turned decision-making over to machines we don't totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting a criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover but sometimes we never know about, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested.
She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, a man had been arrested for shoplifting at Home Depot -- 85 dollars' worth of stuff, a similarly petty crime. But he had two prior armed robbery convictions. Yet the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended; it was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not let them have this kind of unchecked power.
(Applause)
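The kind of disparity check behind such an audit can be sketched simply. This does not reproduce ProPublica's actual methodology; it is a minimal illustration with invented records, comparing how often people who did not reoffend were nonetheless flagged high risk, broken out by group.

```python
# Hypothetical audit records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("B", True,  False), ("B", False, False),
    ("B", False, True),  ("B", False, False),
]

def false_positive_rate(records, group):
    # Among people in `group` who did NOT reoffend, what fraction
    # were flagged high risk anyway?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

print(false_positive_rate(records, "A"))   # 2/3: flagged-but-harmless, group A
print(false_positive_rate(records, "B"))   # 1/3: flagged-but-harmless, group B
```

In this toy data, group A's false positive rate is twice group B's; an audit of this shape is how a disparity like the one ProPublica reported becomes visible from outcomes alone, without access to the model.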
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?
(Laughter)
A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.
The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like"? It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."
(Hums Final Jeopardy music)
Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make.
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
(Laughter)
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.
So yes, humans have always had biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.
(Applause)
Artificial intelligence does not give us a "Get out of ethics free" card.
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.
Thank you.
(Applause)
Filmed June 2016 at TEDSummit
Zeynep Tufekci: Machine intelligence makes human morals more important
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns — and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
Transcript:
So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room.
"Can who tell if you're lying? And why are we whispering?"
The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist.
(Laughter)
And I was still a teenager. So I whisper-shouted back to him, "Yes, the computer can tell if you're lying."
(Laughter)
Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.
(Laughter)
Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.
Machine intelligence is here. We're now using computation to make all sort of decisions, but also new kinds of decisions. We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
Now, the upside is: this method is really powerful. The head of Google's AI systems called it, "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.
"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails.
(Laughter)
She stared at me and she said, "I don't want to hear another word about this." And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.
(Laughter)
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making to machines we don't totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms that researchers uncover sometimes but sometimes we don't know, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested.
She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended. It was just hard to get a job for her with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not have them have this kind of unchecked power.
(Applause)
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?
(Laughter)
A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.
The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like?" It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."
(Hums Final Jeopardy music)
Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make.
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
(Laughter)
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.
So yes, humans have always had biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.
(Applause)
Artificial intelligence does not give us a "Get out of ethics free" card.
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.
Thank you.
(Applause)
POL/SOC/GralInt-TED Talks-Philippa Neave: The unexpected challenges of a country's first election
The following information is used for educational purposes only.
Filmed September 2016 at TEDNYC
Philippa Neave: The unexpected challenges of a country's first election
How do you teach an entire country how to vote when no one has done it before? It's a huge challenge facing fledgling democracies around the world — and one of the biggest problems turns out to be a lack of shared language. After all, if you can't describe something, you probably can't understand it. In this eye-opening talk, election expert Philippa Neave shares her experiences from the front lines of democracy — and her solution to this unique language gap.
Transcript:
The great philosopher Aristotle said if something doesn't exist, there's no word for it, and if there's no word for something, that something doesn't exist. So when we talk about elections, we in established democracies, we know what we're talking about. We've got the words. We have the vocabulary. We know what a polling station is. We know what a ballot paper is. But what about countries where democracy doesn't exist, countries where there are no words to describe the concepts that underpin a democratic society?
I work in the field of electoral assistance, so that's to say we assist emerging democracies to organize what is often their first elections. When people ask me what I do, quite often I get this answer. "Oh, so you're one of these people who goes around the world imposing Western democracy on countries that can't handle it." Well, the United Nations does not impose anything on anybody. It really doesn't, and also, what we do is firmly anchored in the 1948 Universal Declaration of Human Rights, Article 21, that says that everybody should have the right to choose who governs them.
So that's the basis of the work. I specialize in public outreach. What does that mean? Another jargon. It actually means designing information campaigns so that candidates and voters who have never had the opportunity to participate or to vote understand where, when, how to register; where, when, how to vote; why, why it is important to take part. So I'll probably devise a specific campaign to reach out to women to make sure that they can take part, that they can be part of the process. Young people as well. All sorts of people. Handicapped people. We try to reach everybody.
And it's not always easy, because very often in this work, I've noticed now over the years that I've been doing it that words are lacking, and so what do you do?
Afghanistan. It's a country with high levels of illiteracy, and the thing about that was, it was in 2005, and we organized two elections on the same day. The reason was because the logistics are so incredibly difficult, it seemed to be more efficient to do that. It was, but on the other hand, explaining two elections instead of one was even more complicated. So we used a lot of images, and when it came to the actual ballot, we had problems, because so many people wanted to take part, we had 300 candidates for 52 seats in the Wolesi Jirga, which is the parliamentary elections. And for the Provincial Council, we had even more candidates. We had 330 for 54 seats. So talking about ballot design, this is what the ballot looked like. It's the size of a newspaper. This was the Wolesi Jirga ballot -- (Laughter) Yeah, and -- this was the Provincial Council ballot. Even more. So you see, we did use a lot of symbols and things like that.
And we had other problems in Southern Sudan. Southern Sudan was a very different story. We had so many people who had never, of course, voted, but we had extremely, extremely high levels of illiteracy, very, very poor infrastructure. For example -- I mean, it's a country the size of Texas, more or less. We had seven kilometers of paved roads, seven kilometers in the whole country, and that includes the tarmac where we landed the planes in Juba Airport. So transporting electoral materials, etc., is exceedingly difficult. People had no idea about what a box looked like. It was very complicated, so using verbal communication was obviously the way to go, but there were 132 languages. So that was extremely challenging.
Then I arrived in Tunisia in 2011. It was the Arab Spring. A huge amount of hope was generated by that enormous movement that was going on in the region. There was Libya, there was Egypt, there was Yemen. It was an enormous, enormous historical moment. And I was sitting with the election commission, and we were talking about various aspects of the election, and I was hearing them using words that I hadn't actually heard before, and I'd worked with Iraqis, I'd worked with Jordanians, Egyptians, and suddenly they were using these words, and I just thought, "This is strange." And what really gave rise to it was this word "observer." We were discussing election observers, and the election commissioner was talking about "mulahiz" in Arabic. This means "to notice" in a passive sort of sense, as in, "I noticed he was wearing a light blue shirt." Did I go and check whether the shirt was light blue or not? That is the role of an election observer. It's very active, it's governed by all kinds of treaties, and it has got that control function in it. And then I got wind of the fact that in Egypt, they were using this term "mutabi’," which means "to follow." So we were now having followers of an election. So that's not quite right either, because there is a term that's already accepted and in use, which was the word "muraqib" which means "a controller." It's got that notion of control. So I thought, three words for one concept. This is not good. And with our colleagues, we thought perhaps it's our role to actually help make sure that the words are understood and actually create a work of reference that could be used across the Arab region.
And that's what we did. So together with these colleagues, we launched the "Arabic Lexicon of Electoral Terminology," and we worked in eight different countries. It meant actually defining 481 terms which formed the basis of everything you need to know if you're going to organize a democratic election. And we defined these terms, and we worked with the Arab colleagues and came to an agreement about what would be the appropriate word to use in Arabic. Because the Arabic language is very rich, and that's part of the problem. But there are 22 countries that speak Arabic, and they use modern standard Arabic, which is the Arabic that is used across the whole region in newspapers and broadcasts, but of course, from one country to the next, day-to-day language and usage varies -- dialect, colloquialisms, etc. So that was another added layer of complication. So in one sense you had the problem that the language wasn't fully ripe, if you like -- neologisms were coming up, new expressions.
And so we defined all these terms, and then we had eight correspondents in the region. We submitted the draft to them, they responded back to us. "Yes, we understand the definition. We agree with it, but this is what we say in our country." Because we were not going to harmonize or force harmonization. We were trying to facilitate understanding among people. So in yellow, you see the different expressions in use in the various countries.
So this, I'm happy to say, it took three years to produce this because we also finalized the draft and took it actually into the field, sat with the election commissions in all these different countries, debated and defined and refined the draft, and finally published it in November 2014 in Cairo. And it's gone a long way. We published 10,000 copies. To date, there's about 3,000 downloads off the internet in PDF form. I heard just recently from a colleague that they've taken it up in Somalia. They're going to produce a version of this in Somalia, because there's nothing in Somalia at all. So that's very good to know. And this newly formed Arab Organization for Electoral Management Bodies, which is trying to professionalize how elections are run in the region, they're using it as well. And the Arab League have now built up a pan-Arab observation unit, and they're using it. So that's all really good.
However, this work of reference is quite high-pitched. It's complex, and a lot of the terms are quite technical, so the average person probably doesn't need to know at least a third of it. But the people of the Middle East have been deprived of any form of what we know as civic education. It's part of our curriculum at school. It doesn't really exist in that part of the world, and I feel it's really the right of everybody to know how these things work. And it's a good thing to think about producing a work of reference for the average person, and bearing in mind that now we have a basis to work with, but also we have technology, so we can reach out using telephone apps, video, animation. There's all sorts of tools that can be used now to communicate these ideas to people for the first time in their own language.
We hear a lot of misery about the Middle East. We hear the chaos of war. We hear terrorism. We hear about sectarianism and all this horrible negative news that comes to us all the time. What we're not hearing is what are the people, the everyday people, thinking? What are they aspiring to? Let's give them the means, let's give them the words. The silent majority is silent because they don't have the words. The silent majority needs to know. It is time to provide people with the knowledge tools that they can inform themselves with.
The silent majority does not need to be silent. Let's help them have a voice.
Thank you very much.
(Applause)
SOC/GralInt-TED talks-Rachel Botsman: We've stopped trusting institutions and started trusting strangers
The following information is used for educational purposes only.
Filmed June 2016 at TEDSummit
Rachel Botsman: We've stopped trusting institutions and started trusting strangers
Something profound is changing our concept of trust, says Rachel Botsman. While we used to place our trust in institutions like governments and banks, today we increasingly rely on others, often strangers, on platforms like Airbnb and Uber and through technologies like the blockchain. This new era of trust could bring with it a more transparent, inclusive and accountable society — if we get it right. Who do you trust?
Transcript:
Let's talk about trust. We all know trust is fundamental, but when it comes to trusting people, something profound is happening.
Please raise your hand if you have ever been a host or a guest on Airbnb. Wow. That's a lot of you.
Who owns Bitcoin? Still a lot of you. OK.
And please raise your hand if you've ever used Tinder to help you find a mate.
(Laughter)
This one's really hard to count because you're kind of going like this.
(Laughter)
These are all examples of how technology is creating new mechanisms that are enabling us to trust unknown people, companies and ideas. And yet at the same time, trust in institutions -- banks, governments and even churches -- is collapsing. So what's happening here, and who do you trust?
Let's start in France with a platform -- with a company, I should say -- with a rather funny-sounding name, BlaBlaCar. It's a platform that matches drivers and passengers who want to share long-distance journeys together. The average ride taken is 320 kilometers. So it's a good idea to choose your fellow travelers wisely. Social profiles and reviews help people make a choice. You can see if someone's a smoker, you can see what kind of music they like, you can see if they're going to bring their dog along for the ride. But it turns out that the key social identifier is how much you're going to talk in the car.
(Laughter)
Bla, not a lot, bla bla, you want a nice bit of chitchat, and bla bla bla, you're not going to stop talking the entire way from London to Paris.
(Laughter)
It's remarkable, right, that this idea works at all, because it's counter to the lesson most of us were taught as a child: never get in a car with a stranger. And yet, BlaBlaCar transports more than four million people every single month. To put that in context, that's more passengers than the Eurostar or JetBlue airlines carry. BlaBlaCar is a beautiful illustration of how technology is enabling millions of people across the world to take a trust leap.
A trust leap happens when we take the risk to do something new or different to the way that we've always done it. Let's try to visualize this together. OK. I want you to close your eyes. There is a man staring at me with his eyes wide open. I'm on this big red circle. I can see. So close your eyes.
(Laughter) (Applause)
I'll do it with you. And I want you to imagine there exists a gap between you and something unknown. That unknown can be someone you've just met. It can be a place you've never been to. It can be something you've never tried before. You got it? OK. You can open your eyes now. For you to leap from a place of certainty, to take a chance on that someone or something unknown, you need a force to pull you over the gap, and that remarkable force is trust.
Trust is an elusive concept, and yet we depend on it for our lives to function. I trust my children when they say they're going to turn the lights out at night. I trusted the pilot who flew me here to keep me safe. It's a word we use a lot, without always thinking about what it really means and how it works in different contexts of our lives.
There are, in fact, hundreds of definitions of trust, and most can be reduced to some kind of risk assessment of how likely it is that things will go right. But I don't like this definition of trust, because it makes trust sound rational and predictable, and it doesn't really get to the human essence of what it enables us to do and how it empowers us to connect with other people.
So I define trust a little differently. I define trust as a confident relationship to the unknown. Now, when you view trust through this lens, it starts to explain why it has the unique capacity to enable us to cope with uncertainty, to place our faith in strangers, to keep moving forward.
Human beings are remarkable at taking trust leaps. Do you remember the first time you put your credit card details into a website? That's a trust leap. I distinctly remember telling my dad that I wanted to buy a navy blue secondhand Peugeot on eBay, and he rightfully pointed out that the seller's name was "Invisible Wizard" and that this probably was not such a good idea.
(Laughter)
So my work, my research focuses on how technology is transforming the social glue of society, trust between people, and it's a fascinating area to study, because there's still so much we do not know. For instance, do men and women trust differently in digital environments? Does the way we build trust face-to-face translate online? Does trust transfer? So if you trust finding a mate on Tinder, are you more likely to trust finding a ride on BlaBlaCar?
But from studying hundreds of networks and marketplaces, there is a common pattern that people follow, and I call it "climbing the trust stack." Let me use BlaBlaCar as an example to bring it to life. On the first level, you have to trust the idea. So you have to trust that the idea of ride-sharing is safe and worth trying. The second level is about having confidence in the platform, that BlaBlaCar will help you if something goes wrong. And the third level is about using little bits of information to decide whether the other person is trustworthy.
Now, the first time we climb the trust stack, it feels weird, even risky, but we get to a point where these ideas seem totally normal. Our behaviors transform, often relatively quickly. In other words, trust enables change and innovation.
So an idea that intrigued me, and I'd like you to consider, is whether we can better understand major waves of disruption and change in individuals in society through the lens of trust. Well, it turns out that trust has only evolved in three significant chapters throughout the course of human history: local, institutional and what we're now entering, distributed.
So for a long time, until the mid-1800s, trust was built around tight-knit relationships. So say I lived in a village with the first five rows of this audience, and we all knew one another, and say I wanted to borrow money. The man who had his eyes wide open, he might lend it to me, and if I didn't pay him back, you'd all know I was dodgy. I would get a bad reputation, and you would refuse to do business with me in the future. Trust was mostly local and accountability-based.
In the mid-19th century, society went through a tremendous amount of change. People moved to fast-growing cities such as London and San Francisco, and a local banker here was replaced by large corporations that didn't know us as individuals. We started to place our trust into black box systems of authority, things like legal contracts and regulation and insurance, and less trust directly in other people. Trust became institutional and commission-based.
It's widely talked about how trust in institutions and many corporate brands has been steadily declining and continues to do so. I am constantly stunned by major breaches of trust: the News Corp phone hacking, the Volkswagen emissions scandal, the widespread abuse in the Catholic Church, the fact that only one measly banker went to jail after the great financial crisis, or more recently the Panama Papers that revealed how the rich can exploit offshore tax regimes. And the thing that really surprises me is why do leaders find it so hard to apologize, I mean sincerely apologize, when our trust is broken?
It would be easy to conclude that institutional trust isn't working because we are fed up with the sheer audacity of dishonest elites, but what's happening now runs deeper than the rampant questioning of the size and structure of institutions. We're starting to realize that institutional trust wasn't designed for the digital age. Conventions of how trust is built, managed, lost and repaired -- in brands, leaders and entire systems -- is being turned upside down.
Now, this is exciting, but it's frightening, because it forces many of us to have to rethink how trust is built and destroyed with our customers, with our employees, even our loved ones.
The other day, I was talking to the CEO of a leading international hotel brand, and as is often the case, we got onto the topic of Airbnb. And he admitted to me that he was perplexed by their success. He was perplexed at how a company that depends on the willingness of strangers to trust one another could work so well across 191 countries. So I said to him that I had a confession to make, and he looked at me a bit strangely, and I said -- and I'm sure many of you do this as well -- I don't always bother to hang my towels up when I'm finished in the hotel, but I would never do this as a guest on Airbnb. And the reason why I would never do this as a guest on Airbnb is because guests know that they'll be rated by hosts, and that those ratings are likely to impact their ability to transact in the future. It's a simple illustration of how online trust will change our behaviors in the real world, make us more accountable in ways we cannot yet even imagine.
I am not saying we do not need hotels or traditional forms of authority. But what we cannot deny is that the way trust flows through society is changing, and it's creating this big shift away from the 20th century that was defined by institutional trust towards the 21st century that will be fueled by distributed trust. Trust is no longer top-down. It's being unbundled and inverted. It's no longer opaque and linear. A new recipe for trust is emerging that once again is distributed amongst people and is accountability-based.
And this shift is only going to accelerate with the emergence of the blockchain, the innovative ledger technology underpinning Bitcoin. Now let's be honest, getting our heads around the way blockchain works is mind-blowing. And one of the reasons why is that it involves processing some pretty complicated concepts with terrible names. I mean, cryptographic algorithms and hash functions, and people called miners, who verify transactions -- all that was created by this mysterious person or persons called Satoshi Nakamoto. Now, that is a massive trust leap that hasn't happened yet.
(Applause)
But let's try to imagine this. So "The Economist" eloquently described the blockchain as the great chain of being sure about things. The easiest way I can describe it is imagine the blocks as spreadsheets, and they are filled with assets. So that could be a property title. It could be a stock trade. It could be a creative asset, such as the rights to a song. Every time something moves from one place on the register to somewhere else, that asset transfer is time-stamped and publicly recorded on the blockchain. It's that simple. Right.
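The "blocks as spreadsheets filled with assets" picture above can be made concrete with a toy sketch. This is not a real blockchain implementation (there is no mining, no network, no consensus), just an illustration of the two properties the talk describes: each transfer is time-stamped, and each block is chained to the one before it so the record is publicly checkable.

```python
import hashlib
import json
import time

def make_block(assets, prev_hash):
    """Record a batch of asset entries in a time-stamped, hash-chained block.

    An asset could be a property title, a stock trade, or the rights
    to a song, as in the examples above.
    """
    block = {
        "timestamp": time.time(),
        "assets": assets,
        "prev_hash": prev_hash,
    }
    # Hashing the block's contents makes later tampering evident:
    # changing any entry changes this hash and breaks the chain.
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A tiny chain: a genesis block, then a transfer of a song's rights.
genesis = make_block([{"asset": "song rights", "owner": "alice"}],
                     prev_hash="0" * 64)
transfer = make_block([{"asset": "song rights", "owner": "bob"}],
                      prev_hash=genesis["hash"])

# Verifying the link needs no third party: anyone can recompute the hashes.
assert transfer["prev_hash"] == genesis["hash"]
```

The names `make_block`, `alice` and `bob` are illustrative; a real ledger would also need a consensus mechanism to decide which chain of blocks is authoritative.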
So the real implication of the blockchain is that it removes the need for any kind of third party, such as a lawyer, or a trusted intermediary, or maybe not a government intermediary to facilitate the exchange. So if we go back to the trust stack, you still have to trust the idea, you have to trust the platform, but you don't have to trust the other person in the traditional sense.
The implications are huge. In the same way the internet blew open the doors to an age of information available to everyone, the blockchain will revolutionize trust on a global scale.
Now, I've waited to the end intentionally to mention Uber, because I recognize that it is a contentious and widely overused example, but in the context of a new era of trust, it's a great case study. Now, we will see cases of abuse of distributed trust. We've already seen this, and it can go horribly wrong. I am not surprised that we are seeing protests from taxi associations all around the world trying to get governments to ban Uber based on claims that it is unsafe. I happened to be in London the day that these protests took place, and I happened to notice a tweet from Matt Hancock, who is a British minister for business.
And he wrote, "Does anyone have details of this #Uber app everyone's talking about?
(Laughter)
I'd never heard of it until today."
Now, the taxi associations, they legitimized the first layer of the trust stack. They legitimized the idea that they were trying to eliminate, and sign-ups increased by 850 percent in 24 hours. Now, this is a really strong illustration of how once a trust shift has happened around a behavior or an entire sector, you cannot reverse the story. Every day, five million people will take a trust leap and ride with Uber. In China, on Didi, the ride-sharing platform, 11 million rides are taken every day. That's 127 rides per second, showing that this is a cross-cultural phenomenon.
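The per-second figure quoted above follows directly from the daily total, which is easy to check:

```python
# Sanity check of the Didi figure: 11 million rides a day
# works out to roughly 127 rides every second.
rides_per_day = 11_000_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day
rides_per_second = rides_per_day / seconds_per_day
print(round(rides_per_second))  # about 127
```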
And the fascinating thing is that both drivers and passengers report that seeing a name and seeing someone's photo and their rating makes them feel safer, and as you may have experienced, even behave a little more nicely in the taxi cab. Uber and Didi are early but powerful examples of how technology is creating trust between people in ways and on a scale never possible before.
Today, many of us are comfortable getting into cars driven by strangers. We meet up with someone we swiped right to be matched with. We share our homes with people we do not know.
This is just the beginning, because the real disruption happening isn't technological. It's the trust shift it creates, and for my part, I want to help people understand this new era of trust so that we can get it right and we can embrace the opportunities to redesign systems that are more transparent, inclusive and accountable.
Thank you very much.
(Applause)
Thank you.
(Applause)
Filmed June 2016 at TEDSummit
Rachel Botsman: We've stopped trusting institutions and started trusting strangers
Something profound is changing our concept of trust, says Rachel Botsman. While we used to place our trust in institutions like governments and banks, today we increasingly rely on others, often strangers, on platforms like Airbnb and Uber and through technologies like the blockchain. This new era of trust could bring with it a more transparent, inclusive and accountable society — if we get it right. Who do you trust?
HEALTH/GralInt-TED Talks-Todd Coleman: A temporary tattoo that brings hospital care to the home
Filmed November 2015 at TEDMED 2015
Todd Coleman: A temporary tattoo that brings hospital care to the home
What if doctors could monitor patients at home with the same degree of accuracy they'd get during a stay at the hospital? Bioelectronics innovator Todd Coleman shares his quest to develop wearable, flexible electronic health monitoring patches that promise to revolutionize healthcare and make medicine less invasive.
Transcript:
Please meet Jane. She has a high-risk pregnancy. Within 24 weeks, she's on bed rest at the hospital, being monitored for her preterm contractions.
She doesn't look the happiest. That's in part because it requires technicians and experts to apply these clunky belts on her to monitor her uterine contractions. Another reason Jane is not so happy is because she's worried. In particular, she's worried about what happens after her 10-day stay on bed rest at the hospital. What happens when she's home? If she were to give birth this early it would be devastating. As an African-American woman, she's twice as likely to have a premature birth or to have a stillbirth. So Jane basically has one of two options: stay at the hospital on bed rest, a prisoner to the technology until she gives birth, and then spend the rest of her life paying for the bill; or head home after her 10-day stay and hope for the best. Neither of these two options seems appealing.
As I began to think about stories like this and hear about stories like this, I began to ask myself and imagine: Is there an alternative? Is there a way we could have the benefits of high-fidelity monitoring that we get with our trusted partners in the hospital while someone is at home living their daily life?
With that in mind, I encouraged people in my research group to partner with some clever material scientists, and all of us came together and brainstormed. And after a long process, we came up with a vision, an idea, of a wearable system that perhaps you could wear like a piece of jewelry or you could apply to yourself like a Band-Aid. And after many trials and tribulations and years of endeavors, we were able to come up with this flexible electronic patch that was manufactured using the same processes that they use to build computer chips, except the electronics are transferred from a semiconductor wafer onto a flexible material that can interface with the human body.
These systems are about the thickness of a human hair. They can measure the types of information that we want, things such as: bodily movement, bodily temperature, electrical rhythms of the body and so forth. We can also engineer these systems, so they can integrate energy sources, and can have wireless transmission capabilities.
So as we began to build these types of systems, we began to test them on ourselves in our research group. But in addition, we began to reach out to some of our clinical partners in San Diego, and test these on different patients in different clinical conditions, including moms-to-be like Jane.
Here is a picture of a pregnant woman in labor at our university hospital being monitored for her uterine contractions with the conventional belt. In addition, our flexible electronic patches are there. This picture demonstrates waveforms pertaining to the fetal heart rate, where the red corresponds to what was acquired with the conventional belts, and the blue corresponds to our estimates using our flexible electronic systems and our algorithms.
At this moment, we gave ourselves a big mental high five. Some of the things we had imagined were beginning to come to fruition, and we were actually seeing this in a clinical context.
But there was still a problem. The problem was, the way we manufactured these systems was very inefficient, had low yield and was very error-prone. In addition, as we talked to some of the nurses in the hospital, they encouraged us to make sure that our electronics worked with typical medical adhesives that are used in a hospital. We had an epiphany and said, "Wait a minute. Rather than just making them work with adhesives, let's integrate them into adhesives, and that could solve our manufacturing problem."
This picture that you see here is our ability to embed these sensors inside of a piece of Scotch tape by simply peeling it off of a wafer. Ongoing work in our research group allows us to, in addition, embed integrated circuits into the flexible adhesives to do things like amplifying signals and digitizing them, processing them and encoding them for wireless transmission. All of this integrated into the same medical adhesives that are used in the hospital.
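The amplify/digitize/encode chain described above can be sketched schematically. This is not the device's firmware, just an illustration of the steps, with assumed values (a gain of 1000, a 3.3 V full scale, a 12-bit converter) chosen for the example:

```python
def amplify(sample_volts, gain=1000.0):
    # Biopotential signals are tiny (microvolts to millivolts),
    # so the analog front end applies a large gain. Gain is assumed.
    return sample_volts * gain

def digitize(volts, full_scale=3.3, bits=12):
    # Quantize to a 12-bit ADC code, clamped to the valid range.
    levels = (1 << bits) - 1
    code = round(volts / full_scale * levels)
    return max(0, min(levels, code))

def encode(codes):
    # Pack each 12-bit code into two bytes for the wireless link.
    return b"".join(code.to_bytes(2, "big") for code in codes)

# A few millivolt-scale samples, as a body-surface sensor might see.
raw = [0.0005, 0.0012, 0.0009]
packet = encode([digitize(amplify(s)) for s in raw])
```

Each stage maps to one of the capabilities the paragraph lists: amplifying, digitizing, processing, and encoding for transmission.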
So when we reached this point, we had some other challenges, from both an engineering as well as a usability perspective, to make sure that it could be used practically.
In many digital health discussions, people believe in and embrace the idea that we can simply digitize the data, wirelessly transmit it, send it to the cloud, and in the cloud, we can extract meaningful information for interpretation. And indeed, you can do all of that, if you're not worried about some of the energy challenges. Think about Jane for a moment. She doesn't live in Palo Alto, nor does she live in Beverly Hills. What that means is, we have to be mindful about her data plan and how much it would cost for her to be sending out a continuous stream of data.
There's another challenge that not everyone in the medical profession is comfortable talking about. And that is, that Jane does not have the most trust in the medical establishment. She, people like her, her ancestors, have not had the best experiences at the hands of doctors and the hospital or insurance companies. That means that we have to be mindful of questions of privacy. Jane might not feel that happy about all that data being processed into the cloud. And Jane cannot be fooled; she reads the news. She knows that if the federal government can be hacked, if the Fortune 500 can be hacked, so can her doctor.
And so with that in mind, we had an epiphany. We cannot outsmart all the hackers in the world, but perhaps we can present them a smaller target. What if we could actually, rather than have those algorithms that do data interpretation run in the cloud, what if we have those algorithms run on those small integrated circuits embedded into those adhesives?
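The trade-off behind that epiphany can be sketched in a few lines: streaming raw data to the cloud costs bandwidth (and widens the attack surface), while interpreting on the patch transmits only a small summary. The byte counts and threshold here are invented for illustration, not measurements from the actual system:

```python
def cloud_design(samples):
    # Ship every raw sample to the cloud: the payload grows with the
    # stream, straining Jane's data plan. Assume 8 bytes per sample.
    return {"payload": samples, "bytes_sent": len(samples) * 8}

def on_patch_design(samples, alarm_threshold=100.0):
    # Interpret locally on the embedded circuit; transmit only an
    # alert flag and a summary statistic, whatever the stream length.
    mean = sum(samples) / len(samples)
    return {"payload": {"alert": mean > alarm_threshold, "mean": mean},
            "bytes_sent": 16}

# One hour of once-per-second readings (synthetic values).
stream = [100.0 + (i % 5) for i in range(3600)]
cloud = cloud_design(stream)        # 28,800 bytes for this hour
on_patch = on_patch_design(stream)  # a fixed 16 bytes
```

The hypothetical `alarm_threshold` stands in for whatever clinical rule the monitoring algorithm applies; the point is only that interpretation moves onto the patch and the raw stream never leaves it.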
And so when we integrate these things together, what this means is that now we can think about the future where someone like Jane can still go about living her normal daily life, she can be monitored, it can be done in a way where she doesn't have to get another job to pay her data plan, and we can also address some of her concerns about privacy.
So at this point, we're feeling very good about ourselves. We've accomplished this, we've begun to address some of these questions about privacy and we feel like, pretty much the chapter is closed now. Everyone lived happily ever after, right? Well, not so fast.
(Laughter)
One of the things we have to remember, as I mentioned earlier, is that Jane does not have the most trust in the medical establishment. We have to remember that there are increasing and widening health disparities, and there's inequity in terms of proper care management. And so what that means is that this simple picture of Jane and her data -- even with her being comfortable being wirelessly transmitted to the cloud, letting a doctor intervene if necessary -- is not the whole story.
So what we're beginning to do is to think about ways to have trusted parties serve as intermediaries between people like Jane and her health care providers. For example, we've begun to partner with churches and to think about nurses that are church members, that come from that trusted community, as patient advocates and health coaches to people like Jane.
Another thing we have going for us is that insurance companies, increasingly, are attracted to some of these ideas. They're increasingly realizing that perhaps it's better to pay one dollar now for a wearable device and a health coach, rather than paying 10 dollars later, when that baby is born prematurely and ends up in the neonatal intensive care unit -- one of the most expensive parts of a hospital.
This has been a long learning process for us. This iterative process of breaking through and attacking one problem and not feeling totally comfortable, and identifying the next problem, has helped us go along this path of actually trying to not only innovate with this technology but make sure it can be used for people who perhaps need it the most.
Another lesson we've taken from this process, a very humbling one, is that as technology progresses and advances at an accelerating rate, we have to remember that human beings are using this technology, and we have to be mindful that these human beings -- they have a face, they have a name and a life. And in the case of Jane, hopefully, two.
Thank you.
(Applause)
Filmed November 2015 at TEDMED 2015
Todd Coleman: A temporary tattoo that brings hospital care to the home
What if doctors could monitor patients at home with the same degree of accuracy they'd get during a stay at the hospital? Bioelectronics innovator Todd Coleman shares his quest to develop wearable, flexible electronic health monitoring patches that promise to revolutionize healthcare and make medicine less invasive.
Transcript:
Please meet Jane. She has a high-risk pregnancy. Within 24 weeks, she's on bed rest at the hospital, being monitored for her preterm contractions.
She doesn't look the happiest. That's in part because it requires technicians and experts to apply these clunky belts on her to monitor her uterine contractions. Another reason Jane is not so happy is because she's worried. In particular, she's worried about what happens after her 10-day stay on bed rest at the hospital. What happens when she's home? If she were to give birth this early it would be devastating. As an African-American woman, she's twice as likely to have a premature birth or to have a stillbirth. So Jane basically has one of two options: stay at the hospital on bed rest, a prisoner to the technology until she gives birth, and then spend the rest of her life paying for the bill; or head home after her 10-day stay and hope for the best. Neither of these two options seems appealing.
As I began to think about stories like this and hear about stories like this, I began to ask myself and imagine: Is there an alternative? Is there a way we could have the benefits of high-fidelity monitoring that we get with our trusted partners in the hospital while someone is at home living their daily life?
With that in mind, I encouraged people in my research group to partner with some clever material scientists, and all of us came together and brainstormed. And after a long process, we came up with a vision, an idea, of a wearable system that perhaps you could wear like a piece of jewelry or you could apply to yourself like a Band-Aid. And after many trials and tribulations and years of endeavors, we were able to come up with this flexible electronic patch that was manufactured using the same processes that they use to build computer chips, except the electronics are transferred from a semiconductor wafer onto a flexible material that can interface with the human body.
These systems are about the thickness of a human hair. They can measure the types of information that we want, things such as: bodily movement, bodily temperature, electrical rhythms of the body and so forth. We can also engineer these systems, so they can integrate energy sources, and can have wireless transmission capabilities.
So as we began to build these types of systems, we began to test them on ourselves in our research group. But in addition, we began to reach out to some of our clinical partners in San Diego, and test these on different patients in different clinical conditions, including moms-to-be like Jane.
Here is a picture of a pregnant woman in labor at our university hospital being monitored for her uterine contractions with the conventional belt. In addition, our flexible electronic patches are there. This picture demonstrates waveforms pertaining to the fetal heart rate, where the red corresponds to what was acquired with the conventional belts, and the blue corresponds to our estimates using our flexible electronic systems and our algorithms.
At this moment, we gave ourselves a big mental high five. Some of the things we had imagined were beginning to come to fruition, and we were actually seeing this in a clinical context.
But there was still a problem. The problem was, the way we manufactured these systems was very inefficient, had low yield and was very error-prone. In addition, as we talked to some of the nurses in the hospital, they encouraged us to make sure that our electronics worked with typical medical adhesives that are used in a hospital. We had an epiphany and said, "Wait a minute. Rather than just making them work with adhesives, let's integrate them into adhesives, and that could solve our manufacturing problem."
This picture that you see here is our ability to embed these censors inside of a piece of Scotch tape by simply peeling it off of a wafer. Ongoing work in our research group allows us to, in addition, embed integrated circuits into the flexible adhesives to do things like amplifying signals and digitizing them, processing them and encoding for wireless transmission. All of this integrated into the same medical adhesives that are used in the hospital.
So when we reached this point, we had some other challenges, from both an engineering as well as a usability perspective, to make sure that we could make it used practically.
In many digital health discussions, people believe in and embrace the idea that we can simply digitize the data, wirelessly transmit it, send it to the cloud, and in the cloud, we can extract meaningful information for interpretation. And indeed, you can do all of that, if you're not worried about some of the energy challenges. Think about Jane for a moment. She doesn't live in Palo Alto, nor does she live in Beverly Hills. What that means is, we have to be mindful about her data plan and how much it would cost for her to be sending out a continuous stream of data.
There's another challenge, one that not everyone in the medical profession is comfortable talking about. And that is that Jane does not have the most trust in the medical establishment. She, people like her, her ancestors, have not had the best experiences at the hands of doctors, hospitals or insurance companies. That means that we have to be mindful of questions of privacy. Jane might not feel that happy about all that data being processed in the cloud. And Jane cannot be fooled; she reads the news. She knows that if the federal government can be hacked, if the Fortune 500 can be hacked, so can her doctor.
And so with that in mind, we had an epiphany. We cannot outsmart all the hackers in the world, but perhaps we can present them a smaller target. What if we could actually, rather than have those algorithms that do data interpretation run in the cloud, what if we have those algorithms run on those small integrated circuits embedded into those adhesives?
And so when we integrate these things together, what this means is that now we can think about the future where someone like Jane can still go about living her normal daily life, she can be monitored, it can be done in a way where she doesn't have to get another job to pay her data plan, and we can also address some of her concerns about privacy.
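The data-plan argument behind moving interpretation onto the adhesive can be made concrete with a back-of-the-envelope comparison. This is an illustrative sketch only: every number below (sample size, sampling rate, summary size and frequency) is a hypothetical assumption, not a figure from the talk.

```python
# Compare the data volume of streaming raw sensor samples to the cloud
# versus running the interpretation algorithms on-device and sending
# only compact summaries. All constants are assumed for illustration.

RAW_SAMPLE_BYTES = 2          # one 16-bit ADC sample (assumed)
SAMPLE_RATE_HZ = 500          # assumed sampling rate for a biosignal
SECONDS_PER_DAY = 24 * 60 * 60

SUMMARY_BYTES = 16            # e.g. a heart-rate estimate + timestamp (assumed)
SUMMARIES_PER_DAY = 24 * 60   # one summary per minute (assumed)

# Raw streaming: every sample crosses the network.
raw_bytes_per_day = RAW_SAMPLE_BYTES * SAMPLE_RATE_HZ * SECONDS_PER_DAY

# On-device processing: only the interpreted result crosses the network.
edge_bytes_per_day = SUMMARY_BYTES * SUMMARIES_PER_DAY

print(f"raw streaming: {raw_bytes_per_day / 1e6:.1f} MB/day")
print(f"on-device processing: {edge_bytes_per_day / 1e3:.1f} kB/day")
```

Under these assumed numbers the difference is several orders of magnitude per day, which is why local processing eases both the data-plan burden and the attack surface: far less sensitive data ever leaves the patch.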
So at this point, we're feeling very good about ourselves. We've accomplished this, we've begun to address some of these questions about privacy and we feel like, pretty much the chapter is closed now. Everyone lived happily ever after, right? Well, not so fast.
(Laughter)
One of the things we have to remember, as I mentioned earlier, is that Jane does not have the most trust in the medical establishment. We have to remember that there are increasing and widening health disparities, and there's inequity in terms of proper care management. And so what that means is that this simple picture of Jane and her data -- even with her being comfortable being wirelessly transmitted to the cloud, letting a doctor intervene if necessary -- is not the whole story.
So what we're beginning to do is to think about ways to have trusted parties serve as intermediaries between people like Jane and her health care providers. For example, we've begun to partner with churches and to think about nurses that are church members, that come from that trusted community, as patient advocates and health coaches to people like Jane.
Another thing we have going for us is that insurance companies, increasingly, are attracted to some of these ideas. They're increasingly realizing that perhaps it's better to pay one dollar now for a wearable device and a health coach, rather than paying 10 dollars later, when that baby is born prematurely and ends up in the neonatal intensive care unit -- one of the most expensive parts of a hospital.
This has been a long learning process for us. This iterative process of breaking through and attacking one problem and not feeling totally comfortable, and identifying the next problem, has helped us go along this path of actually trying to not only innovate with this technology but make sure it can be used for people who perhaps need it the most.
Another learning lesson we've taken from this process that is very humbling, is that as technology progresses and advances at an accelerating rate, we have to remember that human beings are using this technology, and we have to be mindful that these human beings -- they have a face, they have a name and a life. And in the case of Jane, hopefully, two.
Thank you.
(Applause)
Saturday, October 22, 2016
ENV/GralInt-The first Argentine-made electric car
The following information is used for educational purposes only.
The first Argentine-made electric car
The Sero Electric is one of the big attractions on display, through Sunday, at ExpoBio.
Silent, ecological, compact and versatile: those are some of the adjectives that fit the Sero Electric, the new electric vehicle being presented at the ExpoBio fair in San Isidro. A real novelty for the Argentine market since, starting in December, it will be the first domestically made ecological car on sale.
Sustainable mobility has a prominent section at the Darwin Multiespacio, where cars, scooters and bicycles share the floor. There, the Sero Electric draws every gaze: a four-wheeler that needs neither gasoline nor natural gas to move around the city. At just 2.35 meters long, it is curious to imagine it maneuvering among larger cars.
This product is the first electric vehicle series-produced in the country: built in La Matanza, it weighs only 340 kilograms and can reach a top speed of 45 kilometers per hour. "It has an automatic transmission with forward, reverse and neutral. We build it in two versions, sedan and pickup.
Prices range between 160,000 and 210,000 pesos," says Pablo Naya, project director. The Sero has been in development for four years, drawing inspiration from several Italian designs. "Of the 150 vehicles we built, half have already been reserved. We will be launching it in December," Naya adds.
Meanwhile, Renault will market the Kangoo Z.E., already homologated in the country, between the end of this year and the beginning of 2017. A second vehicle, the Twizy, approved for closed premises, will follow in 2017.
One of the problems of electric mobility is battery performance. The Sero Electric charges in about six to seven hours. "We use gel-acid batteries, which give us a range of 65 to 70 kilometers," Naya explains.
Another hurdle is that the use of these vehicles on public roads has not yet been legislated, though it has in private settings such as industrial parks, shopping centers and gated communities. "That doesn't mean using them is prohibited. They fall into categories L6 and L7; all that's missing is a signature from the Transport Secretariat approving the L6e category, which covers these products. We ship cars to Brazil and Chile, which have given their OK, but in Argentina, not yet," Naya concludes.
ExpoBio also has plenty for lovers of two and three wheels. The firm E-Trotter mounts electric drive systems on traditional bicycle frames and is presenting a vintage model in an electric version. "We offer 25, 30 and 40 kilometers of range, with 180- and 250-watt motors. The cheapest build costs 17,000 pesos," says its president, Marcelo Arrúa.
Also on display are bicycles such as the E-Mov, with lithium batteries and 350- and 750-watt electric motors that charge in six hours. They cost between $22,900 and $32,900 and reach 32 km/h.
Among cargo bicycles, the Ruffus Cargo Bike stands out, with various capacities and accessories. And in motorcycles, electric models and tricycles from Lucky Lion can be seen and test-ridden. "The scooters reach nearly 50 km/h and see heavy use in the delivery market," concludes Omar García, the firm's vice president.
________________________________________________________
Organic food and the culture of recycling
An organic-products market and various stands showing the latest trends in sustainable design for clothing, home accessories and furniture are only part of the broad "green" offering at ExpoBio.
There will also be renewable-energy exhibits and the latest in architecture, in this case with the presentation of "La casa sustentable" (The Sustainable House).
Other activities include the first international congress on "Sustainability Without Borders," as well as the Reciclatec program, which encourages visitors to bring in their electronic waste for recycling.
For the little ones? ExpoBio Infantil, a free theme park.
The show includes a yoga and biodance festival, an environmental film series (run by the Green Film Fest) and, to round out the visit, a varied food offering at the Eco FoodTrucks.
Children under 12 get in free.
Venue: Darwin Multiespacio, Hipódromo de San Isidro. Address: Av. Márquez 504 and Av. Santa Fe 35 (San Isidro).
Hours: today, tomorrow and Sunday, 11 a.m. to 9 p.m.
Web: expobioargentina.com
Admission: $130.
_________________________________________
In his words
"Around the world, public agencies try to encourage people to use these vehicles. The same has to happen here. The government must give them advantages so that people get on board: there have to be campaigns. We are in talks with the authorities and working on the vehicle's categorization," says Pablo Naya.
Fuente: http://www.clarin.com/deautos/tecnologia/primer-auto-electrico-nacional_0_1672632842.html
ED/SOC/GralInt-"No to Operativo Aprender": teachers against exams
"No to Operativo Aprender": teachers against exams
Jorge Lanata
The flyer reads: "If you're a teacher, don't administer the test. If you're a parent, don't let your children be tested. If you're a student, you're under no obligation to sit it." The headline is "No to Operativo Aprender."
Somewhere between pathetic and funny, it could be summed up as: don't learn.
I live in a country where the teachers are against exams, which ultimately defines a stance on life.
It is no accident that Kirchnerism intervened INDEC; beyond some people's business dealings and the deal with creditors, data corner the official Narrative; knowing makes us responsible; afterwards, we have to do something with what we know.
Once the data are there, even ignoring them ceases to be a neutral attitude.
On the radio I tried, in vain, to get some teachers' union leader to explain the reasons for the opposition: they claimed to be against the "standardization" of the survey, and I even got to the point of asking one of them whether he was against statistics.
How do you avoid standardization in a sample of a million and a half people? Only their dogmatic opposition to exams, together with their irresponsibility, can have led them, during the "won decade," to pass students at official suggestion, as happened in the name of "inclusion."
In September 2014, to give just one example of the tone of the era, the teacher Cecilia Mariztani was sanctioned for giving her students low grades and was asked to change her grading methods. The creed was called the "New Academic Regime for the Primary Level," and its commandments included eliminating failing grades and pending subjects, making year-end remedial work optional, and admitting children who rejoin the system into the grade matching their biological age. If to children malnourished by the economy in their first two years we add children badly taught by the school system in their next fifteen, the result of the equation seems obvious: more welfare allowances and less freedom.
The cherry on top of the cynicism is that it is all done in the name of public education: last year public primary schools lost another twenty thousand students.
When it came time for arguments, the groups that campaigned against the exam brandished, perhaps without knowing it, the Bush theory of preventive war, the "doctrine of positive action" that justified the invasion of Afghanistan: the United States "believed" nuclear weapons existed, and so it invaded. Afterwards the weapons didn't exist, but by then it was too late.
The teachers maintained that the survey's hidden motive was to "privatize education" or, worse, to find out which schools performed worst in order to cut their budgets. Nothing suggests anything of the sort: neither public statements, nor private ones, nor the experience of recent years in the City administration. But like every dogma, it cannot bear to be put to the test: one acts, one prays, and that's that.
In one thing the teachers are consistent: they themselves cannot stand tests, and they know their own ratings are, for the most part, a fiction. Once a year, when the "Teaching Staff Rating Sheet" is filled out, everyone takes home an automatic ten. If in a class of thirty children three fail to pass the grade, that may be normal; but if fifteen fail, shouldn't the teacher who taught them have to repeat the year too?
The only reason I can think of for opposing an exam is a lack of confidence in getting good results: fear that it will show what we are.
Argentina, for example, has more university students than Brazil, but fewer graduates. In Brazil, half of the students who enroll graduate; in Argentina, one in four. In Brazil, as in the rest of the world, there is an entrance exam. Here, the middle class prefers to keep reciting a lie: that the lower class has access. In reality, the lower class subsidizes the middle class, through regressive taxes like the VAT, so that the kids can drop out of their degrees.
Rigor is not necessarily a matter of left or right: in Ecuador, you enter university by scoring above 555 points on an exam, but for Medicine and Teaching degrees the minimum is 800.
Brazil, Chile, Cuba, Ecuador, Colombia and Venezuela each administer their own exam.
But the worst part of the Day of Revolutionary Resistance to the Exam was none of the above.
-"Empowered choripanero (choripán-eater)," one of the surveyed students signed in his uneven handwriting.
-"What do you feel when you use the computer at school? Pride in social inclusion," wrote another.
One invented his own question and answers: -"Are you a descendant of Milagro Sala? Yes-No. Set her free, you bastards." -"How are you doing in subjects related to the Social Sciences? Better than the president, for sure," said one kid.
-"What activities did you do in your free time, outside school hours? I went to demonstrations against this shitty government," wrote another student.
Their teachers must be proud.
Source: www.clarin.com
Sunday, October 16, 2016
¡FELIZ DÍA DE LA MADRE! (HAPPY MOTHER'S DAY!)
To ALL Moms (biological, of the heart, of life and much more):
with this simple acrostic I want to dedicate a brief tribute to you,
thank you for the miracle of giving life (with a special remembrance
here of my own mom), and wish you a beautiful day with your children
and loved ones. Great affection to all Moms on their day of celebration
and remembrance, from those of us who treasure wonderful moments
of our lives with them. C.M.
Source: Google Images / Words by the author/blogger, Clara Moras.
Firme (steadfast)
Estoica (stoic)
Leal (loyal)
Ingeniosa (resourceful)
Zafa (free, unharmed)
Dedicada (dedicated)
Íntegra (upright)
Amorosa (loving)
Dulce (sweet)
Equilibrada (balanced)
Lista (clever)
Atenta (attentive)
Maravillosa (wonderful)
Apasionada (passionate)
Divertida (fun)
Responsable (responsible)
Exigente (demanding)
Saturday, October 15, 2016
SOCMD/GralInt-TED Talks-Ione Wells: How we talk about sexual assault online
Filmed June 2016 at TEDSummit
Ione Wells: How we talk about sexual assault online
We need a more considered approach to using social media for social justice, says writer and activist Ione Wells. After she was the victim of an assault in London, Wells published a letter to her attacker in a student newspaper that went viral and sparked the #NotGuilty campaign against sexual violence and victim-blaming. In this moving talk, she describes how sharing her personal story gave hope to others and delivers a powerful message against the culture of online shaming.
Transcript:
It was April, last year. I was on an evening out with friends to celebrate one of their birthdays. We hadn't been all together for a couple of weeks; it was a perfect evening, as we were all reunited.
At the end of the evening, I caught the last underground train back to the other side of London. The journey was smooth. I got back to my local station and I began the 10-minute walk home. As I turned the corner onto my street, my house in sight up ahead, I heard footsteps behind me that seemed to have approached out of nowhere and were picking up pace. Before I had time to process what was happening, a hand was clapped around my mouth so that I could not breathe, and the young man behind me dragged me to the ground, beat my head repeatedly against the pavement until my face began to bleed, kicking me in the back and neck while he began to assault me, ripping off my clothes and telling me to "shut up," as I struggled to cry for help. With each smack of my head to the concrete ground, a question echoed through my mind that still haunts me today: "Is this going to be how it all ends?"
Little could I have realized, I'd been followed the whole way from the moment I left the station. And hours later, I was standing topless and barelegged in front of the police, having the cuts and bruises on my naked body photographed for forensic evidence.
Now, there are few words to describe the all-consuming feelings of vulnerability, shame, upset and injustice that I was ridden with in that moment and for the weeks to come. But wanting to find a way to condense these feelings into something ordered that I could work through, I decided to do what felt most natural to me: I wrote about it.
It started out as a cathartic exercise. I wrote a letter to my assaulter, humanizing him as "you," to identify him as part of the very community that he had so violently abused that night.
Stressing the tidal-wave effect of his actions, I wrote: "Did you ever think of the people in your life? I don't know who the people in your life are. I don't know anything about you. But I do know this: you did not just attack me that night. I'm a daughter, I'm a friend, I'm a sister, I'm a pupil, I'm a cousin, I'm a niece, I'm a neighbor; I'm the employee who served everyone coffee in the café under the railway. And all the people who form these relations to me make up my community. And you assaulted every single one of them. You violated the truth that I will never cease to fight for, and which all of these people represent: that there are infinitely more good people in the world than bad."
But, determined not to let this one incident make me lose faith in the solidarity in my community or humanity as a whole, I recalled the 7/7 terrorist bombings in July 2005 on London transport, and how the mayor of London at the time, and indeed my own parents, had insisted that we all get back on the tubes the next day, so we wouldn't be defined or changed by those that had made us feel unsafe.
I told my attacker, "You've carried out your attack, but now I'm getting back on my tube. My community will not feel we are unsafe walking home after dark. We will get on the last tubes home, and we will walk up our streets alone, because we will not ingrain or submit to the idea that we are putting ourselves in danger in doing so. We will continue to come together, like an army, when any member of our community is threatened. And this is a fight you will not win."
At the time of writing this letter --
(Applause)
Thank you.
(Applause)
At the time of writing this letter, I was studying for my exams in Oxford, and I was working on the local student paper there. Despite being lucky enough to have friends and family supporting me, it was an isolating time. I didn't know anyone who'd been through this before; at least I didn't think I did. I'd read news reports, statistics, and knew how common sexual assault was, yet I couldn't actually name a single person that I'd heard speak out about an experience of this kind before.
So in a somewhat spontaneous decision, I decided that I would publish my letter in the student paper, hoping to reach out to others in Oxford that might have had a similar experience and be feeling the same way. At the end of the letter, I asked others to write in with their experiences under the hashtag, "#NotGuilty," to emphasize that survivors of assault could express themselves without feeling shame or guilt about what happened to them -- to show that we could all stand up to sexual assault.
What I never anticipated is that almost overnight, this published letter would go viral. Soon, we were receiving hundreds of stories from men and women across the world, which we began to publish on a website I set up. And the hashtag became a campaign.
There was an Australian mother in her 40s who described how, on an evening out, she was followed to the bathroom by a man who went to repeatedly grab her crotch. There was a man in the Netherlands who described how he was date-raped on a visit to London and wasn't taken seriously by anyone he reported his case to. I had personal Facebook messages from people in India and South America, asking, how can we bring the message of the campaign there? One of the first contributions we had was from a woman called [Nikki], who described growing up being molested by her own father. And I had friends open up to me about experiences, ranging from those that happened last week to those that happened years ago, that I'd had no idea about.
And the more we started to receive these messages, the more we also started to receive messages of hope -- people feeling empowered by this community of voices standing up to sexual assault and victim-blaming. One woman called Olivia, after describing how she was attacked by someone she had trusted and cared about for a long time, said, "I've read many of the stories posted here, and I feel hopeful that if so many women can move forward, then I can, too. I've been inspired by many, and I hope I can be as strong as them someday. I'm sure I will."
People around the world began tweeting under this hashtag, and the letter was republished and covered by the national press, as well as being translated into several other languages worldwide.
But something struck me about the media attention that this letter was attracting. For something to be front-page news, given the word "news" itself, we can assume it must be something new or something surprising. And yet sexual assault is not something new. Sexual assault, along with other kinds of injustices, is reported in the media all the time. But through the campaign, these injustices were framed as not just news stories, they were firsthand experiences that had affected real people, who were creating, with the solidarity of others, what they needed and had previously lacked: a platform to speak out, the reassurance they weren't alone or to blame for what happened to them and open discussions that would help to reduce stigma around the issue. The voices of those directly affected were at the forefront of the story -- not the voices of journalists or commentators on social media. And that's why the story was news.
We live in an incredibly interconnected world with the proliferation of social media, which is of course a fantastic resource for igniting social change. But it's also made us increasingly reactive, from the smallest annoyances of, "Oh, my train's been delayed," to the greatest injustices of war, genocides, terrorist attacks. Our default response has become to leap to react to any kind of grievance by tweeting, Facebooking, hashtagging -- anything to show others that we, too, have reacted.
The problem with reacting in this manner en masse is it can sometimes mean that we don't actually react at all, not in the sense of actually doing anything, anyway. It might make ourselves feel better, like we've contributed to a group mourning or outrage, but it doesn't actually change anything. And what's more, it can sometimes drown out the voices of those directly affected by the injustice, whose needs must be heard.
Worrying, too, is the tendency for some reactions to injustice to build even more walls, being quick to point fingers with the hope of providing easy solutions to complex problems. One British tabloid, on the publication of my letter, branded a headline stating, "Oxford Student Launches Online Campaign to Shame Attacker." But the campaign never meant to shame anyone. It meant to let people speak and to make others listen. Divisive Twitter trolls were quick to create even more injustice, commenting on my attacker's ethnicity or class to push their own prejudiced agendas. And some even accused me of feigning the whole thing to push, and I quote, my "feminist agenda of man-hating."
(Laughter)
I know, right? As if I'm going to be like, "Hey guys! Sorry I can't make it, I'm busy trying to hate the entire male population by the time I'm 30."
(Laughter)
Now, I'm almost sure that these people wouldn't say the things they say in person. But it's as if, because they might be behind a screen in the comfort of their own home while on social media, people forget that what they're doing is a public act -- that other people will be reading it and be affected by it.
Returning to my analogy of getting back on our trains, another main concern I have about this noise that escalates from our online responses to injustice is that it can very easily slip into portraying us as the affected party, which can lead to a sense of defeatism, a kind of mental barrier to seeing any opportunity for positivity or change after a negative situation.
A couple of months before the campaign started or any of this happened to me, I went to a TEDx event in Oxford, and I saw Zelda la Grange speak, the former private secretary to Nelson Mandela. One of the stories she told really struck me. She spoke of when Mandela was taken to court by the South African Rugby Union after he commissioned an inquiry into sports affairs. In the courtroom, he went up to the South African Rugby Union's lawyers, shook them by the hand and conversed with them, each in their own language. And Zelda wanted to protest, saying they had no right to his respect after this injustice they had caused him.
He turned to her and said, "You must never allow the enemy to determine the grounds for battle."
At the time of hearing these words, I didn't really know why they were so important, but I felt they were, and I wrote them down in a notebook I had on me. But I've thought about this line a lot ever since.
Revenge, or the expression of hatred towards those who have done us injustice may feel like a human instinct in the face of wrong, but we need to break out of these cycles if we are to hope to transform negative events of injustice into positive social change. To do otherwise continues to let the enemy determine the grounds for battle, creates a binary, where we who have suffered become the affected, pitted against them, the perpetrators. And just like we got back on our tubes, we can't let our platforms for interconnectivity and community be the places that we settle for defeat.
But I don't want to discourage a social media response, because I owe the development of the #NotGuilty campaign almost entirely to social media. But I do want to encourage a more considered approach to the way we use it to respond to injustice.
The start, I think, is to ask ourselves two things. Firstly: Why do I feel this injustice? In my case, there were several answers to this. Someone had hurt me and those who I loved, under the assumption they wouldn't have to be held to account or recognize the damage they had caused. Not only that, but thousands of men and women suffer every day from sexual abuse, often in silence, yet it's still a problem we don't give the same airtime to as other issues. It's still an issue many people blame victims for.
Next, ask yourself: How, in recognizing these reasons, could I go about reversing them? With us, this was holding my attacker to account -- and many others. It was calling them out on the effect they had caused. It was giving airtime to the issue of sexual assault, opening up discussions amongst friends, amongst families, in the media that had been closed for too long, and stressing that victims shouldn't feel to blame for what happened to them. We might still have a long way to go in solving this problem entirely. But in this way, we can begin to use social media as an active tool for social justice, as a tool to educate, to stimulate dialogues, to make those in positions of authority aware of an issue by listening to those directly affected by it.
Because sometimes these questions don't have easy answers. In fact, they rarely do. But this doesn't mean we still can't give them a considered response. In situations where you can't go about thinking how you'd reverse this feeling of injustice, you can still think, maybe not what you can do, but what you can not do. You can not build further walls by fighting injustice with more prejudice, more hatred. You can not speak over those directly affected by an injustice. And you can not react to injustice, only to forget about it the next day, just because the rest of Twitter has moved on.
Sometimes not reacting instantly is, ironically, the best immediate course of action we can take. Because we might be angry, upset and energized by injustice, but let's consider our responses. Let us hold people to account, without descending into a culture that thrives off shaming and injustice ourselves. Let us remember that distinction, so often forgotten by internet users, between criticism and insult. Let us not forget to think before we speak, just because we might have a screen in front of us. And when we create noise on social media, let it not drown out the needs of those affected, but instead let it amplify their voices, so the internet becomes a place where you're not the exception if you speak out about something that has actually happened to you.
All these considered approaches to injustice evoke the very keystones on which the internet was built: to network, to have signal, to connect -- all these terms that imply bringing people together, not pushing people apart.
Because if you look up the word "justice" in the dictionary, before punishment, before administration of law or judicial authority, you get: "The maintenance of what is right." And I think there are a few things more "right" in this world than bringing people together, than unions. And if we allow social media to deliver that, then it can deliver a very powerful form of justice, indeed.
Thank you very much.
(Applause)
Filmed June 2016 at TEDSummit
Ione Wells: How we talk about sexual assault online
We need a more considered approach to using social media for social justice, says writer and activist Ione Wells. After she was the victim of an assault in London, Wells published a letter to her attacker in a student newspaper that went viral and sparked the #NotGuilty campaign against sexual violence and victim-blaming. In this moving talk, she describes how sharing her personal story gave hope to others and delivers a powerful message against the culture of online shaming.
Transcript:
It was April, last year. I was on an evening out with friends to celebrate one of their birthdays. We hadn't been all together for a couple of weeks; it was a perfect evening, as we were all reunited.
At the end of the evening, I caught the last underground train back to the other side of London. The journey was smooth. I got back to my local station and I began the 10-minute walk home. As I turned the corner onto my street, my house in sight up ahead, I heard footsteps behind me that seemed to have approached out of nowhere and were picking up pace. Before I had time to process what was happening, a hand was clapped around my mouth so that I could not breathe, and the young man behind me dragged me to the ground, beat my head repeatedly against the pavement until my face began to bleed, kicking me in the back and neck while he began to assault me, ripping off my clothes and telling me to "shut up," as I struggled to cry for help. With each smack of my head to the concrete ground, a question echoed through my mind that still haunts me today: "Is this going to be how it all ends?"
Little could I have realized, I'd been followed the whole way from the moment I left the station. And hours later, I was standing topless and barelegged in front of the police, having the cuts and bruises on my naked body photographed for forensic evidence.
Now, there are few words to describe the all-consuming feelings of vulnerability, shame, upset and injustice that I was ridden with in that moment and for the weeks to come. But wanting to find a way to condense these feelings into something ordered that I could work through, I decided to do what felt most natural to me: I wrote about it.
It started out as a cathartic exercise. I wrote a letter to my assaulter, humanizing him as "you," to identify him as part of the very community that he had so violently abused that night.
Stressing the tidal-wave effect of his actions, I wrote: "Did you ever think of the people in your life? I don't know who the people in your life are. I don't know anything about you. But I do know this: you did not just attack me that night. I'm a daughter, I'm a friend, I'm a sister, I'm a pupil, I'm a cousin, I'm a niece, I'm a neighbor; I'm the employee who served everyone coffee in the café under the railway. And all the people who form these relations to me make up my community. And you assaulted every single one of them. You violated the truth that I will never cease to fight for, and which all of these people represent: that there are infinitely more good people in the world than bad."
But, determined not to let this one incident make me lose faith in the solidarity in my community or humanity as a whole, I recalled the 7/7 terrorist bombings in July 2005 on London transport, and how the mayor of London at the time, and indeed my own parents, had insisted that we all get back on the tubes the next day, so we wouldn't be defined or changed by those that had made us feel unsafe.
I told my attacker, "You've carried out your attack, but now I'm getting back on my tube. My community will not feel we are unsafe walking home after dark. We will get on the last tubes home, and we will walk up our streets alone, because we will not ingrain or submit to the idea that we are putting ourselves in danger in doing so. We will continue to come together, like an army, when any member of our community is threatened. And this is a fight you will not win."
At the time of writing this letter --
(Applause)
Thank you.
(Applause)
At the time of writing this letter, I was studying for my exams in Oxford, and I was working on the local student paper there. Despite being lucky enough to have friends and family supporting me, it was an isolating time. I didn't know anyone who'd been through this before; at least I didn't think I did. I'd read news reports, statistics, and knew how common sexual assault was, yet I couldn't actually name a single person that I'd heard speak out about an experience of this kind before.
So in a somewhat spontaneous decision, I decided that I would publish my letter in the student paper, hoping to reach out to others in Oxford that might have had a similar experience and be feeling the same way. At the end of the letter, I asked others to write in with their experiences under the hashtag, "#NotGuilty," to emphasize that survivors of assault could express themselves without feeling shame or guilt about what happened to them -- to show that we could all stand up to sexual assault.
What I never anticipated is that almost overnight, this published letter would go viral. Soon, we were receiving hundreds of stories from men and women across the world, which we began to publish on a website I set up. And the hashtag became a campaign.
There was an Australian mother in her 40s who described how on an evening out, she was followed to the bathroom by a man who went to repeatedly grab her crotch. There was a man in the Netherlands who described how he was date-raped on a visit to London and wasn't taken seriously by anyone he reported his case to. I had personal Facebook messages from people in India and South America, saying, how can we bring the message of the campaign there? One of the first contributions we had was from a woman called [Nikki], who described growing up, being molested by her own father. And I had friends open up to me about experiences ranging from those that happened last week to those that happened years ago, that I'd had no idea about.
And the more we started to receive these messages, the more we also started to receive messages of hope -- people feeling empowered by this community of voices standing up to sexual assault and victim-blaming. One woman called Olivia, after describing how she was attacked by someone she had trusted and cared about for a long time, said, "I've read many of the stories posted here, and I feel hopeful that if so many women can move forward, then I can, too. I've been inspired by many, and I hope I can be as strong as them someday. I'm sure I will."
People around the world began tweeting under this hashtag, and the letter was republished and covered by the national press, as well as being translated into several other languages worldwide.
But something struck me about the media attention that this letter was attracting. For something to be front-page news, given the word "news" itself, we can assume it must be something new or something surprising. And yet sexual assault is not something new. Sexual assault, along with other kinds of injustices, is reported in the media all the time. But through the campaign, these injustices were framed as not just news stories, they were firsthand experiences that had affected real people, who were creating, with the solidarity of others, what they needed and had previously lacked: a platform to speak out, the reassurance they weren't alone or to blame for what happened to them and open discussions that would help to reduce stigma around the issue. The voices of those directly affected were at the forefront of the story -- not the voices of journalists or commentators on social media. And that's why the story was news.
We live in an incredibly interconnected world with the proliferation of social media, which is of course a fantastic resource for igniting social change. But it's also made us increasingly reactive, from the smallest annoyances of, "Oh, my train's been delayed," to the greatest injustices of war, genocides, terrorist attacks. Our default response has become to leap to react to any kind of grievance by tweeting, Facebooking, hashtagging -- anything to show others that we, too, have reacted.
The problem with reacting in this manner en masse is it can sometimes mean that we don't actually react at all, not in the sense of actually doing anything, anyway. It might make ourselves feel better, like we've contributed to a group mourning or outrage, but it doesn't actually change anything. And what's more, it can sometimes drown out the voices of those directly affected by the injustice, whose needs must be heard.
Worrying, too, is the tendency for some reactions to injustice to build even more walls, being quick to point fingers with the hope of providing easy solutions to complex problems. One British tabloid, on the publication of my letter, branded a headline stating, "Oxford Student Launches Online Campaign to Shame Attacker." But the campaign never meant to shame anyone. It meant to let people speak and to make others listen. Divisive Twitter trolls were quick to create even more injustice, commenting on my attacker's ethnicity or class to push their own prejudiced agendas. And some even accused me of feigning the whole thing to push, and I quote, my "feminist agenda of man-hating."
(Laughter)
I know, right? As if I'm going to be like, "Hey guys! Sorry I can't make it, I'm busy trying to hate the entire male population by the time I'm 30."
(Laughter)
Now, I'm almost sure that these people wouldn't say the things they say in person. But it's as if because they might be behind a screen, in the comfort of their own home when on social media, people forget that what they're doing is a public act -- that other people will be reading it and be affected by it.
Returning to my analogy of getting back on our trains, another main concern I have about this noise that escalates from our online responses to injustice is that it can very easily slip into portraying us as the affected party, which can lead to a sense of defeatism, a kind of mental barrier to seeing any opportunity for positivity or change after a negative situation.
A couple of months before the campaign started or any of this happened to me, I went to a TEDx event in Oxford, and I saw Zelda la Grange speak, the former private secretary to Nelson Mandela. One of the stories she told really struck me. She spoke of when Mandela was taken to court by the South African Rugby Union after he commissioned an inquiry into sports affairs. In the courtroom, he went up to the South African Rugby Union's lawyers, shook them by the hand and conversed with them, each in their own language. And Zelda wanted to protest, saying they had no right to his respect after this injustice they had caused him.
He turned to her and said, "You must never allow the enemy to determine the grounds for battle."
At the time of hearing these words, I didn't really know why they were so important, but I felt they were, and I wrote them down in a notebook I had on me. But I've thought about this line a lot ever since.
Revenge, or the expression of hatred towards those who have done us injustice may feel like a human instinct in the face of wrong, but we need to break out of these cycles if we are to hope to transform negative events of injustice into positive social change. To do otherwise continues to let the enemy determine the grounds for battle, creates a binary, where we who have suffered become the affected, pitted against them, the perpetrators. And just like we got back on our tubes, we can't let our platforms for interconnectivity and community be the places that we settle for defeat.
But I don't want to discourage a social media response, because I owe the development of the #NotGuilty campaign almost entirely to social media. But I do want to encourage a more considered approach to the way we use it to respond to injustice.
The start, I think, is to ask ourselves two things. Firstly: Why do I feel this injustice? In my case, there were several answers to this. Someone had hurt me and those I loved, under the assumption they wouldn't have to be held to account or recognize the damage they had caused. Not only that, but thousands of men and women suffer every day from sexual abuse, often in silence, yet it's still a problem we don't give the same airtime to as other issues. It's still an issue many people blame victims for.
Next, ask yourself: How, in recognizing these reasons, could I go about reversing them? With us, this was holding my attacker to account -- and many others. It was calling them out on the effect they had caused. It was giving airtime to the issue of sexual assault, opening up discussions amongst friends, amongst families, in the media that had been closed for too long, and stressing that victims shouldn't feel to blame for what happened to them. We might still have a long way to go in solving this problem entirely. But in this way, we can begin to use social media as an active tool for social justice, as a tool to educate, to stimulate dialogues, to make those in positions of authority aware of an issue by listening to those directly affected by it.
Because sometimes these questions don't have easy answers. In fact, they rarely do. But this doesn't mean we still can't give them a considered response. In situations where you can't go about thinking how you'd reverse this feeling of injustice, you can still think, maybe not what you can do, but what you can not do. You can not build further walls by fighting injustice with more prejudice, more hatred. You can not speak over those directly affected by an injustice. And you can not react to injustice, only to forget about it the next day, just because the rest of Twitter has moved on.
Sometimes not reacting instantly is, ironically, the best immediate course of action we can take. Because we might be angry, upset and energized by injustice, but let's consider our responses. Let us hold people to account, without descending into a culture that thrives off shaming and injustice ourselves. Let us remember that distinction, so often forgotten by internet users, between criticism and insult. Let us not forget to think before we speak, just because we might have a screen in front of us. And when we create noise on social media, let it not drown out the needs of those affected, but instead let it amplify their voices, so the internet becomes a place where you're not the exception if you speak out about something that has actually happened to you.
All these considered approaches to injustice evoke the very keystones on which the internet was built: to network, to have signal, to connect -- all these terms that imply bringing people together, not pushing people apart.
Because if you look up the word "justice" in the dictionary, before punishment, before administration of law or judicial authority, you get: "The maintenance of what is right." And I think there are few things more "right" in this world than bringing people together, than unions. And if we allow social media to deliver that, then it can deliver a very powerful form of justice, indeed.
Thank you very much.
(Applause)