Monday, April 29, 2019

OPINION | EDITORIALS-Coast guards, custodians of our sovereignty

The following information is used for educational purposes only.


OPINION | EDITORIALS

Coast guards, custodians of our sovereignty

Argentines should do far more to value and equip those whose tireless, irreplaceable work protects our maritime patrimony

April 29, 2019

After the United Nations (UN) Commission on the Limits of the Continental Shelf (Copla) endorsed the request Argentina filed in 2009, the active and sustained efforts of successive administrations allowed us, in 2017, to add 1,633,000 square kilometers to our continental shelf, securing one hundred percent of our claim. That shelf, open to international fishing, extends beyond the so-called Exclusive Economic Zone (EEZ), which runs from the baseline of our coast out to the 200-mile mark, that is, almost 400 kilometers from shore, and within which each country holds sovereignty over the natural resources of the seabed and subsoil, most of them still undiscovered.

In the absence of high-seas regulations covering biological cycles, seasons, or sizes, illegal fishing on the shelf proceeds tirelessly through the night; so much so that people speak of veritable "floating cities", given the number of ships' lights visible there. The incandescent light serves to attract mollusks. An estimated 700 fishing vessels work the area each year; many even switch off their systems so as not to be spotted in violation, visually or by radar, but they cannot escape satellite detection technology. Many of them are Chinese vessels of the types known as jiggers, trawlers, or longliners, as the case may be, employing thousands of foreign sailors who live at sea for months on end and who have been reported to work in subhuman conditions, close to slavery. In pursuit of the best fishing, they constantly breach the limit of our EEZ and violate the federal fisheries law, and our country lacks the means to stop them.



The Prefectura (coast guard) at work aboard the vessel Prefecto Derbes, preventing illegal fishing between Puerto Madryn and mile 200. Source: LA NACION

Just five Argentine vessels, on campaigns of 15 to 40 days, carry out the round-the-clock patrols that detect and detain the ships entering our EEZ from the second-largest zone of unregulated, unreported fishing on the planet, where activity grows by 5 percent each year. Four soldiers of the Albatros squadron, armed with light automatic rifles, serve with each crew.

Our sentinels of the sea must act preemptively and avoid alerting the Chinese, Korean, Russian, or Spanish fishing vessels in order to capture them. They locate the offender by radar, identify it, and proceed to make radio contact, also sending light and sound signals. In one chase that lasted a day and a half, after more than 300 calls to the Chinese vessel Yan Yuan Yu and before it could make good its escape, the ship was captured and sunk on March 14, its crew having to be rescued from the water. Since 2000, the Prefectura has captured more than 70 vessels in our EEZ.

The coveted fishery wealth includes various kinds of hake, mackerel, prawns, croaker, flounder, ling, grouper, school shark, and smooth-hound, among many other species that make up the tons of cargo the vessels illegally haul back to their home countries. Waters populated by commercially valuable species are appetizing territory for indiscriminate fishing, far removed from sustainable management and respect for biodiversity, given the impossibility of enforcing the international rules in force. The Organization for the Protection of the Resources of the Southwest Atlantic (Opras) is an NGO promoting the changes that geopolitics too often fails to drive. We cannot neglect our resources or leave the national fishing industry unprotected without demanding cooperation agreements that serve to regulate fishing and prevent a dramatic depredation of resources.

The presence of our selfless, courageous coast guards knows neither schedules nor weather. Proud as they are to defend and protect our patrimony, we Argentines must value them, equip them far better, and thank them for their irreplaceable work on that frontier, so that the wealth they guard does not slip away into the hands of foreign fleets that violate our sovereignty. Much remains to be done, across many sectors, to support this feat.


Fuente:https://www.lanacion.com.ar/opinion/editoriales/guardacostas-custodios-de-nuestra-soberania-nid2242539

DIGINNOV/GINT-The First Law of Digital Innovation, by George Westerman

The following information is used for educational purposes only.


The First Law of Digital Innovation


George Westerman





April 08, 2019


By now, most of us have heard of Moore’s law. The “law,” coined more than 40 years ago by Intel cofounder Gordon Moore, has helped to shape the pace of innovation for decades. Originally focusing on the computing power of semiconductor chips, Moore stated in 1975 that the transistor density doubles roughly every two years. As technologies and computing architectures have changed, the doubling time and the performance measure have changed, but the nature of the law has not. Computing power grows exponentially. This has been true for digital technologies in general, from processors to networking to DNA sequencing. While people are now predicting the end of Moore’s law, exponential growth in computing power continues as new technologies and architectures emerge.

The relentless march of technology is very good for companies that sell technology, and for the analysts, journalists, and consultants who sell technology advice to managers. But it’s not always so good for the managers themselves. This is because Moore’s law is only part of the equation for digital innovation. And it’s a smaller part than many people imagine.

I’d like to propose a new law. It’s one I know to be true, and one that too many people forget. We can call it the first law of digital transformation. Or we can just call it George’s law. It goes like this:

Technology changes quickly, but organizations change much more slowly.

This law is the reason that digital transformation is more of a leadership challenge than a technical one. Large organizations are far more complex to manage and change than technologies. They have more moving parts, and those parts, being human, are much harder to control. Technology systems largely act according to their instructions, and technology components largely do what they are designed to do. But human systems are very different. While it’s relatively straightforward to edit a software component or replace one element with another, it’s nowhere near as easy to change an organization.

Organizations are a negotiated equilibrium between the needs of owners (or leaders) and the needs of individuals. This equilibrium is difficult to attain and even more difficult to change. Just think of the last time you launched a major new transformation in your business. Or when your boss did. Simply saying that you’re transforming doesn’t make it so. You need to convince people that they need to change, and then you need to help them change in the right direction. If you do it right, you get them excited enough that they start to suggest ways to make even better changes.

Help Your Organization Transform

Because digital transformation is more of a leadership challenge than a technical one, it’s essential to focus managerial attention on people’s desire to change and the organization’s ability to change. You want to convert digital transformation from a project into a capability — from a time-limited investment into an enduring digital innovation factory. To do so, focus on three major areas:

Change the vision. Most people don’t like to change. If you’re driving change, you need to help others see the benefits. That’s where a transformative vision comes in. Help people see a reason to change, and how they can play a role in making it happen. Without a clear and compelling vision, people will provide, at best, only lukewarm support. Most will choose to ignore the change, hoping it will go away. Some may even choose to fight it, either overtly or, more often, covertly.

Great visions paint a clear picture of a better company — one that is better for customers and employees. You need to help people understand why the new vision is better than the old way of working. And you need to help employees understand how they fit in the transition process and the future state. If you’ve set the stage properly, they may even start suggesting ways to make the vision a reality.

For example, when DBS Bank had the lowest customer satisfaction ratings among Singapore’s top five banks, the CEO and CIO set out to radically change the situation. They created a vision to “make banking joyful,” and promoted the vision widely. They set a goal to save customers 100 million hours of wait time by fixing processes and introducing features that would help at points where customers traditionally had to wait. And they opened channels so that employees felt empowered to suggest any innovation that would reduce wait time. A few years later, DBS had saved more than 200 million hours of customer wait time and was on its way to becoming the highest-rated bank by customer satisfaction.

Change the legacy platform. While technology doesn’t create value on its own, it can certainly inhibit value when done poorly. In many organizations, the legacy platforms — messy business processes and tangled webs of outdated and intertwined IT systems — are the chief source of inertia and cost for digital transformation. It’s tough to create a unified customer experience, for instance, if your systems don’t provide a unified view of the customer. It’s tough to launch new analytics-based business models if your data is messy or your processes are not integrated.

Thus, to make new digital innovations successful, companies must often invest in fixing their older technologies. This can be very tough. It often requires launching a new platform that can handle the requirements of digital while linking to the old systems. And it usually requires cleaning up chaotic systems spaghetti that can slow changes and increase risk. Building a data warehouse or data lake can be a decent short-term solution, but at some point you’ll need to address issues in the legacy platform itself.

Nearly every digital master we studied — from Indian manufacturer Asian Paints to Australian-British mining company Rio Tinto to DBS — invested in a legacy systems cleanup either before or during other waves of transformation. Fixing the legacy platform creates business processes that are leaner and faster than before, and generates options to power wave after wave of new digital innovation.

Change the way the organization collaborates. The difficulties GE faced in transforming to a digitally powered internet of things dynamo weren’t due to technology. GE developed deep expertise in IoT and machine learning, and launched some fascinating new ideas such as digital twins. However, GE was not able to solve the problem of working across the silos between its digital and traditional units. This, among other organizational challenges, impeded product development. It also deeply challenged the selling process. In 2017, with digital sales growing too slowly, sales in traditional units lagging, and digital investment continuing at a high rate, CEO Jeffrey Immelt resigned. The company has struggled to regain its former levels of profitability and growth. (Immelt recently reflected on the trials of transformation in MIT Sloan Management Review.)

Organizational challenges with digital transformation are not unique to GE, or even to this year. They happen in every industry and have happened for years, even back to the early days of e-commerce. In many companies, traditional and digital staffs do not work well together. Incentive issues can cause people in traditional units to focus more on themselves than on digital innovations or valuable digital/traditional hybrids. While a powerful vision can start to build momentum, organization and incentive issues can stop transformation in its tracks. Fixing these organizational issues takes repeated communication, clear incentives, and sometimes, visible action to discipline people who are working in the wrong direction.

An important internal collaboration to address is between IT and the rest of the business. Early in our digital transformation research, many leaders argued that technology was moving too quickly for their IT units to keep up, and they chose to pursue digital without including their IT leaders. That was a mistake. The best companies have found ways for business and IT leaders to work closely together in driving transformation. IT units became faster and more business savvy, digital units found ways to work with IT and not around it, and business leaders started including both in strategic decision-making. Even when companies built a separate digital division, the IT and digital leaders in the best companies collaborated smoothly to drive transformation.

In my early days as an engineer, I used to joke that organizations would work so much better if we could just remove the people. After decades of research and practice in organizations, I’m much happier with having people in the organizations. People make organizations go. But they can also make organizations go too slowly. Or in the wrong direction.

This doesn’t have to be the case. People don’t have to be a source of inertia. In fast-moving born-digital companies that we all know, people are a source of continual innovation and energy. They know where the company is going, have a healthy dissatisfaction with the way things work, and constantly suggest ways to make things better. This can be the case in every company, whether born digital or not. But it takes more than just words — and more than just cool new technology — to create the kind of digital transformation your company needs.

Here are your jobs, leaders: Create a compelling vision of the digitally powered future. Foster conversations so that people can understand the vision and what it means for them. Clean up legacy situations — information systems, work rules, incentives, management practices, or dysfunctional functions — that slow or prevent change. Start some pilots to build momentum. Create conversations to spur different parts of the company to use, and build on, the innovative work of others. You’ll be creating a capability to transform, not just a set of transformation projects. When that happens, digital transformation never stops. Instead, it becomes an ongoing process in which employees and their leaders continually identify new ways to change the company for the better.



ABOUT THE AUTHOR

George Westerman (@gwesterman) is a senior lecturer with the MIT Sloan School of Management, faculty director for workforce learning in the MIT Jameel World Education Laboratory, and coauthor of the award-winning book Leading Digital: Turning Technology Into Business Transformation, published by Harvard Business Review Press.


Source:https://sloanreview.mit.edu/article/the-first-law-of-digital-innovation/

Sunday, April 28, 2019

GINT/COM/SOC-Neighbors in Villa Devoto set up a vegetable garden on the neighborhood's sidewalks, by Mariano Jasovich

The following information is used for educational purposes only.



Neighbors in Villa Devoto set up a vegetable garden on the neighborhood's sidewalks

Mariano Jasovich

April 22, 2019


A group of neighbors decided to bring tables and chairs back out onto their sidewalks, as porteños did in decades past. It is happening in Villa Devoto, in an area where the city begins to blur into the suburbs: streets with little traffic, wide sidewalks, quiet passageways, and century-old trees. But that is not all. The neighbors who gather at the corner of Nazarre and Marcos Paz decided to form Veredas Vivas, a community that has filled the area with native and edible plants.

Pablo Pistocchi, one of the group's driving forces, tells LA NACIÓN that it all began about four years ago, when he planted some climbing vines at his front door. "Neighbors started stopping by, asking about the plants or praising them," he explains. "And that is how the network took shape. I have always thought that, against insecurity, it is better to go out onto the sidewalk than to hide behind a barred window or be filmed by the City's cameras."




Neighbors take part in the cultivation. Source: LA NACION - Credit: Pablo Pidal

They meet every weekend at dusk. There is mate brewed from a kettle, as in the old days, plus homemade bread and cakes brought by Veredas Vivas members. There are stories of retired neighbors who have "come back to life" thanks to the project, such as an 86-year-old retiree with hearing problems who started going out again to water the Veredas Vivas planters in front of her house. The community also includes a yoga teacher who has picked up students among the group and a Cuban plumber and gas fitter, Carlos Márques, who takes care of repairs to the planters.

Facundo Romano, another Devoto neighbor, explains: "Cities like Buenos Aires were sited on the best land. They have moisture and are close to the river, but we covered everything with cement and plants cannot grow."

A help-yourself garden

The first pillar of Veredas Vivas was to offer edible plants to the community. "The idea is that people, instead of rummaging through a dumpster, can pick a squash or some fruit from a tree," Pistocchi says enthusiastically. With each neighbor's permission, Veredas Vivas used several empty planters to grow squash, cabbage, cherry tomatoes, and peppers in this first stage. "Even so, we encourage every neighbor with a planter at their door to look after the plants, water them, and let us know if there is any problem," Romano explains.




Neighbors are delighted by the appearance of flowers and birds. Source: LA NACION - Credit: Pablo Pidal

"Tenemos que volver a la época en la que los porteños lográbamos abastecernos con huertas, árboles frutales y hasta animales sin pasar por los supermercados", explica Romano. Y Pistocchi es un buen ejemplo, ya que tiene gallinas en el fondo de su casa que lo proveen de huevos.

On a walk of barely 400 meters (once around the block), the neighbors of Veredas Vivas can pick avocados, lemons, oregano, mint, pecans from a nearly 100-year-old pecan tree, squash, and hot peppers.

Hummingbirds and butterflies

Before the concrete Buenos Aires spread, native plants were already here, and they in turn sustained a whole ecosystem of insects typical of this part of the Pampas region. Beatriz Freire is a native-plant specialist who works with the Devoto group. "The idea is to bring back the kind of vegetation native to the area," she explains. "That brings back the butterflies and hummingbirds that were once so common."



Pablo Pistocchi with his neighbors Pablo and Facundo. Source: LA NACION - Credit: Pablo Pidal

For the project, Veredas Vivas has technical support from the NGO "El Renacer de la Laguna", which operates on the grounds of the Agronomy faculty, and from the Red de Viveros de Plantas Nativas (Native Plant Nursery Network). "Our idea is to add a grain of sand toward improving the city's biodiversity," Pistocchi says. "Ideally, every block or neighborhood would have a group of neighbors doing something like what Veredas Vivas does."

For example, on one of the sidewalks grows a "sen del campo" with yellow flowers. "The limoncito butterfly, as a caterpillar, feeds exclusively on that plant. It could not exist without it," Freire says.



Neighbors of all ages pitch in at the street garden. Source: LA NACION - Credit: Pablo Pidal

According to the latest census, there are 370,916 trees on Buenos Aires sidewalks. Three exotic species top the ranking: the American ash, the plane tree, and the ficus. "When the city was designed, the idea was to look like Paris, which is why that kind of vegetation was introduced," Pistocchi comments. "It would be good to replace them, where possible, with native species such as the timbó, the classic ombú, or the anacahuita."

"A veces parecemos ´medio locos´, porque cuando vemos una oruga en una hoja, una mariposa o un abejorro que se acerca en una planta, en vez de asustarnos o matarlo, nos ponemos contentos -sostiene Pistocchi-. Tampoco usamos químicos para cuidar a las plantas. Lo hacemos con sustancias naturales que sirven para cuidarlas".

Beside a cold lamppost grows a climbing plant, the passionflower. In summer it bears large blooms that can still be seen in this warm autumn. Its orange fruit attracts insects and hummingbirds. "When we saw the first little green birds fluttering up to the flowers, it was quite a thrill," Pistocchi recalls. "Hummingbirds had not been seen in Buenos Aires for many years."


Fuente:https://www.lanacion.com.ar/sociedad/vecinos-villa-devoto-montaron-huerta-veredas-del-nid2240501

OPINION | EDITORIAL-Forbes rich, Calcutta poor

The following information is used for educational purposes only.


OPINION | EDITORIALS

Forbes rich, Calcutta poor

Without genuine, democratic capitalism, Argentina will be fertile ground for union or corporate plunder and fiscal expropriation

April 28, 2019

When Karl Marx published the first volume of Das Kapital (1867), he failed to notice two errors, "capital" ones at that, which only time could verify: capitalism did not collapse in its mature stage, nor did its power to generate wealth survive the collectivization of the means of production.

Neither Vladimir Lenin nor Joseph Stalin had the patience to test the Marxist prophecy and, at the cost of the dead and the deported, they accelerated their country's industrialization, until Mikhail Gorbachev, with his glasnost and perestroika, drew back the veil on the bitter Soviet reality (1985-1991).

On the death of Mao Zedong (1976), Deng Xiaoping saw that the Chinese people, creative and industrious, were destined for more than waving the Little Red Book and chanting empty slogans. He observed the fossilization of the Chinese economy; the sclerosis of the 15 Soviet republics; the petrification of Eastern Europe's people's democracies; the theatrical tragedy of Kim Il-sung; Fidel Castro's fruitless revolution; and Marshal Tito's self-managed socialism, concluding, with millennia-old Eastern wisdom, that no wealth can be created without the incentive of self-interest.

He thus released the "genie from the bottle" that the philosopher from Trier had corked up a century earlier and that his disciples still kept imprisoned, by fire and sword. Since then, China has lifted 800 million people out of poverty, paying no heed to surplus value, alienation, exploitation, or false consciousness. It did so not through the "just distribution of wealth", for in China there were no rich, but through the creation of new wealth, accumulating capital without violence thanks to the magic of markets, albeit within a totalitarian system. Without invoking him, it adopted Adam Smith while continuing to preach Karl Marx.

And Argentina? For 70 years our country has maintained a sui generis system of capitalism without capital, at least within its borders. Instead of reversing the flight of savings abroad with good institutions, fiscal moderation, and legal certainty, populism acknowledged its own clumsiness, decreeing that Argentina "lacks capital" and devising mechanisms to extract savings from the population (taxes, inflation, high prices) and funnel resources to the champions and experts of regulated markets, leaving SMEs and small entrepreneurs out in the cold.

We have had promotion regimes (abused), development banks (bankrupted), co-opted public companies (YPF, Gas del Estado, Segba, Luz y Fuerza, Entel, YCF, Fabricaciones Militares, among others), Treasury guarantees (called in and unpaid), captive markets, subsidized pre-financing, exports "tuned" by the BCRA, extortionate "buy national" rules, rigged public works, and scandalous advances and adjustments for well-connected contractors.

After the speeches, the anthems, and the ribbon-cuttings, those generous surpluses flowed back to accounts abroad (as the manuals teach) through "inflated" equipment invoicing, fake consulting fees, payments for "doctored" licenses, or fictitious royalties, with multiple triangulation schemes to exploit exchange controls and book profits offshore. Only this explains the US$300 billion Argentines hold abroad while poverty reaches a third of the population. For decades the state has extracted savings from workers, with a kind of Stalinism-lite, to finance capital flight. We have created Forbes rich and Calcutta poor in the name of national development and social justice.

Faced with this paradoxical "lack of capital", Argentina adopted an extractive, opportunistic, rapacious mindset. Since capital does not come in without state guarantees and no one saves in pesos, governments have always been "on the hunt" for opportunities to seize resources without reversing the causes of the problem, as Deng dared to do in China.

A solution that avoids change is always expected: so it was with "grandma's jewels" (Menem), the eighth default (Rodríguez Saá), and the confiscation of the AFJP pension funds (Boudou); from soybean export taxes to the promise of Vaca Muerta (as Loma de la Lata was before it) or lithium, the "white gold" of the Northwest. The state forever rifles through pockets and rummages in drawers to ease the hardships of living on what is ours, rather than on what the genie in the bottle could make possible.

A country that aspires to provide better education, supply medicines even for rare diseases, care for the disabled, grant scholarships, reintegrate the excluded, rehabilitate addicts, redevelop the shantytowns, or prevent abuse cannot live off hunting and fishing.

Third-generation rights must be financed with a constant and growing flow of wealth, leaving the extractive strategy as a supplementary resource. Without genuine, democratic capitalism, Argentina will be a barren field, ripe for union or corporate plunder and fiscal expropriation.

There must be a dramatic change that redirects the flow of savings, so that it enters through the capillaries of the whole economy and not only through ministry windows, forms, and decrees.

In a trustworthy Argentina, investors would compete in search of "unicorns", young or veteran, with global ambitions, supplying capital at low cost. From the smallest business to the largest company, all require abundant, cheap capital. A truism.

Yet no one in the ranks of populism proposes letting the genie out of the bottle. Born and raised in a perverse system, they can only conceive of returning to bygone alchemies whose reagents have dried up and whose fires have gone out.

None of them proposes "building trust", "legal certainty", or "structural reforms", as if we could return to 1946 or 2001 without the irreversibility of time hurling us into today's Venezuela. That conclusion drops from the tree of its own ripeness; as the Spanish pun has it, it falls "de Maduro".


Fuente:https://www.lanacion.com.ar/opinion/editoriales/ricos-de-forbes-pobres-de-calcuta-nid2242400

SOCIETY | REAL ACADEMIA ESPAÑOLA-What do dou and skerry mean? A glossary for understanding centennials, by Federico Acosta Rainis

The following information is used for educational purposes only.


SOCIETY | REAL ACADEMIA ESPAÑOLA

What do dou and skerry mean? A glossary for understanding centennials


Today's teenagers adopt expressions from the singers of the moment and from influencers; specialists recognize the phenomenon as an ordinary identity-forming process that every generation goes through. Credit: Shutterstock

Federico Acosta Rainis

April 28, 2019

" Dooou lo decís cuando hay algo que está piola. El ndeah de ahora es parecido al ahre, que es como irónico, y el skerry se usa en vez del skere, pero en broma", dice Yasmin, de 13 años, con total naturalidad. Así, pareciera revelar el código misterioso de una sociedad secreta y de alguna manera lo hace: muchas de las expresiones que hoy usan los chicos configuran un territorio desconocido para los adultos.

A few meters from the door of the Colegio Nacional de Buenos Aires, Yasmin and her classmates Olivia and Renata, also 13, share with LA NACIÓN their ways of seeing the world and, above all, of telling it. Classes in the afternoon shift have just ended and Bolívar Street buzzes with color, jokes, and shouting, like most spaces teenagers move through.

"Le expliqué muchas veces a mi papá el ahre y él no lo logra entender. Me dice: '¿Qué son esos lenguajes?' y mi mamá se muere de risa", cuenta Yasmin. A su lado, Olivia agrega: "Mi papá más o menos lo entiende, pero lo usa mal, lo dice como si fuera un chiste, como 'Arre, arre, caballito'". Las tres se ríen con ganas.


(Infographic with glossary: see source article)


Ahre is one of the first expressions that became fashionable among teenagers thanks to social networks. It was popularized a decade ago by the now-extinct floggers, an urban tribe that shared photos on fotologs. Ahre is tacked onto the end of a sentence and signals irony or double meaning about what has just been said.

Today, with so many platforms for sharing content, new terms multiply and pass quickly into kids' speech. When a gamer, youtuber, streamer, or influencer coins a term that lands, it is likely to go viral. That is the case of Martín Pérez Disalvo, better known as "Coscu", a 27-year-old from La Plata who seven years ago began streaming and commenting on his games of League of Legends (LoL), a well-known online game.

With his blunt, fast-talking, brazen style and a million and a half followers on YouTube, "Coscu" popularized words such as dou, an onomatopoeia used to celebrate something good, and ndeah, pronounced "naaaah", which signals sarcasm. He also invented the habit of attaching the suffix "vich" to adjectives, as in picantovich, for picante (spicy), or durovich, for duro (hard). Words and mannerisms now used by thousands of teenagers across the country.

Kids also pick up other content from the Internet. "We literally carry memes from the phone into real life," says Yasmin. These are images paired with a phrase that capture specific situations and go viral. A very well-known one shows a cat pleading "¿me perdonas?" ("will you forgive me?"). Kids adopted it and say it aloud in neutral Spanish.

Some expressions come from music. Being in "modo diablo" (devil mode) means being wild, full of energy, or hyperactive; the phrase takes its name from the group of trap artist Duki, one of the most important Argentine musicians of the genre. In a viral video, two Catholic kids proposed replacing it with "modo Cristo" (Christ mode); the phrase caught on among teenagers, but used ironically.

Hispanicized English terms also abound. Very popular are same, used to show that the speaker feels, or will do, the same as the previous speaker, and cringe, which in English means to recoil in embarrassment and is usually used to express secondhand embarrassment: "Me das cringe" ("You make me cringe"). Both are pronounced as if they were written in Spanish.

What are they saying?

These creations, light-years from the Real Academia Española, provoke mixed reactions among parents and teachers, ranging from laughter to rejection, with occasional stops at helplessness or incomprehension.

"Yo me siento joven hasta que entro al aula", relata con humor María Victoria Vincova, docente de Lengua y Literatura en colegios públicos y privados de la Capital. Aunque tiene 26 años y le lleva menos de diez a muchos de sus estudiantes, reconoce que, en general, se queda afuera cuando ellos utilizan su jerga. "Los chicos usan el habla para marcar una diferencia y van cambiando todo el tiempo. Se nota mucho la influencia de las redes sociales y la música".

For specialists, it is an ordinary identity phenomenon. "At that age it is very important to have an expressive code of your own, distinct from that of the generation you want to separate from. Once that stage has passed, people stay closer to the majority code of adults," explains Santiago Kalinowski, linguist and director of the Department of Linguistic and Philological Research at the Academia Argentina de Letras. On the influence of the virtual world, he notes: "Social networks are technologically very new, but from the lexical side it is the same thing: in other eras it was television, rock, tango, or lunfardo. It is the place the words come from that create that feeling of belonging to a group."

Federico Testoni, a linguist and researcher at the Universidad de Buenos Aires, explains that some adults consider kids' creations wrong because "they escape the norm". But they are proof that the language is still alive. "Languages exist only in the people who use them: dictionaries are the cemetery of words, because by the time a word gets there, its use is already old," he says. Kalinowski completes the thought: "There is a historical stigma that young people speak worse and degrade the language. Those sounding alarms today provoked the same perception in their elders; if each generation truly degraded it, we would have a language of a hundred words."

Todes

Although it is not used only by the young, inclusive language, which proposes using the letter "e" to avoid the generic masculine and make sexual diversity visible, is gaining ever more ground among teenagers, who hold differing opinions on the subject.

"Estoy a favor-cuenta Renata-. En una reunión con gente que no sé con qué género se siente identificada, lo usaría". Su compañera Olivia disiente: "No lo uso, porque me enseñaron que el masculino plural incluye a los dos géneros. Les delegades en clase dicen todes como si todes fuéramos no binarios. Creo que es algo más de los colegios públicos".

Specialists agree that inclusive language appears in spaces where discussions of gender and politics proliferate. According to Testoni, "linguistic variation normally travels from the bottom up, but here it is the reverse: there is a political decision to take a position, and then it comes into use, which is why it generates more cognitive noise." Kalinowski notes that it is a phenomenon "the young have taken up with greater intensity", but one that can be neither imposed nor banned: "The language is there to be used, and we speakers have the right to create and make use of that resource."

Words that cross the generations

Bardo/bardear

A mess, problem, or fight; to stir up trouble, to misbehave

Flashear/flashar

To rave, to trip out; also, to fall for someone

Piola

A situation that is fine, or someone likable, sharp, or clever

Gato

A classic that has made a comeback.

Used seriously or in jest to speak disparagingly of someone

Manija

Said of someone anxious, wound up, or raring to do something

Ortiba

From lunfardo, derived from "batidor", one who informed to the police. Refers disparagingly to someone who is boring, unhelpful, or in a bad mood

Banda

A ton, a lot

Chamuyar/chamuyo

To lie deliberately in order to persuade; also used as a synonym for sweet-talking or seducing


Fuente:https://www.lanacion.com.ar/sociedad/que-quiere-decir-dou-skerry-un-glosario-nid2242353

SOCIETY | SCIENCE-A young Argentine chosen by Harvard and MIT as one of the 100 leaders of the future

The following information is used for educational purposes only.


SOCIETY | SCIENCE

A young Argentine chosen by Harvard and MIT as one of the 100 leaders of the future



He is 21 and will travel to Boston to receive the award; he had already made news when he captivated Angela Merkel at the G-20 in Germany. Source: Archivo - Credit: Courtesy Jerónimo Bucher

April 27, 2019

Jerónimo Batista Bucher, a 21-year-old Argentine, will be honored by Harvard and the Massachusetts Institute of Technology (MIT) as one of the 100 Leaders of the Future worldwide for his work against plastic pollution.

It is not the first time the young man, a Vicente López native who still lives with his parents, has drawn media attention. In late 2017 he made headlines when he captivated Angela Merkel after being selected to take part in the G-20 World Youth Summit in Germany.

On that occasion, Batista Bucher was chosen to read a text stressing the importance of "empowering sustainable solutions to environmental problems" and of the "responsible consumption of our limited resources". The German chancellor followed the impassioned speech closely and afterwards congratulated him in person.



He captivated Angela Merkel at the G-20 in Germany. Source: Archivo - Credit: Courtesy Jerónimo Bucher

At just 18 he created Sorui, a machine that makes eco-friendly cups from algae extracts, with the goal of halting plastic pollution. He also owns a startup seeking to innovate in sustainability.

Batista Bucher is in the fourth year of a degree in Biotechnology and Electronics at the Universidad de San Martín (Unsam), an institution that has given him space to keep developing the venture. According to the newspaper Clarín, in June he will travel to Boston, in the United States, to receive the distinction and take part in debates in Cambridge, where he will hold a one-week fellowship.




Interview with Jerónimo Bucher, creator of biodegradable cups, in July 2018

The gathering will be held at a joint venue of the two universities and will include debate and training sessions with leading figures from around the world interested in sustainable development. This is the fifth edition of the Leaders of the Future distinction, but the first time an Argentine has been selected.

Batista Bucher holds awards from the Chamber of Deputies, the British Embassy, the IAE Business School, the Ministry of Production, and Invap, and he keeps collecting national and international distinctions at a startling pace.



Fuente:https://www.lanacion.com.ar/sociedad/un-joven-argentino-fue-elegido-harvard-mit-nid2242426

HR/BUS/GINT-People analytics reveals three things HR may be getting wrong

The following information is used for educational purposes only.


People analytics reveals three things HR may be getting wrong

July 2016

By Henri de Romrée, Bruce Fecheyr-Lippens, and Bill Schaninger


More sophisticated analyses of big data are helping companies identify, recruit, and reward the best personnel. The results can run counter to common wisdom.


Bill James, the factory watchman turned baseball historian and statistician, once observed, “There will always be people who are ahead of the curve, and people who are behind the curve. But knowledge moves the curve.” Some companies are discovering that if they employ the latest in data analytics, they can find, deploy, and advance more people on the right side of the curve—even if the results at first appear counterintuitive.

Over the past decade, big data analytics has been revolutionizing the way many companies do business. Chief marketing officers track detailed shopping patterns and preferences to predict and inform consumer behavior. Chief financial officers use real-time, forward-looking, integrated analytics to better understand different business lines. And now, chief human-resources officers are starting to deploy predictive talent models that can more effectively—and more rapidly—identify, recruit, develop, and retain the right people. Mapping HR data helps organizations identify current pain points and prioritize future analytics investments. Surprisingly, however, the data do not always point in the direction that more seasoned HR officers might expect. Here are three examples.

1. Choosing where to cast the recruiting net

A bank in Asia had a well-worn plan for hiring: recruit the best and the brightest from the highest-regarded universities. The process was one of many put to the test when the company, which employed more than 8,000 people across 30 branches, began a major organizational restructuring. As part of the effort, the bank turned to data analytics to identify high-potential employees, map new roles, and gain greater insight into key indicators of performance.

Thirty data points aligned with five categories—demographics, branch information, performance, professional history, and tenure—were collected for each employee, using existing sources. Analytics were then applied to identify commonalities among high (and low) performers. This information, in turn, helped create profiles for employees with a higher likelihood of succeeding in particular roles.
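As a concrete illustration of this profiling step, here is a minimal Python sketch on a hypothetical employees.csv; the column names and the random-forest choice are invented for illustration and are not details from the bank's actual project.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical extract covering the five categories named above:
# demographics, branch information, performance, history, and tenure.
df = pd.read_csv("employees.csv")
features = ["age", "branch_size", "prior_role", "university", "tenure_years"]
X = pd.get_dummies(df[features])              # one-hot encode categoricals
y = df["performance_rating"] >= 4             # label: high performer or not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 2))

# The inputs that most distinguish high from low performers become the
# raw material for the success "profiles" described in the text.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))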

Further machine learning–based analysis revealed that branch and team structures were highly predictive of financial outcomes. It also highlighted how a few key roles had a particularly strong impact on the bank’s overall success. As a result, executives built new organizational structures around key teams and talent groups. In many instances, previous assumptions about how to find the right internal people for new roles were upended.

Whereas the bank had always thought top talent came from top academic programs, for example, hard analysis revealed that the most effective employees came from a wider variety of institutions, including five specific universities and an additional three certification programs. A clear correlation also emerged between employees regarded as “top performers” and experience in certain previous roles, indicating that specific positions could serve as feeders for future highfliers. Both of these findings have since been applied in how the bank recruits, measures performance, and matches people to roles. The results: a 26 percent increase in branch productivity (as measured by the number of full-time employees needed to support revenue) and a rate of conversion of new recruits 80 percent higher than before the changes were put in place. During the same period, net income also rose by 14 percent.

2. Cutting through the hiring noise and bias

The democracy of numbers can also help organizations eliminate unconscious preferences and biases, which can surface even when those responsible have the best of intentions. For instance, a professional-services company had been nearly overwhelmed by the 250,000 job applications it received every year. By introducing more advanced automation, it sought to reduce the costs associated with the initial résumé-screening process, and to improve screening effectiveness. One complication was the aggressive goals the company had simultaneously set for hiring more women, prompting concern that a machine programmed to mine for education and work experience might undermine that effort.

The worries proved unwarranted. The algorithm adapted by HR took into account historical recruiting data, including past applicant résumés and, for those who were extended offers previously, their decisions on whether to accept. When linked to the company’s hiring goals, the model successfully identified those candidates most likely to be hired and automatically passed them on to the next stage of the recruiting process. Those least likely to be hired were automatically rejected. With a clearer field, expert recruiters were freer to focus on the remaining candidates to find the right fit. The savings associated with the automation of this step, which encompassed more than 55 percent of the résumés, delivered a 500 percent return on investment. What’s more, the number of women who passed through automated screening—each one on merit—represented a 15 percent increase over the number who had passed through manual screening. The foundational assumption—that screening conducted by humans would increase gender diversity more effectively—was proved incorrect.
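The screening logic this describes might be sketched as follows in Python; the synthetic data stand in for historical applicant résumés and past offer decisions, and the two probability thresholds are invented policy knobs, not figures from the engagement.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(1000, 5))   # stand-in for résumé-derived features
y_hist = (X_hist[:, 0] + rng.normal(size=1000) > 0).astype(int)  # past hires

model = LogisticRegression().fit(X_hist, y_hist)

def screen(X_new, lo=0.15, hi=0.85):
    """Auto-advance likely hires, auto-reject unlikely ones, and route
    the middle band to human recruiters. Thresholds are illustrative."""
    p = model.predict_proba(X_new)[:, 1]
    return np.where(p >= hi, "advance",
                    np.where(p <= lo, "reject", "human review"))

print(screen(rng.normal(size=(5, 5))))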


3. Addressing attrition by improving management

Too often, companies seek to win the talent war by throwing ever more money into the mix. One example was a major US insurer that had been facing high attrition rates; it first sought, with minimal success, to offer bonuses to managers and employees who opted to remain. Then the company got smarter. It gathered data to help create profiles of at-risk workers; the intelligence included a range of information such as demographic profile, professional and educational background, performance ratings, and, yes, levels of compensation. By applying sophisticated data analytics, a key finding rose to the fore: employees in smaller teams, with longer periods between promotions and with lower-performing managers, were more likely to leave.
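The shape of that analysis can be sketched in a few lines of Python; the synthetic data below are wired to mimic the article's finding (small teams, long promotion gaps, and weak managers raise attrition, while pay matters little), so the code illustrates the method rather than reproducing the insurer's model.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
team_size = rng.integers(2, 30, n)
months_since_promo = rng.integers(0, 72, n)
manager_rating = rng.uniform(1, 5, n)
comp_percentile = rng.uniform(0, 100, n)      # no real effect, by design

logit = -1.5 + 0.5*months_since_promo/24 - 0.05*team_size - 0.5*manager_rating
left = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([team_size, months_since_promo, manager_rating,
                     comp_percentile])
clf = GradientBoostingClassifier(random_state=0).fit(X, left)

# Permutation importance surfaces the true attrition drivers and shows
# compensation contributing almost nothing, mirroring the case study.
result = permutation_importance(clf, X, left, n_repeats=5, random_state=0)
for name, imp in zip(["team_size", "months_since_promo", "manager_rating",
                      "comp_percentile"], result.importances_mean):
    print(f"{name:>20}: {imp:.3f}")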

Once these high-risk employees had been identified, more informed efforts were made to convince them to stay. Chiefly, these involved greater opportunities for learning development and more support from a stronger manager. Bonuses, on the other hand, proved to have little if any effect. As a result, funds that might have been allocated to ineffectual compensation increases were instead invested in learning development for employees and improved training for managers. Performance and retention both improved, with significant savings left over—showing yet again the value of digging into the data at hand. When well applied, people analytics is fairer, has greater impact, and is ultimately more time- and cost-effective. It can move everyone up the knowledge curve—often in counterintuitive ways.

About the author(s)

Henri de Romrée is a partner in McKinsey’s Brussels office, where Bruce Fecheyr-Lippens is an associate partner; Bill Schaninger is a senior partner in the Philadelphia office.

The authors wish to thank Emily Caruso and Alexander DiLeonardo for their contributions to this article.



Source:https://www.mckinsey.com/business-functions/organization/our-insights/people-analytics-reveals-three-things-hr-may-be-getting-wrong

Saturday, April 27, 2019

AI/GINT-Derisking machine learning and artificial intelligence, by Bernhard Babel, Kevin Buehler, Adam Pivonka, Bryan Richardson, and Derek Waldron

The following information is used for educational purposes only.


Derisking machine learning and artificial intelligence

February 2019

By Bernhard Babel, Kevin Buehler, Adam Pivonka, Bryan Richardson, and Derek Waldron


The added risk brought on by the complexity of machine-learning models can be mitigated by making well-targeted modifications to existing validation frameworks.


Machine learning and artificial intelligence are set to transform the banking industry, using vast amounts of data to build models that improve decision making, tailor services, and improve risk management. According to the McKinsey Global Institute, this could generate value of more than $250 billion in the banking industry.

But there is a downside, since machine-learning models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient to deal with the risks associated with machine-learning models.

Conscious of the problem, many banks are proceeding cautiously, restricting the use of machine-learning models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of antidiscrimination laws, and incur significant fines—a concern that pushed one bank to ban its HR department from using a machine-learning résumé screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.

Regulators have not issued specific instructions on how to do this. In the United States, they have stipulated that banks are responsible for ensuring that risks associated with machine-learning models are appropriately managed, while stating that existing regulatory guidelines, such as the Federal Reserve’s “Guidance on Model Risk Management” (SR11-7), are broad enough to serve as a guide.

Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering, roles and responsibilities, and model life-cycle controls, not to mention the associated model-validation practices. The good news is that many banks will not need entirely new model-validation frameworks. Existing ones can be fitted for purpose with some well-targeted enhancements.

New risks, new policy choices, new practices

There is no shortage of news headlines revealing the unintended consequences of new machine-learning models. Algorithms that created a negative feedback loop were blamed for the “flash crash” of the British pound by 6 percent in 2016, for example, and it was reported that a self-driving car tragically failed to properly identify a pedestrian walking her bicycle across the street.

The cause of the risks that materialized in these machine-learning models is the same as the cause of the amplified risks that exist in all machine-learning models, whatever the industry and application: increased model complexity. Machine-learning models typically act on vastly larger data sets, including unstructured data such as natural language, images, and speech. The algorithms are typically far more complex than their statistical counterparts and often require design decisions to be made before the training process begins. And machine-learning models are built using new software packages and computing infrastructure that require more specialized skills.

The response to such complexity does not have to be overly complex, however. If properly understood, the risks associated with machine-learning models can be managed within banks’ existing model-validation frameworks, as the exhibit below illustrates.

Highlighted in the exhibit are the modifications made to the validation framework and practices employed by Risk Dynamics, McKinsey’s model-validation arm. This framework, which is fully consistent with SR11-7 regulations and has been used to validate thousands of traditional models in many different fields of banking, examines eight risk-management dimensions covering a total of 25 risk elements. By modifying 12 of the elements and adding only six new ones, institutions can ensure that the specific risks associated with machine learning are addressed.

Exhibit (See source article)

The six new elements

The six new elements—interpretability, bias, feature engineering, hyperparameters, production readiness, and dynamic model calibration—represent the most substantive changes to the framework.

Interpretability

Machine-learning models have a reputation of being “black boxes.” Depending on the model’s architecture, the results it generates can be hard to understand or explain. One bank worked for months on a machine-learning product-recommendation engine designed to help relationship managers cross-sell. But because the managers could not explain the rationale behind the model’s recommendations, they disregarded them. They did not trust the model, which in this situation meant wasted effort and perhaps wasted opportunity. In other situations, acting upon (rather than ignoring) a model’s less-than-transparent recommendations could have serious adverse consequences.

The degree of interpretability required is a policy decision for banks to make based on their risk appetite. They may choose to hold all machine-learning models to the same high standard of interpretability or to differentiate according to the model’s risk. In the United States, models that determine whether to grant credit to applicants are covered by fair-lending laws. The models therefore must be able to produce clear reason codes for a refusal. On the other hand, banks might well decide that a machine-learning model’s recommendations to place a product advertisement on the mobile app of a given customer poses so little risk to the bank that understanding the model’s reasons for doing so is not important.

Validators need also to ensure that models comply with the chosen policy. Fortunately, despite the black-box reputation of machine-learning models, significant progress has been made in recent years to help ensure their results are interpretable. A range of approaches can be used, based on the model class:

*Linear and monotonic models (for example, linear-regression models): linear coefficients help reveal the dependence of the result on the inputs.

*Nonlinear and monotonic models (for example, gradient-boosting models with a monotonic constraint): restricting inputs so they have either a rising or falling relationship globally with the dependent variable simplifies the attribution of inputs to a prediction.

*Nonlinear and nonmonotonic models (for example, unconstrained deep-learning models): methodologies such as local interpretable model-agnostic explanations (LIME) or Shapley values help ensure local interpretability.
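To make the last item concrete, here is a minimal Python sketch of local interpretability via Shapley values; the open-source shap package and the toy dataset are assumptions for illustration, not tools named in the article.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Attribute a single prediction to its inputs, Shapley-style; for a
# credit model these attributions would feed the required reason codes.
explainer = shap.TreeExplainer(model)
values = explainer.shap_values(X.iloc[:1])[0]
top = sorted(zip(X.columns, values), key=lambda t: -abs(t[1]))[:5]
for feature, contribution in top:
    print(f"{feature:>25}: {contribution:+.3f}")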

Bias

A model can be influenced by four main types of bias: sample, measurement, and algorithm bias, and bias against groups or classes of people. The latter two types, algorithmic bias and bias against people, can be amplified in machine-learning models.

For example, the random-forest algorithm tends to favor inputs with more distinct values, a bias that elevates the risk of poor decisions. One bank developed a random-forest model to assess potential money-laundering activity and found that the model favored fields with a large number of categorical values, such as occupation, when fields with fewer categories, such as country, were better able to predict the risk of money laundering.

To address algorithmic bias, model-validation processes should be updated to ensure appropriate algorithms are selected in any given context. In some cases, such as random-forest feature selection, there are technical solutions. Another approach is to develop “challenger” models, using alternative algorithms to benchmark performance.
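The cardinality bias is easy to reproduce. In the hedged Python sketch below, a purely random high-cardinality field outranks a genuinely predictive low-cardinality one under impurity-based importance, while permutation importance on held-out data (one example of the technical fixes alluded to) restores the true ranking; all data are synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
country = rng.integers(0, 5, n)         # few categories, genuinely predictive
occupation = rng.integers(0, 500, n)    # many categories, pure noise
# Noisy label driven only by country (70 percent signal).
y = np.where(rng.random(n) < 0.7, country >= 3, rng.integers(0, 2, n))

X = np.column_stack([country, occupation])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("impurity   :", dict(zip(["country", "occupation"],
                               rf.feature_importances_.round(2))))
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print("permutation:", dict(zip(["country", "occupation"],
                               perm.importances_mean.round(2))))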

To address bias against groups or classes of people, banks must first decide what constitutes fairness. Four definitions are commonly used, though which to choose may depend on the model’s use:

*Demographic blindness: decisions are made using a limited set of features that are highly uncorrelated with protected classes, that is, groups of people protected by laws or policies.

*Demographic parity: outcomes are proportionally equal for all protected classes.

*Equal opportunity: true-positive rates are equal for each protected class.

*Equal odds: true-positive and false-positive rates are equal for each protected class.

Validators then need to ascertain whether developers have taken the necessary steps to ensure fairness. Models can be tested for fairness and, if necessary, corrected at each stage of the model-development process, from the design phase through to performance monitoring.
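Two of these definitions translate directly into checks a validator can run. The following Python sketch computes the demographic-parity and equal-opportunity gaps on toy predictions, with group marking membership in a protected class; names and data are invented.

import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-outcome rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))

# Toy example: ten applicants, binary decisions and group labels.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equal opportunity gap :", equal_opportunity_gap(y_true, y_pred, group))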

Feature engineering

Feature engineering is often much more complex in the development of machine-learning models than in traditional models. There are three reasons why. First, machine-learning models can incorporate a significantly larger number of inputs. Second, unstructured data sources such as natural language require feature engineering as a preprocessing step before the training process can begin. Third, increasing numbers of commercial machine-learning packages now offer so-called AutoML, which generates large numbers of complex features to test many transformations of the data. Models produced using these features run the risk of being unnecessarily complex, contributing to overfitting. For example, one institution built a model using an AutoML platform and found that specific sequences of letters in a product application were predictive of fraud. This was a completely spurious result caused by the algorithm’s maximizing the model’s out-of-sample performance.

In feature engineering, banks have to make a policy decision to mitigate risk. They have to determine the level of support required to establish the conceptual soundness of each feature. The policy may vary according to the model’s application. For example, a highly regulated credit-decision model might require that every individual feature in the model be assessed. For lower-risk models, banks might choose to review the feature-engineering process only: for example, the processes for data transformation and feature exclusion.

Validators should then ensure that features and/or the feature-engineering process are consistent with the chosen policy. If each feature is to be tested, three considerations are generally needed: the mathematical transformation of model inputs, the decision criteria for feature selection, and the business rationale. For instance, a bank might decide that there is a good business case for using debt-to-income ratios as a feature in a credit model but not frequency of ATM usage, as this might penalize customers for using an advertised service.

Hyperparameters

Many of the parameters of machine-learning models, such as the depth of trees in a random-forest model or the number of layers in a deep neural network, must be defined before the training process can begin. In other words, their values are not derived from the available data. Rules of thumb, parameters used to solve other problems, or even trial and error are common substitutes. Decisions regarding these kinds of parameters, known as hyperparameters, are often more complex than analogous decisions in statistical modeling. Not surprisingly, a model’s performance and its stability can be sensitive to the hyperparameters selected. For example, banks are increasingly using binary classifiers such as support-vector machines in combination with natural-language processing to help identify potential conduct issues in complaints. The performance of these models and the ability to generalize can be very sensitive to the selected kernel function.

Validators should ensure that hyperparameters are chosen as soundly as possible. For some quantitative inputs, as opposed to qualitative inputs, a search algorithm can be used to map the parameter space and identify optimal ranges. In other cases, the best approach to selecting hyperparameters is to combine expert judgment and, where possible, the latest industry practices.
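A minimal sketch of the search-algorithm approach, using scikit-learn's randomized search with cross-validation over an assumed random-forest parameter space (the grid values are illustrative, not recommendations):

# Illustrative sketch: mapping a hyperparameter space with randomized search
# and cross-validation, rather than relying on rules of thumb alone.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

param_space = {
    "max_depth": [3, 5, 10, None],   # tree depth, fixed before training
    "n_estimators": [100, 300, 500],
    "min_samples_leaf": [1, 5, 20],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_space, n_iter=20, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))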

Production readiness

Traditional models are often coded as rules in production systems. Machine-learning models, however, are algorithmic, and therefore require more computation. This requirement is commonly overlooked in the model-development process. Developers build complex predictive models only to discover that the bank’s production systems cannot support them. One US bank spent considerable resources building a deep learning–based model to predict transaction fraud, only to discover it did not meet required latency standards.

Validators already assess a range of model risks associated with implementation. However, for machine learning, they will need to expand the scope of this assessment: estimating the volume of data that will flow through the model, assessing the production-system architecture (for example, graphics-processing units for deep learning), and estimating the runtime required.
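As one narrow illustration of the runtime question (a sketch with a hypothetical 50-millisecond budget, not a standard), even a simple timing harness can surface latency problems like the one above before deployment:

# Illustrative sketch: checking single-request scoring latency against a
# hypothetical production budget before sign-off.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

LATENCY_BUDGET_MS = 50  # assumed production requirement

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

timings = []
for row in X[:200]:
    start = time.perf_counter()
    model.predict(row.reshape(1, -1))
    timings.append((time.perf_counter() - start) * 1000)

# Validate against the budget at a high percentile, not just the average.
p99 = float(np.percentile(timings, 99))
print(f"p99 latency: {p99:.1f} ms; within budget: {p99 <= LATENCY_BUDGET_MS}")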

Dynamic model calibration

Some classes of machine-learning models modify their parameters dynamically to reflect emerging patterns in the data. This replaces the traditional approach of periodic manual review and model refresh. Examples include reinforcement-learning algorithms or Bayesian methods. The risk is that without sufficient controls, an overemphasis on short-term patterns in the data could harm the model’s performance over time.

Banks therefore need to decide when to allow dynamic recalibration. They might conclude that with the right controls in place, it is suitable for some applications, such as algorithmic trading. For others, such as credit decisions, they might require clear proof that dynamic recalibration outperforms static models.

With the policy set, validators can evaluate whether dynamic recalibration is appropriate given the intended use of the model, develop a monitoring plan, and ensure that appropriate controls are in place to identify and mitigate risks that might emerge. These might include thresholds that catch material shifts in a model’s health, such as out-of-sample performance measures, and guardrails such as exposure limits or other, predefined values that trigger a manual review.
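A minimal sketch of such guardrails, with assumed threshold values: recalibration continues only while out-of-sample performance and exposure stay inside predefined bounds.

# Illustrative sketch: guardrails for a dynamically recalibrating model.
# The thresholds and names are assumptions, not prescriptions.
AUC_FLOOR = 0.70          # out-of-sample performance threshold
MAX_EXPOSURE = 1_000_000  # exposure limit triggering manual review

def recalibration_allowed(out_of_sample_auc: float, exposure: float) -> bool:
    """Return False (freeze the model and escalate) if any guardrail is hit."""
    if out_of_sample_auc < AUC_FLOOR:
        return False  # material shift in model health
    if exposure > MAX_EXPOSURE:
        return False  # predefined value triggering a manual review
    return True

# Checked on every recalibration cycle.
print(recalibration_allowed(0.74, 250_000))  # True: keep recalibrating
print(recalibration_allowed(0.62, 250_000))  # False: manual review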

Banks will need to proceed gradually. The first step is to make sure model inventories include all machine learning–based models in use. Banks may be surprised to learn how many there are. One bank's model-risk-management function was certain the organization was not yet using machine-learning models, until it discovered that its recently established innovation function had been busy developing machine-learning models for fraud and cybersecurity.

From here, validation policies and practices can be modified to address machine-learning-model risks, though initially for a restricted number of model classes. This helps build experience while testing and refining the new policies and practices. Considerable time will be needed to monitor a model’s performance and finely tune the new practices. But over time banks will be able to apply them to the full range of approved machine-learning models, helping companies mitigate risk and gain the confidence to start harnessing the full power of machine learning.

About the author(s)

Bernhard Babel is a partner in McKinsey’s Cologne office; Kevin Buehler is a senior partner in the New York office, where Adam Pivonka is an associate partner and Derek Waldron is a partner; Bryan Richardson is a senior expert in the Vancouver office.

The authors wish to thank Roger Burkhardt, Pankaj Kumar, Ryan Mills, Marc Taymans, Didier Vila, and Sung-jin Yoo for their contributions to this article.



Source:https://www.mckinsey.com/business-functions/risk/our-insights/derisking-machine-learning-and-artificial-intelligence

AI/GINT-The ethics of artificial intelligence, by Michael Chui, Chris Wigley and Simon London

The following information is used for educational purposes only.



The ethics of artificial intelligence

January 2019 | Podcast (*)


In this episode of the McKinsey Podcast, Simon London speaks with MGI partner Michael Chui and McKinsey partner Chris Wigley about how companies can ethically deploy artificial intelligence.


About the author(s)

Michael Chui is a partner of the McKinsey Global Institute and is based in McKinsey’s San Francisco office. Chris Wigley is a partner in the London office. Simon London, a member of McKinsey Publishing, is based in McKinsey’s Silicon Valley office.


(*) (Podcast available in source article)

Transcript:

Simon London
Hello, and welcome to this edition of the McKinsey Podcast, with me, Simon London. Today we're going to be talking about the ethics of artificial intelligence. At the highest level, is it ethical to use AI to enable, say, mass surveillance or autonomous weapons? On the flip side, how can AI be used for good, to tackle pressing societal challenges? And in day-to-day business, how can companies deploy AI in ways that ensure fairness, transparency, and safety?
Simon London
To discuss these issues, I sat down with Michael Chui and Chris Wigley. Michael is a partner with the McKinsey Global Institute and has led multiple research projects on the impact of AI on business and society. Chris is both a McKinsey partner and chief operating officer at QuantumBlack, a London-based analytics company that uses AI extensively in its work with clients. Chris and Michael, welcome to the podcast.
Chris Wigley
Great to be here.
Michael Chui
Terrific to join you.
Simon London
This is a big, hairy topic. Why don't we start with the broadest of broad brush questions which is, "Are we right to be concerned?" Is the ethics of AI something—whether you're a general manager or a member of the public—that we should be concerned about?
Chris Wigley
Yes, I think the simple answer to this is that the concerns are justified. We are right to worry about the ethical implications of AI. Equally, I think we need to celebrate some of the benefits of AI. The high-level question is, "How do we get the balance right between those benefits and the risks that go along with them?"
Chris Wigley
On the benefit side, we can already see hundreds of millions, even billions of people using and benefiting from AI today. It's important we don't forget that. Across all of their daily use in search and things like maps, health technology, assistants like Siri and Alexa, we're all benefiting a lot from the convenience and the enhanced decision-making powers that AI brings us.
Chris Wigley
But on the flip side, there are justifiable concerns around jobs that arise from the automation of roles that AI enables, from topics like autonomous weapons, the impact that some AI-enabled spaces and forums can have on the democratic process, and even emerging things like deep fakes, which are AI-generated videos that look and sound like a president, a presidential candidate, a prime minister, or some other public figure saying things they have never said. All of those are risks we need to manage. But at the same time we need to think about how we can enable those benefits to come through.
Michael Chui
To add to what Chris was saying, you can think about ethics in two ways. One is this is an incredibly powerful tool. It's a general-purpose technology—people have called it—and one question is, "For what purposes do you want to use it?" Do you want to use it for good or for ill?
Michael Chui
There's a question about what the ethics of that are. But again, you can use this tool for doing good things, for improving people's health. You can also use it to hurt people in various ways. That's one level of questions.
Michael Chui
I think there's a separate level of questions which are equally important. Once you've decided perhaps I'm going to use it for a good purpose, I'm going to try to improve people's health, the other ethical question is, "In the execution of trying to use it for good, are you also doing the right ethical things?"
Michael Chui
Sometimes you could have unintended consequences. You can inadvertently introduce bias in various ways despite your intention to use it for good. You need to think about both levels of ethical questions.
Simon London
Michael, I know you just completed some research into the use of AI for good. Give us an overview. What did you find when you looked at that?
Michael Chui
One of the things that we were looking at was how could you direct this incredibly powerful set of tools to improving social good. We looked at 160 different individual potential cases of AI to improve social good, everything from improving healthcare and public health around the world to improving disaster recovery. Looking at the ability to improve financial inclusion, all of these things.
Michael Chui
For pretty much every one of the UN's Sustainable Development Goals, there are a set of use cases where AI can actually help improve some of our progress towards reaching those Sustainable Development Goals.
Simon London
Give us some examples. What are a couple of things? Bring it to life.
Michael Chui
Some of the things that AI is particularly good at—or the new generations of AI are particularly good at—are analyzing images, for instance. That has broad applicability. Take, for example, diagnosing skin cancer. One thing you could imagine doing is taking a mobile phone and uploading an image and training an AI system to say, "Is this likely to be skin cancer or not?"
Michael Chui
There aren't dermatologists everywhere in the world where you might want to diagnose skin cancer. So being able to do that, and again, the technology is not perfect yet, but can we just improve our accessibility to healthcare through this technology?
Michael Chui
On a very different scale, we have huge amounts of satellite imagery. The entire world's land mass is imaged in some cases several times a day. In a disaster situation, it can be very difficult in the search for humans, to be able to identify which buildings are still there, which healthcare facilities are still intact, where are there passable roads, where aren't there passable roads.
Michael Chui
We've seen the ability to use artificial-intelligence technology, particularly deep learning, be able to very quickly, much more quickly than a smaller set of human beings, identify these features on satellite imagery, and then be able to divert or allocate resources, emergency resources, whether it's healthcare workers, whether it's infrastructure construction workers, to better allocate those resources more quickly in a disaster situation.
Simon London
So disaster response, broadly speaking—there's a whole set of cases around that.
Michael Chui
Absolutely. It's a place where speed is of the essence. When these automated machines using AI are able to accelerate our ability to deploy resources, it can be incredibly impactful.
Chris Wigley
One of the things that I find most exciting about this is linking that to our day-to-day work as well. So we've had a QuantumBlack team, for example, working with a city over the last few months recovering from a major gas explosion on the outskirts of that city. That's really helped to accelerate the recovery of that infrastructure for the city, helped the families who are affected by that, helped the infrastructure like schools and so on, using a mix of the kinds of imagery techniques that Michael's spoken about.
Chris Wigley
Also there's the commuting patterns—the communications data that you can aggregate to look at how people travel around the city and so on to optimize the work of those teams who are doing the disaster recovery.
Chris Wigley
We've also deployed these kinds of machine-learning techniques to look at things like, "What are the root causes of people getting addicted to opioids? And what might be some of the most effective treatments?" to things like the spread of disease in epidemiology, looking at the spread of diseases like measles in Croatia. Those are all things that we’ve been a part of in the last 12 months, often on a pro bono basis, bringing these technologies to life to really solve concrete societal problems.
Simon London
The other thing that strikes me in the research is that very often you are dealing with more vulnerable populations when you're dealing with some of these societal-good issues. So yes, there are many ways in which you can point AI at these societal issues, but the risks in implementation are potentially higher because the people involved are in some sense vulnerable.
Michael Chui
I think we find that to be the case. Sometimes AI can improve social good by identifying vulnerable populations. But in some cases that might hurt the people that you’re trying to help the most. Because when you’re identifying vulnerable populations, then sometimes bad things can happen to them, whether it’s discrimination or acts of malicious intent.
Michael Chui
To that second level that we talked about before, how you actually implement AI within a specific use case also brings to mind a set of ethical questions about how that should be done. That's as true in for-profit cases as it is in not-for-profit cases. That's as true in commercial cases as it is in AI for social good.
Simon London
Let's dive deeper on those risks then, whether you're in a for-profit or a not-for-profit environment. What are the main risks and ethical issues related to the deployment of AI in action?
Chris Wigley
One of the first we should touch on is around bias and fairness. We find it helpful to think about this in three levels, the first being bias itself. We might think about this where a data set that we're drawing on to build a model doesn't reflect the population that the model will be applied to or used for.
Chris Wigley
There have been various controversies around facial-recognition software not working as well for women, for people of color, because it's been trained on a biased data set which has too many white guys in it. There are various projects afoot to try and address that kind of issue. That's the first level, which is bias. Does the data set reflect the population that you're trying to model?
Chris Wigley
You then get into fairness which is a second level. Saying, "Look, even if the data set that we're drawing on to build this model accurately reflects history, what if that history was by its nature unfair?" An example domain here is around predictive policing. Even if the data set accurately reflects a historical reality or a population, are the decisions that we make on top of that fair?
Chris Wigley
Then the final one is [about whether the use of data is] unethical. Are there data sets and models that we could build and deploy which could just be turned to not just unfair but unethical ends? We've seen debates on this between often the very switched-on employees of some of the big tech firms and some of the work that those tech firms are looking at doing.
Chris Wigley
Different groups' definitions of unethical will be different. But thinking about it at those three levels of, one: bias. Does the data reflect the population? Two: fairness. Even if it does, does that mean that we should continue that in perpetuity? And three: unethical. "Are there things that these technologies can do which we should just never do?" is a helpful way of separating some of those issues.
Michael Chui
I think Chris brings up a really important point. We often hear about this term algorithmic bias. That suggests that the software engineer embeds their latent biases or blatant biases into the rules of the computer program. While that is something to guard against, the more insidious and perhaps more common for this type of technology is the biases that might be latent within the data sets as Chris was mentioning.
Michael Chui
Some of that comes about sometimes because it's the behavior of people who are biased and therefore you see it. Arrest records being biased against certain racial groups would be an example. Sometimes it just comes about because of the way that we've collected the data.
Michael Chui
That type of subtlety is really important. It's not just about making sure that the software engineer isn't biased. You really need to understand the data deeply if you're going to understand whether there's bias there.
Simon London
Yes, I think there's that famous example of potholes in Boston, I think it was, using the accelerometers in smartphones to identify, when people are driving, whether they go over potholes. The problem, at the time this data was collected, is that a lot of the more disadvantaged populations didn't have smartphones. So there was more data on potholes in rich neighborhoods. [The Street Bump program is not in active use by the city of Boston.]
Chris Wigley
There's a bunch of other risks that we also need to take into account. If the bias and fairness gives us an ethical basis for thinking about this, we also face very practical challenges and risks in this technology. So, for example, at QuantumBlack, we do a lot of work in the pharmaceutical industry. We've worked on topics like patient safety in clinical trials. Once we're building these technologies into the workflows of people who are making decisions in clinical trials about patient safety, we have to be really, really thoughtful about the resilience of those models in operation, how those models inform the decision making of human beings but don't replace it, so we keep a human in the loop, how we ensure that the data sources that feed into that model continue to reflect the reality on the ground, and that those models get retrained over time and so on.
Chris Wigley
In those kinds of safety critical or security critical applications, this becomes absolutely essential. We might add to this areas like critical infrastructure, like electricity networks and smart grids, airplanes. There are all sorts of areas where there is a vital need to ensure the operational resilience of these kinds of technologies as well.
Michael Chui
This topic of the safety of AI is a very hot one right now, particularly as you're starting to see it applied in places like self-driving cars. You're seeing it in healthcare, where the potential impact on a person's safety is very large.
Michael Chui
In some cases we have a history of understanding how to try to ensure higher levels of safety in those fields. Now we need to apply them to these AI technologies because many of the engineers in these fields don't understand that technology yet, although they're growing in that area. That's an important place to look in terms of the intersection of safety and AI.
Chris Wigley
And the way that some people have phrased that, which I like, is, "What is the building code equivalent for AI?" I was renovating an apartment last year. The guy comes around from the local council and says, "Well, if you want to put a glass pane in here, because it's next to a kitchen, it has to be 45-minutes fire resistant." That's evolved through 150, 200 years of various governments trying to do the right thing and ensure that people are building buildings which are safe for human beings to inhabit and minimize things like fire risk.
Chris Wigley
We're still right at the beginning of that learning curve with AI. But it's really important that we start to shape out some of those building code equivalents for bias, for fairness, for explainability, for some of the other topics that we'll touch on.
Simon London
Chris, you just mentioned explainability. Just riff on that a little bit more. What's the set of issues there?
Chris Wigley
Historically some of the most advanced machine learning and deep-learning models have been what we might call a black box. We know what the inputs into them are. We know that they usefully solve an output question like a classification question. Here's an image of a banana or of a tree.
Chris Wigley
But we don't know what is happening on the inside of those models. When you get into highly regulated environments like the pharmaceutical industry and also the banking industry and others, understanding how those models are making those decisions, which features are most important, becomes very important.
Chris Wigley
To take an example from the banking industry, in the UK the banks have recently been fined over 30 billion pounds, and that's billion with a B for mis-selling of [payment] protection insurance. When we're talking to some of the banking leaders here, they say, "Well, you know, as far as we understand it, AI is very good at responding to incentives." We know that some of the historic problems were around sales teams that were given overly aggressive incentives. What if we incentivize the AI in the wrong way? How do we know what the AI is doing? How can we have that conversation with the regulator?
Chris Wigley
We've been doing a lot of work recently around, "How can we use AI to explain what AI is doing?" As for the way that works in practice, we've just done a test of this with a big bank in Europe in a safe area. This is how the relationship managers talk to their corporate clients. What are they talking to them about?
Chris Wigley
The first model is a deep-learning model, which we call a propensity model. What is the propensity of a customer to do something, to buy a product, to stop using the service? We then have a second machine-learning model, which is querying the first model millions of times to try and unearth why it's made that decision.
Chris Wigley
It's deriving what the features are that are most important. Is it because of the size of the company? Is it because of the products they already hold? Is it because of any of hundreds of other features?
Chris Wigley
We then have a third machine-learning model, which is then translating the insights of the second model back into plain English for human beings to understand. If I'm the relationship manager in that situation, I don't need to understand all of that complexity. But suddenly I get three or four bullet points written in plain English that say, "Not just here is the recommendation of what to do, but also here's why." It's likely because of the size of that company, of the length of the relationship we've had with that customer, whatever it is, that actually A) explains what's going on in the model and B) allows them to have a much richer conversation with their customer.
Chris Wigley
Just to close that loop, the relationship manager can then feed back into the model, "Yes, this was right. This was a useful conversation, or no, it wasn't." So we continue to learn. Using AI to explain AI starts to help us to deal with some of these issues around the lack of transparency that we've had historically.
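The "AI explaining AI" pattern Chris describes can be approximated with a global surrogate model, sketched below on synthetic data: a simple, readable model is trained on the black-box model's own predictions, and its rules serve as the explanation. All names are placeholders; this is an illustration, not QuantumBlack's implementation.

# Illustrative sketch of a surrogate explainer: a readable model mimics a
# black-box "propensity" model and exposes approximate decision rules.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = MLPClassifier(max_iter=500, random_state=0).fit(X, y)

# Query the black box and fit an interpretable surrogate to its
# predictions, not to the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules are a plain-language approximation of "why".
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))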
Michael Chui
You could think about the ethical problem being, "What if we have a system that seems to work better than another one, but it's so complex that we can't explain why it works?" These deep-learning systems have millions of simulated neurons. Again trying to explain how that works is really, really difficult.
Michael Chui
In some cases, as Chris was saying, the regulator requires you to explain what happened. Take, for example, the intersection with safety. If a self-driving car makes a left turn instead of hitting the brakes and it causes property damage or hurts somebody, a regulator might say, "Well, why did it do that?"
Michael Chui
And it does call into question, "How do you provide a license?" In some cases what you want to do is examine the system and be able to understand and somehow guarantee that the technical system is working well. Others have said, "You should just give a self-driving car a driving test and then figure out." Some of these questions are very real ones as we try to understand how to use and regulate these systems.
Chris Wigley
And there's a very interesting trade-off often between performance and transparency. Maybe at some point in the future there won't be a trade-off, but at the moment there is. So we might say for a bank that's thinking about giving someone a consumer loan, we could have a black-box model, which gets us a certain level of accuracy, let's say 96, 97 percent accuracy of prediction whether this person will repay. But we don't know why. And so therefore we struggle to explain either to that person or to a regulator why we have or haven't given that person a loan.
Chris Wigley
But there's maybe a different type of model which is more explainable which gets us to 92, 93 percent level of accuracy. We're prepared to trade off that performance in order to have the transparency.
Chris Wigley
If we put that in human terms, let's say we're going in for treatment. And there is a model that can accurately predict whether a tumor is cancerous, or whether some other medical diagnosis is right or wrong. To some extent, as a human being, if we're reassured that this model is right and has been proven right in thousands of cases, we actually don't care why it knows, as long as it's making a good prediction that a surgeon can act on that will improve our health.
Chris Wigley
We're constantly trying to make these trade-offs between the situations where explainability is important and the situations where performance and accuracy are more important.
Michael Chui
Then for explainability it's partly an ethical question. Sometimes it has to do with just achieving the benefits. We've looked at some companies where they've made the trade-off that Chris suggested, where they've gone to a slightly less performant system because they knew the explainability was important in order for people to accept the system and therefore actually start to use it.
Michael Chui
Change management is one of the biggest problems in AI and other technologies to achieve benefits. And so explainability can make a difference. But as Chris also said, "That can change over time." For instance, I use a car with [anti-lock] braking systems. And the truth is I don't know how that works. And maybe earlier on in that history people were worried: "You're going to let the car brake for itself."
Michael Chui
But now we've achieved a level of comfort because we've discovered this stuff works almost all the time. We start to see that comfort change on an individual basis as well.
Simon London
I'm going to ask an almost embarrassingly nerdy management question now. Stepping away from the technology, what's our advice to clients about how to address some of these issues? Because some of this feels like it's around risk management. As you think about deploying AI, how do you manage these ethical risks, compliance risks; you could phrase it any number of different ways. What's the generalizable advice?
Michael Chui
Let me start with one piece of advice, which is as much as we expect executives to start to learn about every part of their business and maybe you're going to be a general manager, you're going to need to know something about supply chain, HR strategy, operations, sales and marketing. It is becoming incumbent on every executive to learn more about technology now.
Michael Chui
To the extent to which they need to learn about AI, they're going to need to learn more about what it means to deploy AI in an effective way. We can bring some of the historical practices—you mentioned risk management. Understanding risk is something that we've learned how to do in other fields.
Michael Chui
We can bring some of those tools to bear here when we couple that with the technical knowledge as well. One thing we know about risk management: understand what all the risks are. I think bringing that framework to the idea of AI and its ethics carries over pretty well.
Simon London
Right. So it's not just understanding the technology, but it's also at a certain level understanding the ethics of the technology. At least get in your head what are the ethical or the regulatory or the risk implications of deploying the technology.
Michael Chui
That's exactly right. Take, for example, bias. In many legal traditions around the world, understanding that there are a set of protected classes or a set of characteristics around which we don't want to actually use technology or other systems in order to discriminate.
Michael Chui
That understanding allows you to say, "Okay, we need to test our AI system to make sure it's not creating disparate impact for these populations of people." That's a concept that we can take over. We might need to use other techniques in order to test our systems. But that's something we can bring over from our management practices previously.
Chris Wigley
As a leader thinking about how to manage the risks in this area, dedicating a bit of head space to thinking about it is a really important first step. The second element of this is bring someone in who really understands it. In 2015, so three years ago now, we hired someone into QuantumBlack who is our chief trust officer.
Chris Wigley
No one at the time really knew what that title meant. But we knew that we had to have someone who was thinking about this full time as their job because trust is existential to us. What is the equivalent if you're a leader leading an organization? What are the big questions for you in this area? How can you bring people into the organization or dedicate someone in the organization who has that kind of mind-set or capabilities to really think about this full time?
Michael Chui
To build on that, I think you need to have the right leaders in place. As a leadership team, you need to understand this. But the other important thing is to cascade this through the rest of the organization, understanding that change management is important as well.
Michael Chui
Take the initiatives people had to do in order to comply with GDPR. That's something that again I'm not saying that if you're GDPR compliant, you're ethical, but think about all the processes that you had to cascade not only for the leaders to understand but all of your people and your processes to make sure that they incorporate an understanding of GDPR.
Michael Chui
I think the same thing is true in terms of AI and ethics as well. You think about everyone needs to understand a little bit about AI, and they have to understand, "How can we deploy this technology in a way that's ethical, in a way that's compliant with regulations?" That's true for the entire organization. It might start at the top, but it needs to cascade through the rest of the organization.
Chris Wigley
We also have to factor in the risk of not innovating in this space, the risk of not embracing these technologies, which is huge. I think there's this relationship between risk and innovation that is really important and a relationship between ethics and innovation.
Chris Wigley
We need an ethical framework and an ethical set of practices that can enable innovation. If we get that relationship right, it should become a flywheel of positive impact where we have an ethical framework which enables us to innovate, which enables us to keep informing our ethical framework, which enables us to keep innovating. That positive momentum is the flip side of this. There's a risk of not doing this as much as there are many risks in how we do it.
Simon London
Let's talk a little bit more about this issue of algorithmic bias, whether it's in the data set or actually in the system design. Again very practically, how do you guard against it?
Chris Wigley
We really see the answer to the bias question as being one of diversity. We can think about that in four areas. One is diversity of background of the people on a team. There's this whole phenomenon around group think that people have blamed for all sorts of disasters. We see that as being very real.
Chris Wigley
We have 61 different nationalities across QuantumBlack. We have as many or more academic backgrounds. Our youngest person is in their early 20s. Our oldest person in the company is in their late 60s. All of those elements of diversity of background come through very strongly. We were at one point over 50 percent women in our technical roles. We've dropped a bit below that as we've scaled. But we're keen to get back. Diversity of people is one big area.
Chris Wigley
The second is diversity of data. We touched on this topic of bias in the data sets not reflecting the populations that the model is looking at. We can start to understand and address those issues of data bias through diversity of data sets, triangulating one data set against another, augmenting one data set with another, continuing to add more and more different data perspectives onto the question that we're addressing.
Chris Wigley
The third element of diversity is diversity of modeling. We very rarely just build a single model to address a question or to capture an opportunity. We're almost always developing what we call ensemble models that might be a combination of different modeling techniques that complement each other and get us to an aggregate answer that is better than any of the individual models.
Chris Wigley
The final element of diversity we think about is diversity of mind-set. That can be diversity along dimensions like the Myers-Briggs Type Indicator or all of these other types of personality tests.
Chris Wigley
But we also, as a leadership team, challenge ourselves in much simpler terms around diversity. We sometimes nominate who's going to play the Eeyore role and who's going to play the Tigger role when we're discussing a decision. Framing it even in those simple Winnie the Pooh terms can help us to bring that diversity into the conversation. Diversity of background, diversity of data, diversity of modeling techniques, and diversity of mind-sets. We find all of those massively important to counter bias.
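As a small illustration of the "diversity of modeling" point (our sketch on synthetic data, not QuantumBlack's method), scikit-learn's VotingClassifier combines dissimilar model families into a single ensemble:

# Illustrative sketch: an ensemble that aggregates diverse model families.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("linear", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(random_state=0)),
        ("bayes", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities across model families
).fit(X, y)
print(ensemble.score(X, y))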
Michael Chui
So adding to the diversity points that Chris made, there are some process things that are important to do as well. One thing you can do as you start to validate the models that you've created is have them externally validated. Have someone else who has a different set of incentives check to make sure that in fact you've understood whether there's bias there and understood whether there's unintended bias there.
Michael Chui
Some of the other things that you want to do is test the model either yourself or externally for specific types of bias. Depending on where you are, there might be classes of individuals or populations that you are not permitted to have disparate impact on. One of the important things to understand there is not only is race or sex or one of these protected characteristics—
Simon London
And a protected characteristic is a very specific legal category, right? And it will vary by jurisdiction?
Michael Chui
I'm not a lawyer. But, yes, depending on which jurisdiction you're in, in some cases the law states, "You may not discriminate or have disparate impact against certain people with a certain characteristic." Ensuring that you're not discriminating or having disparate impact means more than just not having gender as one of the fields in your database.
Michael Chui
Because sometimes what happens is you have these, to get geeky, these co-correlates, these other things which are highly correlated with an indicator of a protected class. And so understanding that and being able to test for disparate impact is a core competency to make sure that you're managing for biases.
Chris Wigley
One of the big issues, once the model is up and running, is, "How can we ensure that, while we've tested it as it was being developed, it remains both accurate and unbiased in operation?" We're in the reasonably early stages of this as an industry on ensuring resilience and ethical performance in production.
Chris Wigley
But there are some simple steps, like, for example, having a process check to say, "When was the last time that this model was validated?" It sounds super simple, but people have very busy lives, and these checks can just get overlooked. It's about building in those simple process steps all the way through to the more complicated technology-driven elements of this.
Chris Wigley
We can actually have a second model checking the first model to see if it's suffering from model drift, for example. And then translate that into a very simple kind of red, amber, green dashboard of a model in performance. But a lot of this still relies on having switched-on human beings who maybe get alerted or helped by technology, but who engage their brain on the topic of, "Are these models, once they're up and running, actually still performant?"
Chris Wigley
All sorts of things can trip them up. A data source gets combined upstream and suddenly the data feed that's coming into the model is different from how it used to be. The underlying population in a given area may change as people move around. The technologies themselves change very rapidly. And so that question of how do we create resilient AI, which is stable and robust in production, is absolutely critical, particularly as we introduce AI into more and more critical safety and security and infrastructure systems.
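One simple version of the drift check Chris mentions, sketched below with assumed thresholds and synthetic data, compares a live feature feed against the training distribution with a two-sample Kolmogorov-Smirnov test and maps the result onto a red, amber, or green status:

# Illustrative sketch: a basic data-drift check on a model's input feed.
# The p-value cut-offs are assumptions chosen for illustration.
import numpy as np
from scipy.stats import ks_2samp

def drift_status(train_feature: np.ndarray, live_feature: np.ndarray) -> str:
    stat, p_value = ks_2samp(train_feature, live_feature)
    if p_value < 0.01:
        return "red"    # strong evidence the feed has shifted; alert a human
    if p_value < 0.05:
        return "amber"  # possible shift; watch closely
    return "green"

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 5000)
live = rng.normal(0.3, 1, 5000)   # an upstream change shifted the feature
print(drift_status(train, live))  # likely "red"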
Michael Chui
And the need to update models is a more general problem than just making sure that you don't have bias. It's made even more interesting when there are adversarial cases. Say, for instance, you have a system that's designed to detect fraud. People who are fraudulent obviously don't want to get detected. So they might change their behavior, understanding that the model is starting to detect certain things.
Michael Chui
And so again, you really need to understand when you need to update the model whether it's to make sure that you're not introducing bias or just in general to make sure that it's performing.
Chris Wigley
There's an interesting situation in the UK where the UK government has set up a new independent body called the Centre for Data Ethics and Innovation that is really working on balancing these things out. How can you maximize the benefits of AI to society within an ethical framework?
Chris Wigley
And the Centre for Data Ethics and Innovation, or CDEI, is not itself a regulatory body but is advising the various regulatory bodies in the UK like the FCA, which regulates the financial industry and so on. I suspect we'll start to see more and more thinking at a government and inter-government level on these topics. It'll be a very interesting area over the next couple of years.
Simon London
So AI policy broadly speaking is coming into focus and coming to the fore and becoming much more important over time.
Michael Chui
It is indeed becoming more important. But I also think that it's interesting within individual regulatory jurisdictions, whether it's in healthcare or in aviation, whether it's what happens on roads, the degree to which our existing practices can be brought to bear.
Michael Chui
So again as I said, are driving tests the way that we'll be able to tell whether autonomous vehicles should be allowed on the roads? There are things around medical licensure and how is that implicated in terms of the AI systems that we might want to bring to bear. Understanding that tradition and seeing what can be applied to AI already is really important.
Simon London
So what is the standard to which we hold AI? And how does that compare to the standard to which we hold humans?
Michael Chui
Indeed.
Chris Wigley
Absolutely. In the context of something like autonomous vehicles, that's a really interesting question. Because we know that a human population of a certain size that drives a certain amount is likely to have a certain number of accidents a year. Is the right level for allowing autonomous vehicles when it's better than that level or when it's better than that level by a factor of ten?
Chris Wigley
Or do we only allow it when we get to a perfect level? And is that ever possible? I don't think that anyone knows the answer to that question at the moment. But I think that as we start to flesh out these kinds of ethics frameworks around machine learning and AI and so on, we need to deploy them to answer questions like that in a way which various stakeholders in society really buy into.
Chris Wigley
A lot of the answers to fleshing out these ethical questions have to come from engaging with stakeholder groups and engaging with society more broadly, which is in and of itself an entire process and entire skill set that we need more of as we do more AI policy making.
Simon London
Well, thank you, Chris. And thank you, Michael, for a fascinating discussion.
Michael Chui
Thank you.
Chris Wigley
It's been great.


Source:https://www.mckinsey.com/featured-insights/artificial-intelligence/the-ethics-of-artificial-intelligence

ChatGPT, una introducción realista, por Ariel Torres

The following information is used for educational purposes only.

ChatGPT, una introducción realista

ChatGPT parece haber alcanz...