Tag: Trust

  • Trust, creativity, and collaboration are what leads to impact in the arts

    Impact in the arts is fundamentally different from other fields. It is built on relationships, trust, and long-term engagement with communities, businesses, and cultural institutions.

    Unlike traditional research models, where success is often measured through large-scale returns or policy influence, impact in the creative industries is deeply personal, embedded in real-world collaborations, and evolves over time.

    For specialist arts institutions, impact is not just about knowledge transfer – it’s about experimental knowledge exchange. It emerges from years of conversations, interdisciplinary convergence, and shared ambitions. This process is not transactional; it is about growing networks, fostering trust, and developing meaningful partnerships that bridge creative research with industry and society.

    The AHRC Impact Acceleration Account (IAA) has provided a vital framework for this work, but to fully unlock the potential of arts-led innovation, it needs to be bigger, bolder, and more flexible. The arts sector thrives on adaptability, yet traditional funding structures often fail to reflect the reality of how embedded impact happens – rarely immediate or linear.

    At the University for the Creative Arts (UCA), we have explored a new model of knowledge exchange—one that moves beyond transactional partnerships to create impact at the convergence of arts, business, culture, and technology.

    From ideas to impact

    At UCA, IAA impact has grown not through top-down frameworks, but through years of relationship-building with creative businesses, independent artists, cultural organisations, and museums. These partnerships are built on trust, long-term engagement, and shared creative exploration, rather than short-term funding cycles.

    Creative industries evolve through conversation, experimentation, and shared risk-taking. Artists, designers, filmmakers, and cultural institutions need time to test ideas, adapt, and develop new ways of working that blend creative practice with commercial and social impact.

    This approach has led to collaborations that demonstrate how arts impact happens in real time. A few examples:

    • Immersive storytelling and business models – Research in VR and interactive media is expanding the possibilities of digital storytelling, enabling new audience experiences and sustainable commercial frameworks for creative content.
    • Augmented reality and cultural heritage – Digital innovation is enhancing cultural engagement, creating interactive heritage experiences that bridge physical and virtual worlds, reinforcing cultural sustainability.
    • Sustainable design and material innovation – Design-led projects are exploring circular economy approaches in sports, fashion, and product design, shifting industry mindsets toward sustainability and responsible production.
    • Photography and social change – Research in archival and curatorial practice is reshaping how marginalised communities are represented in national collections, influencing curatorial strategies and institutional policies.

    These projects are creative interventions that bring research, industry, and social change together. We don’t just measure impact; we create it through action.

    A different model of knowledge exchange

    The AHRC IAA has provided an important platform for arts-led impact, but if we are serious about supporting creative industries as a driver of economic, cultural, and social transformation, we must rethink how impact is funded and measured. Traditional funding models often overlook the long-term, embedded collaborations that define arts impact.

    To make impact funding more effective, we need to:

    • Recognise that creative impact develops over time, often requiring years of conversation, trust-building, and iterative development.
    • Encourage risk-taking and experimentation, allowing researchers and industry partners the flexibility to develop innovative ideas beyond rigid funding categories.
    • Expand the scale and duration of support to enable long-term transformation, allowing small and specialist universities to cultivate deeper, sustained partnerships.

    In academic teaching and training, knowledge exchange must be reconsidered beyond the REF framework. Rather than focusing solely on individual research outputs, assessment frameworks should value collective impact, long-term partnerships, and iterative creative inquiry. Funding models should support infrastructure that enables researchers to develop skills in knowledge exchange, ensuring it is a fundamental pillar of academic and professional growth.

    By embedding knowledge exchange principles into creative education, we can cultivate a new generation of researchers who are not only scholars but also creative change makers, equipped to collaborate with industry, drive cultural innovation, and shape the future of the creative economy.

    A call for bigger, bolder AHRC impact funding

    UCA’s approach demonstrates how arts institutions are developing a new model of impact—one rooted in collaboration, creativity, and social change. However, for this model to thrive, impact funding must evolve to recognise and support the unique ways in which creative research generates real change.

    To keep pace with the evolving needs of cultural, creative, and technology industries, research funding must acknowledge that impact in the arts is about stories, communities, and the human connections that drive transformation. It’s time to expand our vision of what impact means – and to build a funding model that reflects the true value of the arts in shaping business, culture, and society.

  • Presidents point to drivers of declining public trust

    According to 2024 general election exit polling, 42 percent of voters with college degrees voted for now-President Donald Trump, compared to 56 percent of those without college degrees. Asked how they feel about this growing education gap in the electorate—what researchers call the diploma divide—25 percent of college and university presidents say they’re very or extremely concerned about its implications for their institution.

    More say they’re highly concerned about the growing divide’s impact on higher education in general (58 percent) and on American democracy (64 percent). That’s according to a new analysis of findings from Inside Higher Ed’s 2025 Survey of College and University Presidents, completed with Hanover Research.

    Presidents also offer a scathing review of how higher education has responded to this divide thus far: Just 3 percent think the sector has been very or extremely effective, as opposed to not at all, somewhat or moderately effective. The leaders have a similarly dismal view of how higher education is responding to declining public confidence: A mere 1 percent, rounded up, think it has been highly effective. Much larger shares of presidents think higher education has been not at all effective in responding either to the public confidence crisis or to the growing education divide in the electorate, with presidents of private nonprofit institutions especially likely to say so.

    The Diploma Divide

    Experts say that the diploma divide can’t be decoupled from the public confidence crisis, and that both have implications for the intensifying debate over, and presidential communication about, higher education’s value—especially in this political moment.

    More on the Survey

    Inside Higher Ed’s 2025 Survey of College and University Presidents was conducted with Hanover Research starting in December and running through Jan. 3. The survey included 298 presidents of two- and four-year institutions, public and private, for a margin of error of 5 percent. Download a copy of the free report here, and check out reporting on the survey’s other findings, including what presidents really think about faculty tenure and student mental health, and their expectations for the second Trump administration.

    On Wednesday, March 26, at 2 p.m. Eastern, Inside Higher Ed will present a webcast with campus leaders who will share their takes on the findings. Register for that discussion here.

    “Presidents should be making very clear and very concrete what the practical benefits of their university are, not just for the students that attend that university but for the community, the state at large,” said Joshua Zingher, an associate professor of political science and geography at Old Dominion University who studies elections and political behavior, including the diploma divide. “Thinking about the long-term development of the U.S. as a science power or a technology power is very much a story of the university.” He noted that football games at the University of Iowa, in his home state, pause after the first quarter so that fans can wave to patients in the campus children’s hospital—an example of how society depends on thriving colleges and universities, and how cuts to university research and other funding have ripple effects.

    Matt Grossman, professor of American politics and public policy and director of the Institute for Public Policy and Social Research at Michigan State University, who co-authored the 2024 book Polarized by Degrees: How the Diploma Divide and the Culture War Transformed American Politics, agreed there is reason for presidents to be concerned about the diploma divide, in that the “analogies are not great.” Just think of the politically polarized trust in so-called mainstream media, an institution in which both Democrats and Republicans were once largely confident.

    But whereas Zingher said that presidents might have to “take a position” at some point, even if many loathe being seen as political figures, Grossman pointed to existing public polling linking declining confidence to concerns about ideological bias within institutions, at least among Republicans. So Grossman said he was surprised by how few presidents in IHE’s annual survey most attribute declining trust to concerns about ideological bias (11 percent). About double that share say concerns about ideological bias are very or extremely valid (22 percent).

    Grossman explained that higher education has always been culturally liberal, but as social and cultural issues become more central to how people vote, it’s harder for institutions to “be above the fray.” Indeed, higher education is now a wedge issue. As for how campus leaders should respond to the diploma divide, Grossman said, “The first step would be a realization that they know that they are facing these complaints.”

    Presidents of private nonprofit institutions are somewhat more likely than their public counterparts to express the highest level of concern about the divide’s impact, including on higher education in general. Region also appears to matter, with presidents in the South least likely to worry about the divide. Regarding its impact on American democracy, for example, some 45 percent of presidents in the South are very or extremely worried, versus 62 percent of those in the Midwest, 73 percent of those in the West and 75 percent in the Northeast.

    The widening diploma divide means that voters without a college degree are increasingly likely to vote Republican and those with a degree are increasingly likely to vote Democratic. With the Republican Party growing more critical of higher education, this has real consequences for college and university missions and budgets.

    But Keith Curry, president of Compton College and chief executive of the Compton Community College District, emphasized that educating students, including about voting, transcends politics: “It’s important that as leaders we’re bipartisan, and to focus on helping students register to vote and participate in the [democratic] process. They have to understand the issues and how to gather the information. They make their own decisions.”

    For what it’s worth, faculty members in a fall poll by IHE and Hanover overwhelmingly said that they planned to encourage students to vote in the 2024 election. But just 2 percent planned to tell students to vote for a particular candidate or party.

    Jay Akridge, trustee chair in teaching and learning excellence, professor of agricultural economics and former provost at Purdue University, offered a slightly different take. Calling the diploma divide “concerning,” he said it might “make higher ed think more about students with parents who did not go to college and how to better serve this group of first-generation students.”

    The Value Debate

    If not concerns about ideological bias, to what do presidents most attribute declining public confidence in higher education?

    From a list of survey options, the plurality (49 percent) cite concerns about the value of a college education and/or whether college is worth it. A less common choice: concerns about lack of affordability, including high tuition (18 percent). And very few presidents point to concerns about whether colleges are adequately preparing students for the workforce (7 percent).

    Some differences emerge by institution type, with public presidents more likely to cite concerns about whether college is worth it than their private nonprofit peers (54 percent versus 43 percent, respectively). But presidents of private nonprofits are somewhat more likely to blame concerns about affordability (22 percent versus 15 percent of public institution presidents).

    As for whether presidents think that such concerns are actually founded, half say that concerns about affordability are very to extremely valid, with presidents at public institutions (57 percent) significantly more likely to say so than those at private nonprofits (39 percent).

    And while very few presidents over all (1 percent) most attribute declining public confidence in higher education to concerns about equity, including access and outcomes for historically underrepresented groups, a quarter (26 percent) think that such concerns are highly valid. The same goes for higher education being disconnected from society (24 percent say this is highly valid)—something that’s arguably linked to the diploma divide, as well.

    Just 15 percent of presidents say the value question is highly valid. Some 40 percent say it’s not at all valid, while an additional 46 percent rate it as somewhat or moderately valid.

    In IHE’s 2024 Survey of College and University Chief Business Officers with Hanover, 94 percent of CBOs somewhat or strongly agreed that their institution offers good value for what it charges for an undergraduate degree. Just 9 percent of CBOs said their institution charges too much for an undergraduate degree.

    As for the student perspective, in IHE’s 2024 Student Voice survey series, most current two- and four-year students agreed that they’re getting a valuable education. But they were much less likely to agree that their college was affordable.

    Martha Snyder, partner at HCM Strategists, says the education firm’s own U.S. polling and other research has found a general, even bipartisan belief “that education beyond high school in some form or fashion is necessary and important for longer-term economic viability, prosperity and longer-term job security.” But—similar to the Student Voice findings—the “disconnect tends to be in accessibility and affordability.” That is, even as Americans may understand the long-term value of higher education, it is undercut by the immediate challenges of paying for it—especially when weighed against the opportunity cost of not working, or perhaps not working as much, while pursuing a degree.

    Snyder says this also points to a need for institutional transparency on cost of attendance and for better presidential communication as to why higher education works the way it does.

    “Think about the notion of a credit hour, right? The complex way that pricing happens is not easily understood by students and families. And even though net price has fallen, well, what is net pricing?” she said. “So there’s another disconnect in how we are communicating the information we’re providing to individuals about the opportunities, about the pathways and about what the end result is, in terms of career opportunities and career advancement.”

    Akridge, of Purdue, also noted the gap between the relatively large share of presidents who think concerns about the value of a degree are driving declining public confidence and the relatively small share who point to concerns about whether or not colleges are adequately preparing students for the workforce, as these two points are connected. Moreover, he said, there “are plenty of valid questions raised by employers about whether or not college graduates are ready for the work world.”

    In just one example, a recent survey of U.S. employees and human resources leaders by Hult International Business School found that 85 percent of recent graduates wish their college had better prepared them for the workplace, and 75 percent of HR leaders say most college educations aren’t preparing people at all for their jobs. There’s a lot to mine here, some of it probably generational (Gen Z employees aren’t necessarily managers’ favorites, and they have their own expectations about work).

    Employer-led skills training has long been on the decline, as well. In any case, Akridge said that given employer perceptions about lack of preparation, “presidents are missing an opportunity—the so-called skills gap is an issue they can take action to close. And this is an issue where such actions will be well received by the public and will make a great story to tell.”

    Akridge and David Hummels, professor of economics and dean emeritus at Purdue, last fall launched “Finding Equilibrium: Two Economists on Higher Ed’s Future,” a Substack newsletter seeking to inform the value conversation. It has offered a number of ideas for improving the career readiness of college graduates, including elevating teaching and learning as a priority through curricular and co-curricular design, innovation and delivery; rethinking organizational structures and student support with a focus on career readiness; and strengthening connections and feedback loops with employers. Akridge and Hummels have also written about how the economic case for college remains strong and how the price students actually pay to attend college has fallen.

    Hummels told Inside Higher Ed that presidents are especially well positioned to share this kind of information with the public, to address the value debate head-on: “They are not passive actors. They need to get out in their communities and around their states, talking to high schools and chambers of commerce and the like, making the case that college is affordable with grant aid. That the return on college is large and positive when you take challenging courses of study and make the most of co-curricular opportunities.”

    The big asterisk here is that completion rates hover in the mid–60 percent range for four-year institutions. Students pursuing more expensive college options but moving into lower-wage jobs is another problem. So it’s also “clear higher ed does not work for everyone,” Akridge said. “We don’t create value for all students.” And how to get better remains “an essential question.”

    More on Affordability—and the Diploma Divide

    Curry, president of Compton College, said he has no doubts about higher education’s value, but that affordability is a highly valid concern at his institution.

    “We have students who are thinking about, ‘Do I buy a book for math class, or do I get food?’ They have to make some real decisions based off of their current finances about going to college. It is not just the tuition cost. It is the total cost of education—what does that look like?”

    Similarly, students are weighing the cost of working versus going to college. This means that they have to be able to see higher education’s value in real time, Curry said. One way the college is helping students understand this is with program maps that list careers, salaries and other opportunities connected to various areas of study.

    For Hummels, affordability also points right back to the diploma divide in terms of future funding for higher education. If a majority of voters without a college education vote for one party and express a growing conviction that college is not worth it, he said, “then it becomes easier to cut back on Pell Grants, on subsidized student loans, on state support for universities.”

    The impacts of these cuts would be felt most strongly by lower-income and lower-education households, he continued, and “the lack of support becomes a self-fulfilling prophecy. College will become out of reach for these households.”

  • Six ways to build trust between college presidents and students

    A May 2024 Student Voice survey found 28 percent of college students say they have “not much trust” in their president and other executive-level officials, which was 18 percentage points higher than students’ distrust in professors and 13 percentage points higher than their distrust in academic department leaders.

    An additional 19 percent of students said they were not sure whether they trust their president, leaving 52 percent of students indicating they have at least some trust in their campus executives.

    Students at private nonprofit institutions were most likely to say they did not have much trust in their president (48 percent), compared to their public four-year peers (30 percent) or those at two-year institutions (18 percent).

    “Trust is in very short supply on campuses. We do not see deeply trusting environments on campus very quickly,” said Emma Jones, executive vice president and owner of higher education consulting group Credo, in a Jan. 29 webinar by the Constructive Dialogue Institute. “By and large, I find campus leaders to have incredibly trustworthy behavior … but they are not trusted in their environments.”

    Institutional leaders can employ a variety of strategies and tactics to gain greater trust.

    Creating a foundation: A 2024 report from the American Council on Education found presidents are in agreement that trust building is a key competency for being a campus leader. Presidents told researchers they need to be present with their constituents, create opportunities for various stakeholders to share their views on issues related to the institution and surround themselves with diverse voices, according to the report.

    In the webinar, experts shared what they believe helps build trust between executive-level administrators and the students they serve.

    • Demonstrate care. Humanity is a key factor in trust: recognizing each person’s uniqueness and building relationships with them, Jones explained. It is especially important now for campus leaders to see and acknowledge people for their humanity.
    • Watch your tone. Generic or trite messages that convey a lack of empathy do not build trust among community members, said Darrell P. Wheeler, president of the State University of New York at New Paltz. Instead, transparent and authentic communication, even when the answer is “I don’t know,” can help build trust in uncertain times, Jones said.
    • Engage in listening. “People want you to be compassionate, but they really want to have their own space at times to be able to express where they are [and] not for you to overshadow it by talking about yourself in that moment,” Wheeler said during the webinar.
    • Create space to speak with students. Attending events to listen to students’ concerns or having opportunities for students to engage in meetings can show attentive care, Victoria Nguyen, a teaching fellow at Harvard’s Graduate School of Education, told Inside Higher Ed.
    • Foster healthy discourse. While presidents should strive to be trusted among their community members, too much trust can be just as destructive as too much distrust, Hiram Chodosh, president of Claremont McKenna College in California, said in the webinar.
    • Trust yourself. Earning trust requires self-trust, Chodosh said, so presidents should also seek to cultivate their own trustworthiness.

    Presidential Engagement: College presidents can step outside their offices and better engage with learners. Here are three paths they are taking.

    1. Being visible on campus. Creating opportunities for informal conversation can address students’ perceptions of the president and assist in trust building. Some presidents navigate campus in a golf cart to allow for less structured interactions with students. The University of South Alabama president participates in recruitment trips with high schoolers, introducing himself early.
    2. Hosting office hours. Wheeler of SUNY New Paltz hosts presidential office hours for students once a month in which they can sit down for coffee and chat with him. Students can sign up with a QR code and discuss whatever they feel called to share. At King’s University in Ontario, the dean of students hosts drop-in visits across campus, as well.
    3. Giving students a peek behind the curtain. Often, colleges will invite students to participate as a trustee or a board member, giving them a voice and a seat at the table. Hood College allows one student to be president for a day and engage in ceremonial duties and meetings the president would typically hold.


  • There is declining trust in Australian unis. Federal government policy is a big part of the problem

    As we head towards the federal election, both sides of politics are making a point of criticising universities and questioning their role in the community.


  • Survey gauges whom college students trust most

    Undergraduates’ level of trust in their institution has been positively linked to individual student outcomes, as well as the broader institutional culture and reputation. So trust matters. And a new analysis of data from Inside Higher Ed’s annual Student Voice survey with Generation Lab shows which groups of campus employees students trust the most—and least—to promote an enriching experience.

    Asked to rate their level of trust in the people in various roles across campus to ensure that they and other students have a positive college experience, nearly nine in 10 students have some (43 percent) or a lot (44 percent) of trust in professors. This is consistent across institution size, classification (both two-year and four-year) and sector, though students at private nonprofit institutions are somewhat more likely than their peers at public institutions to say they have the highest level of trust in their professors (51 percent versus 42 percent, respectively).

    Methodology

    Nearly three in 10 respondents (28 percent) to Inside Higher Ed’s annual Student Voice survey, fielded in May 2024 in partnership with Generation Lab, attend two-year institutions, and closer to four in 10 (37 percent) are post-traditional students, meaning they attend two-year institutions and/or are 25 or older. The 5,025-student sample is nationally representative. The survey’s margin of error is 1.4 percent.

    Other highlights from the full survey and from follow-up student polls on key issues can be found here, while the full main survey data set, with interactive visualizations, is available here. In addition to questions about academic life, the main annual survey asked questions on health and wellness, the college experience, and preparation for life after college.

    Trust in professors is also relatively consistent across a swath of student characteristics, including gender, household income level and even political affiliation, with 47 percent and 44 percent of Democratic- and Republican-identifying students, respectively, having a lot of trust in them. By race, however, Black students (32 percent) are less likely to say they have a lot of trust in professors than are white (47 percent), Asian American and Pacific Islander (42 percent), and Hispanic students (41 percent).

    Academic advisers come next in the list of which groups students trust a lot (36 percent), followed by campus safety and security officers (32 percent). The trust in security is perhaps surprising, given heightened concerns about overpolicing in the U.S., but some general public opinion polling—including this 2024 study by Gallup—indicates that confidence in policing is up year over year. That’s as confidence in other institutions (including higher education) remains at a low. In a 2022 Student Voice survey, undergraduates were about equally likely to have a lot of trust in campus safety officers.

    Toward the bottom of the list of campus groups students trust a lot is financial aid staff (23 percent). This finding may be influenced by the tenor of national conversations about college costs and value, as well as last year’s chaotic Free Application for Federal Student Aid overhaul. Revised national data suggests that the FAFSA mess did not have the negative impact on enrollment that was feared. But another Inside Higher Ed/Generation Lab flash survey in 2024 found that a third of students disapproved of the way their institution communicated with them about the changes, with lower-income students especially likely to say this communication had been poor.

    Victoria Nguyen, a teaching fellow at Harvard’s Graduate School of Education and a program coordinator in the Office for Community Conduct at the university, recalls worrying about the financial aid process during her undergraduate years. “The issue is transparency and understanding … Did my scholarship go through? Are they going to reimburse me [for tuition paid]? … It’s not a lack of trust, but since there’s no transparency it feels as though financial aid staff does not have that care,” says Nguyen, who earned her bachelor of science degree in 2023.

    At the very bottom of the trust hierarchy are presidents and other executive-level college and university leaders, with just 18 percent of students expressing a lot of trust in this group. It’s been a tough few semesters for college leaders, with presidents, in particular, in the hot seat—including before Congress—over their responses to campus dynamics surrounding the war in Gaza. And those current tensions aside, the presidency appears to be getting harder and harder to hold on to, with average tenures shrinking.

    In any case, the newly released Student Voice data shows that students, too, may be losing faith in presidents and other senior leaders. These findings are relatively consistent across institution and student type.

    Closing the Presidential Trust Gap

    One recent study that sought to identify essential competencies for any modern college president ranked trust-building No. 1 in a list of seven that emerged from focus groups and surveys of presidents themselves: Some 96 percent emphasized that presidents need to behave “in a way that is trustworthy, consistent and accountable.”

    Jorge Burmicky, assistant professor of higher education leadership and policy studies at Howard University and co-author of that study, says that while this particular survey item on trust-building was drafted without a specific population in mind, presidents in focus groups emphasized the importance of building trust with students, as well as with faculty members. Participants’ ideas for building trust included bringing campus stakeholders into decision-making processes, minimizing surprises, supporting shared governance and showing consistency by aligning actions with personal and institutional values. Respondents also identified listening to and understanding the needs of various campus groups as a related, critical skill.

    Presidents “shared that it was important for them to maintain visibility on campus and that they often took time to visit with students as a way of staying connected to their campus,” Burmicky notes. He also encourages further study on what students—not just presidents—think about core competencies for presidents and means of building trust, including and perhaps especially around communication. Some presidents in his study shared feelings of frustration that students were not reading weekly or monthly presidential newsletters, and he advises that presidents develop trust in a way that works for their campus. Town hall–style gatherings might work in smaller settings, but not others, for instance.

    “There is clearly a perception gap between students and presidents on important issues such as trust-building and feeling heard,” he says. “Presidents ought to reach students where they’re at by using outlets that are relevant to their day-to-day lives,” such as social media or athletic events.

    Nguyen of Harvard would like to see college presidents showing care by attending more events where they can listen to students’ concerns, such as student organization meetings and workshops, or meetings of task forces that include students. Leaders’ “presence in the room matters so much more than they think,” she says.

    Tone and authenticity are additional considerations: Generic messages “do not resonate with most people as they lack empathy, as expressed by our participants,” says Burmicky.

    Nguyen adds that campus leaders should assess their communication to ensure they’re not “using tactics from 20 years ago that don’t match our student population anymore.”

    Faculty ‘Trust Moves’

    Another study published last month shed new light on the concept of student-faculty trust, seeking to better understand how students perceive its value. The study, involving hundreds of engineering students in Sweden, identified showing care and concern as the most important trust-building approach for professors. Teaching skills also mattered.

    Co-author Rachel Forsyth, of Lund University, explains that students “seem to want to have confidence that the teacher knows what they are talking about, is able to communicate their ideas and will attempt to build an effective relationship with them.” Student participants indicated that they could learn without trust, “but that the process felt more effective if it were present and that they had more options in terms of supporting that learning and extending their engagement with the materials.”

    The question of faculty trust is only gaining urgency with the rise of artificial intelligence–powered teaching tools, she adds.

    Peter Felten, executive director of the Center for Engaged Learning, professor of history and assistant provost for teaching and learning at Elon University, notes that prior research in this area has defined trust as both “students’ willingness to take risks based on their judgment that the teacher is committed to student success” (original study here) and as “the perception that the instructor understands the challenges facing students as they progress through the course, accepts students for who they are and cares about the educational welfare of students.”

    Felten says that his own research—completed with Forsyth and involving experienced faculty members teaching large science, engineering, technology and math courses—found there are four categories of “trust moves” faculty can make in their teaching:

    1. Cognition, or showing knowledge, skill and competence
    2. Affect, or showing care and concern for students
    3. Identity, or showing sensitivity to how identities influence learning and teaching
    4. Values, or showing that they are acting on professional or cultural principles

    These trust moves, Felten says, include “not only what instructors do and say, but how they design their courses, how they assess students and more.”

    What do you do to build trust in your classroom or on your campus? Let us know by sharing your ideas here.

  • A crisis of trust in the classroom (opinion)

    A crisis of trust in the classroom (opinion)

    It was the day after returning from Thanksgiving break. I’d been stewing that whole time over yet another case of cheating, and I resolved to do something about it. “Folks,” I said, “I just can’t trust you anymore.”

    After a strong start, many of the 160 mostly first-year students in my general education course had become, well, challenging. They’d drift in and out of the classroom. Many just stopped showing up. Those who did were often distracted and unfocused. I had to ask students to stop watching movies and to not play video games. Students demanded time to talk about how they were graded unfairly on one assignment or another but then would not show up for meetings. My beleaguered TAs sifted through endless AI-generated nonsense submitted for assignments that, in some cases, asked only for a sentence or two of wholly unsubstantiated opinion. One student photoshopped himself into a picture of a local museum rather than visiting it, as required by an assignment. I couldn’t even administer a simple low-stakes, in-class pen-and-paper quiz without a third of the students miraculously coming up with the same verbatim answers. Were they cheating? Somehow using AI? Had I simplified the quiz so much that these were the only possible answers? Had I simply become a victim of my own misplaced trust?

    I meant that word, “trust,” to land just so. For several weeks we had been surveying the history of arts and culture in Philadelphia. A key theme emerged concerning whether or not Philadelphians could trust culture leaders to put people before profit. We talked about the postwar expansion of local universities (including our own), the deployment of murals during the 1980s as an antigraffiti strategy and, most recently, the debate over whether or not the Philadelphia 76ers should be allowed to build an arena adjacent to the city’s historic Chinatown. In each case we bumped into hard questions about who really benefits from civic projects that supposedly benefit everyone.

    So, when I told my students that I couldn’t trust them anymore, I wanted them to know that I wasn’t just upset about cheating. What really worried me was the possibility that our ability to trust one another in the classroom had been derailed by the same sort of crass profiteering that explains why, for instance, so many of our neighbors’ homes get bulldozed and replaced with cheap student apartments. That in a class where I’d tried to teach them to be better citizens of our democracy, to discern public good from private profit, to see value in the arts and culture beyond their capacity to generate revenue, so many students kept trying to succeed by deploying the usual strategies of the profiteer—namely cheating and obfuscation.

    But could any of them hear this? Did it even matter? How many of my students, I wondered, would even show up if not for a chance to earn points? Maybe to them class is just another transaction. Like buying fries at the food truck and hoping to get a few extra just for waiting patiently?

    I decided to find out.

    With just a few sessions remaining, I offered everyone a choice: Pick Path A and I’d instantly give you full credit for all of the remaining assignments. All you had to do was join me for a class session’s worth of honest conversation about how to build a better college course. Pick Path B and I’d give you the same points, but you wouldn’t even have to show up! You could just give up, no questions asked, and not even have to come back to class. Just take the fries—er, the points—and go.

    The nervous chatter that followed showed me that, if nothing else, my offer got their attention. Some folks left immediately. Others gathered to ask if I was serious: “I really don’t have to come back, and I’ll still get the points?!” I assured them that there was no catch. When I left the room, I wondered if anyone would choose Path A. Later that day, I checked the results: Nearly 50 students had chosen to return. I was delighted!

    But how to proceed? For this to work I needed them to tell me what they really thought, rather than what they supposed I wanted to hear. My solution was an unconference. When the students returned, I’d ask each of them to take two sticky notes. On one they’d write something they loved about their college courses. On the other, they’d jot down something that frustrated them. The TAs and I would then stand at the whiteboard and arrange the notes into a handful of common themes. We’d ask everyone to gravitate toward whatever theme interested them most, gather with whomever they met there and then chat for a while about ways to augment the good and eliminate the bad. I’d sweep in toward the end to find out what everyone had come up with.

    So, what did I learn? Well, first off, I learned to temper my optimism. Although 50 students selected Path A, only 40 showed up for the discussion. And then about half of those folks opted to leave once they were entirely convinced that they could not earn additional points by remaining. To put it in starker terms, I learned that—in this instance—only about 15 percent of my students were willing to attend a regularly scheduled class if doing so didn’t present some specific opportunity for earning points toward their grades. Which is also to say that more than 85 percent of my students were content to receive points for doing absolutely nothing.

    There are many reasons why students may or may not have chosen to come back. The size of this sample, though, convinces me that college instructors are contending with dire problems related to how a rising generation of students understands learning. These are not problems that can be beaten back with new educational apps or by bemoaning AI. They are rather problems concerning citizenship, identity and the commodification of everything. They reflect a collapse of trust in institutions, knowledge and the self.

    I don’t fault my students for mistrusting me or the systems that we’ve come to rely on in the university. I too am skeptical about the integrity of our nation’s educational landscape. The real problem, however, is that the impossibility of trusting one another means that I cannot learn in any reliable way what the Path B students need for this situation to change.

    I can, however, learn from the Path A students, and one crucial lesson is that they exist. That is very good news! I learned, too, that the “good” students are not always the good students. The two dozen students who stuck it out were not, by and large, the students I expected to remain. I’d say that just about a third of the traditionally high-performing students came back without incentive. It’s an important reminder to all of us that surviving the classroom by teaching to only those students who appear to care is a surefire way to alienate others who really do.

    Some of what the Path A students taught me I’ve known for a long time. They react very favorably, for instance, to professors who make content immediate, interesting and personal. They feel betrayed by professors who read from years-old PowerPoints and will sit through those courses in silent resentment. Silence, in fact, appeared as a theme throughout our conversation. Many students are terrified to speak aloud in front of people they do not know or trust. They are also unsure about how to meet people or how to know if the people they meet can be trusted. None of us should be surprised that trust and communication are entwined. Thinking more fully about how they get bound up with the classroom will, for me, be a critical task going forward.

    I learned also that students appreciate an aspect of my teaching that I absolutely detest: They love when I publicly call out the disrupters and the rule breakers. They like it, that is, when I police the classroom. From my standpoint, having to be the heavy feels like a pedagogical failure. My sense is that a well-run classroom should prevent most behavior problems from occurring in the first place. Understandably, committed students appreciate when I ensure a fair and safe learning environment. But I have to wonder whether the Path A students’ appetite for schadenfreude reflects deeper problems: an unwillingness to confront difficulty, a disregard for the commonwealth, an immoderate desire for spectacle. Teaching is always a performance. But maybe what meanings our performances convey aren’t always what we think.

    By far, though, the most striking and maybe most troubling lesson I gathered during our unconference was this: Students do not know how to read. Technically they can understand printed text, and surely more than a few can do better than that. But the Path A students confirmed my sense that many, if not a majority, of my students were unable to reliably discern key concepts and big-picture meaning from, say, a 20-page essay written for an educated though nonspecialist audience. I’ve experienced this problem elsewhere in my teaching, and so I planned for it this time around by starting very slow. Our first reading was a short bit of journalism; the second was an encyclopedia entry. We talked about reading strategy and discussed methods for wrangling with difficult texts. But even so, I pretty quickly hit their limit. Weekly reading quizzes and end-of-week writing assignments called “connect the dots” showed me that most students simply could not.

    Concerns about declining literacy in the classroom are certainly not new. But what struck me in this moment was the extent to which the Path A students were fully aware of their own illiteracy, how troubled they were by it and how betrayed they feel by former teachers who assured them they were ready for college. During our discussion, students expressed how relieved they were when, late in the semester, I relented and substituted audio and video texts for planned readings. They want help learning how to read but are unsure of where or how to get it. There is a lot of embarrassment, shame and fear associated with this issue. Contending with it now must be a top priority for all of us.

    I learned so much more from our Path A unconference. In one of many lighthearted moments, for instance, we all heard from some international students about how “bonkers” they think the American students are. We’ve had a lot of laughs this semester, in fact, and despite the challenges, I’ve really enjoyed the work. But knowing what the work is, or needs to be, has never been harder. I want my students to see their world in new ways. They want highly individualized learning experiences free of confrontation and anxiety. I offer questions; they want answers. I beg for honesty; they demand points.

    Like it or not, cutting deals for points means that I’m stuck in the same structures of profit that they are. But maybe that’s the real lesson. Sharing something in common, after all, is an excellent first step toward building trust. Maybe even the first step down a new path.

    Seth C. Bruggeman is a professor of history and director of the Center for Public History at Temple University.

  • How You Will Never Be Able to Trust Generative AI (and Why That’s OK)

    How You Will Never Be Able to Trust Generative AI (and Why That’s OK)

    In my last post, I introduced the idea of thinking about different generative AI models as coworkers with varying abilities as a way to develop a more intuitive grasp of how to interact with them. I described how I work with my colleagues Steve ChatGPT, Claude Anthropic, and Anna Bard. This analogy can hold (to a point) even in the face of change. For example, in the week since I wrote that post, it appears that Steve has finished his dissertation, which means that he’s catching up on current events to be more like Anna and has more time for long discussions like Claude. Nevertheless, both people and technologies have fundamental limits to their growth.

    In this post, I will explain “hallucination” and other memory problems with generative AI. This is one of my longer ones; I will take a deep dive to help you sharpen your intuitions and tune your expectations. But if you’re not up for the whole ride, here’s the short version:

    Hallucinations and imperfect memory problems are fundamental consequences of the architecture that makes current large language models possible. While these problems can be reduced, they will never go away. AI based on today’s transformer technology will never have the kind of photographic memory a relational database or file system can have. When vendors tout that you can now “talk to your data,” they really mean talk to Steve, who has looked at your data and mostly remembers it.

    You should also know that the easiest way to mitigate this problem is to throw a lot of carbon-producing energy and microchip-cooling water at it. Microsoft is literally considering building nuclear reactors to power its AI. Its global water consumption has spiked 34 percent since its AI push began, to 1.7 billion gallons.

    This brings us back to the coworker analogy. We know how to evaluate and work with our coworkers’ limitations. And sometimes, we decide not to work with someone or hire them for a particular job because the fit is not good.

    While anthropomorphizing our technology too much can lead us astray, it can also provide us with a robust set of intuitions and tools we already have in our mental toolboxes. As my science geek friends say, “All models are wrong, but some are useful.” Combining those models or analogies with an understanding of where they diverge from reality can help you clear away the fear and the hype to make clear-eyed decisions about how to use the technology.

    I’ll end with some education-specific examples to help you determine how much you trust your synthetic coworkers with various tasks.

    Now we dive into the deep end of the pool. When working on various AI projects with my clients, I have found that this level of understanding is worth the investment for them because it provides a practical framework for designing and evaluating immediate AI applications.

    Are you ready to go?

    How computers “think”

    About 50 years ago, scholars debated whether and in what sense machines could achieve “intelligence,” even in principle. Most thought they could eventually sound pretty clever and act rather human. But could they become sentient? Conscious? Do intelligence and competence live as “software” in the brain that could be duplicated in silicon? Or is there something about them that is fundamentally connected to the biological aspects of the brain? While this debate isn’t quite the same as the one we have today around AI, it does have relevance. Even in our case, where the questions we’re considering are less lofty, the discussions from back then are helpful.

    Philosopher John Searle famously argued against strong AI in an argument called “The Chinese Room.” Here’s the essence of it:

    Imagine sitting in a room with two slots: one for incoming messages and one for outgoing replies. You don’t understand Chinese, but you have an extensive rule book written in English. This book tells you exactly how to respond to Chinese characters that come through the incoming slot. You follow the instructions meticulously, finding the correct responses and sending them out through the outgoing slot. To an outside observer, it looks like you understand Chinese because the replies are accurate. But here’s the catch: you’re just following a set of rules without actually grasping the meaning of the symbols you’re manipulating.

    This is a nicely compact and intuitive explanation of rule-following computation. Is the person outside the room speaking to something that understands Chinese? If so, what is it? Is it the man? No, we’ve already decided he doesn’t understand Chinese. Is it the book? We generally don’t say books understand anything. Is it the man/book combination? That seems weird, and it also doesn’t account for the response. We still have to put the message through the slot. Is it the man/book/room? Where is the “understanding” located? Remember, the person on the other side of the slot can converse perfectly in Chinese with the man/book/room. But where is the fluent Chinese speaker in this picture?

    If we carry that idea forward to today, however much “Steve” may seem fluent and intelligent in your “conversations,” you should not forget that you’re talking to man/book/room.

    Well. Sort of. AI has changed since 1980.

    How AI “thinks”

    Searle’s Chinese room book evokes algorithms. Recipes. For every input, there is one recipe for the perfect output. All recipes are contained in a single bound book. Large language models (LLMs)—the basis for both generative AI and semantic search engines like Google—work somewhat differently. They are still Chinese rooms. But they’re a lot more crowded.

    The first thing to understand is that, like the book in the Chinese room, a large language model is a large model of a language. LLMs don’t even “understand” English (or any other language) at all. They convert words into their native language: math.

    (Don’t worry if you don’t understand the next few sentences. I’ll unpack the jargon. Hang in there.)

    Specifically, LLMs use vectors. Many vectors. And those vectors are managed by many different “tensors,” which are computational units you can think of as people in the room handling portions of the recipe. They do each get to exercise a little bit of judgment. But just a little bit.

    Suppose the card that came in the slot of the room had the English word “cool” on it. The room has not just a single worker but billions, or tens of billions, or hundreds of billions of them. (These are the tensors.) One worker has to rate the word on a scale from -10 to 10 according to where “cool” falls between “cold” and “hot.” It doesn’t know what any of these words mean. It just knows that “cool” is a -7 on that scale. (This is the “vector.”) Maybe that worker, or maybe another one, also has to evaluate where it falls on the scale from “bad” to “good.” It’s maybe a 5.

    We don’t yet know whether the word “cool” on the card refers to temperature or sentiment. So another worker looks at the word that comes next. If the next word is “beans,” then it assigns a higher probability that “cool” is on the “good/bad” scale. If it’s “water,” on the other hand, it’s more likely to be temperature. If the next word is “your,” it could be either, but we can begin to guess the next word. That guess might be assigned to another tensor/worker.

    Imagine this room filled with a bazillion workers, each responsible for scoring vectors and assigning probabilities. The worker who handles temperature might think there’s a 50/50 chance the word is temperature-related. But once we add “water,” all the other workers who touch the card know there’s a higher chance the word relates to temperature rather than goodness.
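    The scoring-and-context picture above can be made concrete with a toy sketch. This is purely illustrative Python: the scores, the `WORD_VECTORS` table, and the `interpret_cool` rule are all invented for this example, whereas a real transformer learns its weights rather than using hand-coded numbers.

```python
# Toy sketch of the "workers scoring vectors" analogy. Each word gets two
# hand-made scores: temperature (-10 cold .. 10 hot) and sentiment
# (-10 bad .. 10 good). A context word then nudges our belief about which
# scale "cool" is using. All numbers are invented for illustration.

WORD_VECTORS = {
    "cool":  (-7, 5),   # coolish temperature, mildly positive sentiment
    "water": (-3, 0),   # strongly temperature-flavored word
    "beans": (0, 2),    # sentiment-flavored word ("cool beans")
}

def interpret_cool(next_word):
    """Return the probability that 'cool' refers to temperature, given context."""
    p_temperature = 0.5  # prior: 50/50 between temperature and sentiment
    temp_score, sent_score = WORD_VECTORS.get(next_word, (0, 0))
    # A context word with a strong temperature score pushes toward temperature;
    # a strong sentiment score pushes toward sentiment.
    p_temperature += 0.05 * (abs(temp_score) - abs(sent_score))
    return max(0.0, min(1.0, p_temperature))

print(interpret_cool("water"))  # leans toward temperature
print(interpret_cool("beans"))  # leans toward sentiment
```

    An unknown context word leaves the 50/50 prior untouched, which mirrors the workers' situation before any disambiguating card arrives.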

    The large language models behind ChatGPT have hundreds of billions of these tensor/workers handing off cards to each other and building a response.

    This is an oversimplification because both the tensors and the math are hard to get exactly right in the analogy. For example, it might be more accurate to think of the tensors working in groups to make these decisions. But the analogy is close enough for our purposes. (“All models are wrong, but some are useful.”)

    It doesn’t seem like it should work, does it? But it does, partly because of brute force. As I said, the bigger LLMs have hundreds of billions of workers interacting with each other in complex, specialized ways. Even though they don’t represent words and sentences in any form that we might intuitively recognize as “understanding,” they are uncannily good at interpreting our input and generating output that looks like understanding and thought to us.

    How LLMs “remember”

    The LLMs can be “trained” on data, which means they store information like how “beans” vs. “water” modify the likely meaning of “cool,” what words are most likely to follow “Cool the pot off in the,” and so on. When you hear AI people talking about model “weights,” this is what they mean.

    Notice, however, that none of the original sentences are stored anywhere in their original form. If the LLM is trained on Wikipedia, it doesn’t memorize Wikipedia. It models the relationships among the words using combinations of vectors (or “matrices”) and probabilities. If you dig into the LLM looking for the original Wikipedia article, you won’t find it. Not exactly. The AI may become very good at capturing the gist of the article given enough billions of those tensor/workers. But the word-for-word article has been broken down and digested. It’s gone.
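    One way to see what “weights without the original text” means is a toy bigram model: train it on a sentence and it stores only next-word counts, from which the exact sentence cannot be reconstructed. This is a hugely simplified stand-in for real LLM training, with illustrative names and data throughout.

```python
# Toy "training": learn which word tends to follow which in a tiny corpus.
# The resulting counts play the role of model weights -- the original
# word-for-word sentence cannot be recovered from them, only its statistical gist.
from collections import Counter, defaultdict

corpus = "cool the pot off in the water then cool the beans in the pot".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # the "weight" connecting word -> next word

def next_word_probs(word):
    """Convert raw counts into probabilities for the next word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # "pot" is most likely, then "water"/"beans"
```

    The `following` table is all that survives training here; the corpus itself has been, as the article puts it, broken down and digested.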

    Three main techniques are available to work around this problem. The first, which I’ve written about before, is called Retrieval Augmented Generation (RAG). RAG preprocesses content into the vectors and probabilities that the LLM understands, giving the LLM a more specific focus on the content you care about. But that content has still been digested into vectors and probabilities. A second method is to “fine-tune” the model, which predigests the content like RAG but lets the model itself metabolize it. The third is to increase what’s known as the “context window,” which you experience as the length of a single conversation. If the context window is long enough, you can paste the content right into it…and have the system digest it and turn it into vectors and probabilities.
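    A minimal sketch of just the retrieval step of RAG may help. Here crude word-count vectors and cosine similarity stand in for a real embedding model and vector database; the function names, documents, and prompt template are all made up for illustration.

```python
# Minimal sketch of the retrieval step in RAG: turn the query and each
# document into crude word-count "vectors," then hand the most similar
# document to the model as context. A real system would use learned
# embeddings and a vector store; this only illustrates the shape of the idea.
import math
import string
from collections import Counter

def embed(text):
    """Bag-of-words stand-in for an embedding: lowercase, strip punctuation, count."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return Counter(cleaned.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

documents = [
    "Cool the pot off in the water before serving.",
    "The ant colony finds the shortest path to the food.",
    "Large language models store weights, not original sentences.",
]

def retrieve(query, docs):
    """The 'R' in RAG: pick the stored document most similar to the query."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

context = retrieve("do models store weights or original sentences", documents)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

    Note that even in a real system, the retrieved context still gets digested into vectors and probabilities on its way into the model, which is the article's point.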

    We’re used to software that uses file systems and databases with photographic memories. LLMs are (somewhat) more like humans in the sense that they can “learn” by indexing salient features and connecting them in complex ways. They might be able to “remember” a passage, but they can also forget or misremember.

    The memory limitation cannot be fixed using current technology. It is baked into the structure of the tensor-based networks that make LLMs possible. If you want a photographic memory, you’d have to avoid passing through the LLM since it only “understands” vectors and probabilities. To be fair, work is being done to reduce hallucinations. This paper provides a great survey. Don’t worry if it’s a bit technical. The informative part for a non-technical reader is all the different classifications of “hallucinations.” Generative AI has a variety of memory problems. Research is underway to mitigate them. But we don’t know how far those techniques will get us, given the fundamental architecture of large language models.

    We can mitigate these problems by improving the three methods I described. But that improvement comes with two catches. The first is that it will never make the system perfect. The second is that reduced imperfection often requires more energy for the increased computing power and more water to cool the processors. The race for larger, more perfect LLMs is terrible for the environment. And we may not need that extra power and fidelity except for specialized applications. We haven’t even begun to capitalize on its current capabilities. We should consider our goals and whether the costliest improvements are the ones we need right now.

    To do that, we need to reframe how we think of these tools. For example, the word “hallucination” is loaded. Can we more easily imagine working with a generative AI that “misremembers”? Can we accept that it “misremembers” differently than humans do? And can we build productive working relationships with our synthetic coworkers while accommodating and accounting for their differences?

    Here too, the analogy is far from perfect. Generative AIs aren’t people. They don’t fit the intention of diversity, equity, and inclusion (DEI) guidelines. I am not campaigning for AI equity. That said, DEI is not only about social justice. It is also about how we throw away human potential when we choose to focus on particular differences and frame them as “deficits” rather than recognizing what a diverse team with complementary strengths makes possible.

    Here, the analogy holds. Bringing a generative AI into your team is a little bit like hiring a space alien. Sometimes it demonstrates surprising unhuman-like behaviors, but it’s human-like enough that we can draw on our experiences working with different kinds of humans to help us integrate our alien coworker into the team.

    That process starts with trying to understand their differences, though it doesn’t end there.

    Emergence and the illusion of intelligence

    To get the most out of our generative AI, we have to maintain a double vision: experiencing the interaction with the Chinese room from the outside while picturing what’s happening inside as best we can. It’s easy to forget that the uncannily good, even “thoughtful” and “creative” answers we get from generative AI are produced by a system of vectors and probabilities like the one I described. How does that work? What could possibly be going on inside the room to produce such results?

    AI researchers talk about “emergence” and “emergent properties.” This idea has been frequently observed in biology. The best, most accessible exploration of it that I’m aware of (and a great read) is Steven Johnson’s book Emergence: The Connected Lives of Ants, Brains, Cities, and Software. The example you’re probably most familiar with is ant colonies (although slime molds are surprisingly interesting).

    Imagine a single ant, an explorer venturing into the unknown for sustenance. As it scuttles across the terrain, it leaves a faint trace, a chemical scent known as a pheromone. This trail, barely noticeable at first, is the starting point of what will become colony-wide coordinated activity.

    Soon, the ant stumbles upon a food source. It returns to the nest, and as it retraces its path, the pheromone trail becomes more robust and distinct. Back at the colony, this scented path now whispers a message to other ants: “Follow me; there’s food this way!” We might imagine this strengthened trail as an increased probability that the path is relevant for finding food. Each ant is acting independently. But it does so influenced by pheromone input left by other ants and leaves output for the ants that follow.

    What happens next is a beautiful example of emergent behavior. Other ants, in their own random searches, encounter this scent path. They follow it, reinforcing the trail with their own pheromones if they find food. As more ants travel back and forth, a once-faint trail transforms into a bustling highway, a direct line from the nest to the food.

    But the really amazing part lies in how this path evolves. Initially, several trails might have been formed, heading in various directions toward various food sources. Over time, a standout emerges – the shortest, most efficient route. It’s not the product of any single ant’s decision. Each one is just doing its job, minding its own business. The collective optimization is an emergent phenomenon. The shorter the path, the quicker the ants can travel, reinforcing the most efficient route more frequently.

    This efficiency isn’t static; it’s adaptable. If an obstacle arises, disrupting the established path, the ants don’t falter. They begin exploring again, laying down fresh trails. Before long, a new optimal path emerges, skirting the obstacle as the colony dynamically adjusts to its changing environment.

    This is a story of collective intelligence, emerging not from a central command but from the sum of many small, individual actions. It’s also a kind of Chinese room. When we say “collective intelligence,” where does the intelligence live? What is the collective thing? The hive? The hive-and-trails? And in what sense is it intelligent?

    We can make a (very) loose analogy between LLMs being trained and hundreds of billions of ants laying down pheromone trails as they explore the content terrain they find themselves in. When an LLM is asked to generate content, it’s a little bit like sending you down a particular pheromone path. This process of leading you down paths that were created during the AI model’s training is called “inference.” The energy required to send you down an established path is much less than the energy needed to find the paths in the first place. Once the paths are established, traversing them can produce results that seem like science fiction. The LLM acts as if there is a single adaptive intelligence at work even though, inside the Chinese room, there is no such thing. Capabilities emerge from the patterns that all those independent workers are creating together.

    Again, all models are wrong, but some are useful. My analogy substantially oversimplifies how LLMs work and how surprising behaviors emerge from those many billions of workers, each doing its own thing. The truth is that even the people who build LLMs don’t fully understand their emergent behaviors.

    That said, understanding the basic mechanism is helpful because it provides a reality check and some insight into why “Steve” just did something really weird. Just as transformer networks produce surprisingly good but imperfect “memories” of the content they’re given, we should expect to hit limits on the gains from emergent behaviors. While our synthetic coworkers are getting smarter in somewhat unpredictable ways, emergence isn’t magic. It’s a mechanism driven by certain kinds of complexity. It is unpredictable, and not always in the way that we want it to be.

    Also, all that complexity comes at a cost. A dollar cost, a carbon cost, a water cost, a manageability cost, and an understandability cost. The default path we’re on is to build ever-bigger models with diminishing returns at enormous societal costs. We shouldn’t let our fear of the technology’s limitations or fantasy about its future perfection dominate our thinking about the tech.

    Instead, we should all try to understand it as it is, as best we can, and focus on using it safely and effectively. I’m not calling for a halt to research, as some have. I’m simply saying we may gain a lot more at this moment by better understanding the useful thing that we have created than by rushing to turn it into some other thing that we fantasize about but don’t know that we actually need or want in real life.

    Generative AI is incredibly useful right now. And the pace at which we are learning to gain practical benefit from it is lagging further and further behind the features that the tech giants are building as they race for “dominance,” whatever that may mean in this case.

    Learning to love your imperfect synthetic coworker

    Imagine you’re running a tutoring program. Your tutors are students. They are not perfect. They might not know the content as well as the teacher. They might know it very well but be weak as educators. Or they might be good at both but forget or misremember essential details, causing them to give the students they are tutoring the wrong instructions.

    When you hire your human tutors, you have to interview and test them to make sure they are good enough for the tasks you need them to perform. You may test them by pretending to be a challenging student. You’ll probably observe them and coach them. And you may choose to match particular tutors to particular subjects or students. You’d go through similar interviewing, evaluation, job matching, and ongoing supervision and coaching with any worker performing an important job.

    It is not so different when evaluating a generative AI based on LLM transformer technology (which is all of them at the moment). You can learn most of what you need to know from an “outside-the-room” evaluation using familiar techniques. The “inside-the-room” knowledge helps you ground yourself when you hear the hype or see the technology do remarkable things. Participating teams in my AI Learning Design Workshop (ALDA) design/build exercise will explore this inside/outside duality and hone their intuitions about it through a practical, hands-on project. The best way to learn how to manage student tutors is by managing student tutors.

    Make no mistake: Generative AI does remarkable things and is getting better. But ultimately, it’s a tool built by humans and has fundamental limitations. Be surprised. Be amazed. Be delighted. But don’t be fooled. The tools we make are as imperfect as their creators. And they are also different from us.
