Tech with Trust: Designing for Legitimacy

2,556 words in this newsletter - about 10 minutes and 45 seconds to read.

Guest Editors: Karina Ricks and Ryan Parzick

It’s that time of year when things get a little spooky - and not just because of Halloween. Cities everywhere are staring down their own digital ghosts: mysterious algorithms, invisible sensors, and AI tools that seem to appear overnight. The trick (and the treat) is figuring out how to make these technologies work for people, not haunt them. This month, we explore how to design for legitimacy in a world where trust is the scarcest resource. From governments getting ahead of AI instead of hiding from it, to neighborhoods reclaiming ownership of their data, to new ways of making the “invisible” tech in public spaces visible and accountable - these stories share one clear message: the best way to keep civic innovation from turning into a fright show is to build trust from the start.

Trust is at the heart of what we do at Cityfi. We’ve seen how powerful it can be when communities and companies team up, not just to innovate, but to make sure technology actually works for the people living with it every day. We hope this issue gets you thinking about the invisible systems shaping our cities and how, together, we can keep building a future where tech earns its place by earning our trust.

Taking the Invisible and Making It Trusted: Cityfi Interviews Jacqueline Lu of Helpful Places


By Ryan Parzick

Since this month’s newsletter is about building trust with tech, we couldn’t think of a better person to interview on this theme. Jacqueline Lu, founder and CEO of Helpful Places, joined us for a conversation about her transition from a long-time career in public service to the private sector. It’s not as simple as that, though. Jacqueline traces her path from a background in ecology and evolutionary biology to becoming an “accidental technologist” focused on open data and smart cities. In her private sector role, Jacqueline played a prominent part in creating the open-source Digital Trust for Places and Routines (DTPR) standard, and she now stewards and advances the adoption of that standard through her company, Helpful Places.

This conversation has been condensed and edited for clarity. To read the full interview, including how AI fits into this theme, please visit the Cityfi blog. Enjoy!

Ryan Parzick: Thank you so much for joining us for this conversation. Before we get into all of the interesting details of what you do, why you do it, and how and why it's important, let’s talk about you. Your background. The things that shaped where you are now. You started in the New York City Department of Parks and Recreation and now you run a private firm helping public organizations advance technology governance and transparency. How did that happen?

Jacqueline Lu: It's true. I am a recovering public servant. I worked in the Parks Department in New York City government for almost 20 years under several mayors, including Mayor Bloomberg. When I first started, geographic information systems were just beginning to take off in local government.

People would ask me, “Where are the parks?” and I would joke, “You know the gaps in the map between the roads? They’re probably in there.” I started in the forestry division and had a background in ecology and evolutionary biology, and I was really interested in how we could scientifically manage our urban forests. With a science background, I was thinking of this issue from a data perspective and started to realize that in city government there's all these amazing troves of data and “administrative exhaust” in things like work order systems and service requests that tell us a lot about city dynamics. That’s how I became an “accidental technologist!”

My public service career came to an end when I needed to move back home to Canada for family reasons. I didn't know what was next for me until I was encouraged to talk to Sidewalk Labs, an Alphabet company, which was part of the planning of a smart district in Toronto. I never thought I would end up in the private sector, but it was an opportunity to reimagine public realm technology from the ground up, without the challenges that are inherent in legacy systems, which was exciting.

The other part that was really interesting to me was that at that time, around 2018, Sidewalk Labs was at the center of a global debate about privacy, trust, and accountability in tech-enabled cities. Joining Sidewalk Labs gave me a chance to be a part of that larger conversation about technology in cities.

When the Toronto project was sunsetted in 2020, our team had already begun work on DTPR, an open-source project designed to help people understand the invisible digital layer in cities. Because this layer is invisible in the built environment, people couldn't understand it or engage with it. We can have a lively debate about a bike lane, but not about the algorithm that prioritizes traffic at an intersection.

That’s the challenge ahead of us: how to make this digital infrastructure visible and trustworthy. And that’s the core of what we do at Helpful Places.

Ryan Parzick: That's a perfect segue to my next question, which is about your “now.” You founded Helpful Places and now steward DTPR. Can you share more about what your organization does?

Jacqueline Lu: I founded Helpful Places in 2020 to advance the DTPR open-source standard. We started with a question: how can we help people see and engage with the digital layer in the built environment - the sensors collecting data, and the AI systems and algorithms that data feeds, all otherwise invisible? Over the past five years, we have learned a lot about helping organizations foster community trust in emerging technologies.

Today, our core work is still stewarding and maintaining DTPR, which provides what some call a “nutrition label for technology.” It offers a consistent visual and data language to explain digital systems like sensors and AI by showing what data is collected, who is responsible, and what decisions it informs.

We also developed a software platform called Clarable, which helps organizations manage the internal processes behind technology transparency for stakeholders. Around those two offerings, we provide consulting and implementation services that help cities and innovation districts build the organizational behaviors needed to enable trust.

Ryan Parzick: Let’s nerd out a bit. Can you explain to our readers how DTPR actually works?

Jacqueline Lu: When we created DTPR at Sidewalk Labs, we started with the question: how would people know what technology is at work around them?

We looked for inspiration in precedents of visual language used to simplify and communicate complex concepts, such as the Creative Commons icons and nutrition labels. Using those two precedents, we convened experts and ran co-design sessions to determine the most important concepts people should know. There were four things people wanted:

  1. Purpose - What is this system for? What is it doing?

  2. Accountability - Who put it here? Who is accountable for it?

  3. Personal Visibility - Can I be seen or identified?

  4. Follow-Up - How can I learn more or ask questions?

We turned those insights into a data standard, a visual icon system, and design patterns for signage in public spaces. It solves the problem of having to read a 15-page privacy impact assessment that doesn't have a clear structure.
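To make the “nutrition label for technology” idea concrete, here is a minimal sketch of what a machine-readable record covering those four concepts might look like. The field names (`purpose`, `accountable_org`, `identifiable`, `follow_up_url`) are hypothetical illustrations, not the actual DTPR schema.

```python
# Hypothetical sketch of a DTPR-style transparency record.
# Field names are illustrative only, not the real DTPR data standard.

def render_label(record: dict) -> str:
    """Render a plain-language summary from a structured transparency record."""
    lines = [
        f"Purpose: {record['purpose']}",
        f"Accountable party: {record['accountable_org']}",
        f"Can you be identified? {'Yes' if record['identifiable'] else 'No'}",
        f"Learn more: {record['follow_up_url']}",
    ]
    return "\n".join(lines)

# An example record for a (hypothetical) traffic sensor
traffic_sensor = {
    "purpose": "Counts vehicles to adjust signal timing",
    "accountable_org": "City Department of Transportation",
    "identifiable": False,  # aggregate counts, not images of people
    "follow_up_url": "https://example.city.gov/tech/traffic-sensor",
}

print(render_label(traffic_sensor))
```

In practice, DTPR pairs structured records with standardized icons and physical signage; the point of the sketch is simply that each of the four concepts maps to a field that can be rendered in plain language instead of buried in a 15-page assessment.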

As AI has increasingly become part of the conversation, I like to think of things like sensors as food collectors for AI-enabled systems. We now have a new beta version of the DTPR standard to help ensure that it can address what I see as a parallel need for useful AI transparency.

Ryan Parzick: This has been so interesting and fun. I have a thousand other questions I could probably ask you! Thank you so much for your time, your great answers, and your thoughtfulness. I think our readers will be very excited, and maybe surprised, to learn that tools like DTPR exist to bring transparency to civic tech.

Proactive AI Policy vs. Benign Neglect

By Ryan Parzick

Cities are already living with artificial intelligence, whether they admit it or not. The real question is how they’ll engage with it: deliberately, with foresight and accountability, or accidentally, through neglect and inertia.

Too many governments fall into one of two traps. On one side are the “thou shalt not use AI” jurisdictions, where fear of risk turns into paralysis. On the other are those quietly letting AI seep into daily operations without policy, oversight, or training. Both approaches - over-constraint and benign neglect - undermine legitimacy.

The smarter path is to get in front of AI. Approach it with curiosity and discipline, not panic or passivity. That means acknowledging AI is already part of the civic toolkit and deciding, intentionally, how to use it for public good.

Cities taking this approach see AI as a governance issue, not just a tech one. They start small but visibly - publishing what they’re testing, asking how algorithms affect residents, and training staff so tools enhance judgment rather than replace it.

San Francisco recently gave 30,000 city employees access to Microsoft Copilot, an AI assistant built into daily software. Rather than banning it or letting it spread informally, the city paired the rollout with staff training and clear disclosure rules. The city’s new Generative AI Guidelines strike a balance between responsibility and flexibility, giving employees room to use generative AI tools while maintaining clear standards for ethical use. Under these rules, staff remain fully responsible for any content they produce or share, whether it’s written by hand or by machine. Kansas City, meanwhile, will start using AI to sort and route 311 requests more efficiently while studying how automation affects neighborhood equity.

None of these cities has it all figured out, but they’re learning in public. The key is transparency. San Francisco’s success depends on responsible scaling while Kansas City must ensure efficiency doesn’t deepen inequities.

Contrast that with cities that do nothing. Even when leaders avoid AI entirely, the technology still creeps in, baked into vendor software or analytics tools. Without guidance, staff use systems blindly. Without training, they trust the outputs. Without transparency, residents lose faith. And when something goes wrong - a biased model, a bad procurement, a privacy breach - the fallout erodes trust faster than any efficiency gain can rebuild it.

Neglect isn’t neutral - it’s a policy choice. Waiting for perfect clarity before engaging with AI ensures the technology will shape city operations from the outside in, not the inside out.

The cities doing this well share a few habits: they treat AI governance as an extension of ethics and transparency, publish plain-language summaries of what they’re using, and create safe sandboxes for experimentation. Chattanooga even built a “prompt library” for staff using generative AI tools, helping define what “responsible” looks like day to day.

There are also cautionary tales. Some U.S. cities deploy predictive systems in housing or policing with little public disclosure. These examples remind us that trust can’t be added later. It has to be designed in from the start.

Good AI policy isn’t about saying yes or no, it’s about saying yes … but carefully. Start with governance, not gadgets: define the problems to solve, the values to protect, and the oversight to maintain. Invest in people as much as technology, and engage residents not just as data points but as co-designers of responsible use.

Cities don’t need to move fast and break things. They need to move wisely and explain things. AI is already woven into daily operations, from call centers to traffic systems. The question now is whether governments will steer that reality or let it steer them.

“Benign neglect” may feel safe, but it’s just comfort disguised as caution. The cities that thrive will turn fear into literacy and experimentation into trust. Getting in front of AI isn’t about chasing the future, it’s about safeguarding the present, with integrity as your operating system.

Because in the end, trust isn’t a byproduct of technology. It’s the platform everything else runs on.

Rethinking Who Owns Urban Data

Photo Credit: Chelsea Lawson at New America's Civic AI Summit

By Chelsea Lawson

Who controls data? Cities and communities are experimenting with data cooperatives that let residents collectively own and manage their information. This sort of legal, technical, and cultural shift could redefine civic data governance, with meaningful implications for city-resident trust.

In a report just released by New America titled Making AI Work for the Public: An ALT Perspective, one of the recommendations for those with a stake in AI and the public good is to incubate such community-controlled data infrastructure. As the report notes, civic AI is an ecosystem with many important players who work together - “government serves as an enabler, philanthropies buffer risk, universities evolve our understanding, and nonprofits ground us in real-world experience.”

An example in my home city of Boston is a partnership between RethinkAI and Talbot Norfolk Triangle Neighborhood United, a community organization. To gain more ownership and independence over the data about their neighborhood and the decision-making that follows, the partnership created a local large language model, or LLM, called “On the Porch.” It is trained on structured data such as 311 requests, traffic violations, and 911 calls, and unstructured data such as planning documents and transcripts of local meetings. The result is a conversational tool that anyone in the community can use to inquire about what’s going on in their neighborhood and share things they might be concerned about. After a conversation, the bot provides a high-level summary, complete with any resources mentioned for follow-up. The City of Boston has engaged with the teams, and a major theme of the report launch conference was how to scale pilots like this.

The public realm is also home to a vast array of sensors, cameras, wifi networks, and other devices that collect data used for more than just AI applications. These provide more avenues for participatory stewardship. For example, in Barcelona, the Citizen Science Data Governance pilot (part of a three-pronged DECODE pilot program conducted in 2018 and 2019) helped communities support Internet of Things (IoT) data gathering while allowing them to control what information was shared, with whom, and under which conditions. This pilot used environmental sensors, located inside and outside the homes of participants, to detect noise and pollution levels. The DECODE technology allowed data to be encoded and shared anonymously. Or, thinking again more locally, the 2024 City of Boston Annual Surveillance Technology Report discusses how some neighborhood groups own and operate video cameras in order to assist the police with solving crime in the business community and their home neighborhoods.
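As a rough illustration of the idea behind such pilots - data released only under community-set conditions - here is a minimal Python sketch. The policy structure, requester names, and data fields are all hypothetical; this is not DECODE's actual mechanism, which relied on cryptographic controls rather than simple filtering.

```python
# Minimal sketch of community-controlled data sharing, loosely inspired by
# the DECODE pilot described above. All names and fields are hypothetical.

def apply_sharing_policy(reading, policy, requester):
    """Return only the fields the community policy allows for this requester,
    or None if the requester is not permitted to see anything."""
    allowed_fields = policy.get(requester)
    if allowed_fields is None:
        return None  # requester not covered by the community's policy
    return {k: v for k, v in reading.items() if k in allowed_fields}

# A raw sensor reading from inside a participant's home
reading = {
    "noise_db": 62.5,
    "pm25": 14.2,
    "household_id": "participant-017",  # identifying field, never shared
}

# The community decides: the city gets environmental values only;
# a research lab gets noise data only; everyone else gets nothing.
policy = {
    "city_environment_office": {"noise_db", "pm25"},
    "university_lab": {"noise_db"},
}

print(apply_sharing_policy(reading, policy, "city_environment_office"))
print(apply_sharing_policy(reading, policy, "advertiser"))
```

The design point is that the sharing rules live with the community's policy, not with the requester: changing who sees what requires no change to the sensors themselves.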

At Cityfi, we thrive in the gray area of shared ownership and the questions and challenges it opens up. As much as AI represents a fundamental shift in how government works, it also rests on practices we have been honing since we started, like community engagement and long-term strategic planning.

The Cityfi Cluster #7

By Ryan Parzick

Ever play the New York Times Connections game? Here is our own Cityfi version for you to play! If you haven’t played before, that’s OK. The rules are simple, but hopefully, solving the game is not! The challenge: group the 16 words into 4 groups of 4. Each group has a unifying theme. You get one shot, so make it count. If you think you have the correct solution, please email us your 4 groups, naming the unifying theme and the 4 words in each. An example of a unifying theme could be “Types of Animals,” containing the words “dog,” “cat,” “rabbit,” “deer.” We’ll keep score throughout the year to crown the 2025 Cityfi Cluster Champion. The answer will be posted in our next newsletter. If you want your score to count, please submit your answer before November 21st.

Last month’s solutions are:

  • Words containing “fall”:  Fallacy, Pitfall, Waterfall, Downfall

  • Words associated with Design-Thinking:  Prototype, Empathy, Ideate, Iterate

  • Synonyms for “Collaborative”:  Cooperative, Joint, Shared, Collective

  • Expressions when someone has an idea:  Eureka, Aha, Bingo, Voilà

Where in the World is Cityfi?

Check out where Cityfi will be in the upcoming weeks. We may be speaking at conferences, leading workshops, hosting events, and/or actively engaging in collaborative learning within the community. We would love to see you.

Smart City Expo/Tomorrow.Mobility World Congress - Barcelona, ES - November 4th - 6th

The two conferences bring together leading experts from cities, the mobility industry, and European institutions to discuss how harmonized data standards can drive safe, sustainable, and user-centric urban mobility in cities across Europe.

On November 6th, Cityfi’s Evan Costagliola and Affiliate Gemma Schepers will be facilitating an invite-only event hosted by the Open Mobility Foundation, POLIS, and EIT Urban Mobility at Tomorrow.Mobility World Congress 2025. The event will discuss how to advance open source and city-centered mobility and public space management data standards and tools. Reach out to Evan to learn more.

Chicago City Builders Book Club - Chicago, IL - November 5th

If you live in Chicago, check out the Cityfi-sponsored Chicago City Builders Book Club, which typically meets every six weeks. Principal Marla Westervelt co-hosts this book club, bringing together professional city builders to discuss Chicago-centric books that explore local urban and political issues. The upcoming meetup will discuss Grafters and Goo Goos: Corruption and Reform in Chicago, 1833–2003 by James L. Merriner. Check out their LinkedIn page for updates.

CoMotion LA 2025 -  Los Angeles, CA - November 12th - 13th

Connect with the companies building tomorrow’s transportation, the investors backing them, and the city leaders making it happen. Sounds pretty cool, right? Well, to make things even sunnier in LA, you get the added bonus of getting to see Partner Karina Ricks in person. Reach out to her if you want to say hi!

What We’re Reading

Articles handpicked by the Cityfi team that we found interesting:

All Things Cityfi

Your guide to our services, portfolio of client engagements, team, and…well, all things Cityfi.

Subscribe to our Email Newsletter
Subscribe to our LinkedIn Newsletter