Foreign Influence Risk Index (FIRI)
Product UX/UI Design Lead
Company: Seerist
Client: Deloitte
In Collaboration with: Chief Technology Officer, Data Science Team, Dev Team, Deloitte Team
Acknowledgments
Seerist's Foreign Influence Risk Index (FIRI) achieved "Awardable" status through the Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) Tradewinds Solutions Marketplace. The Tradewinds Solutions Marketplace recognized the potential of FIRI for capturing, tracking, and analyzing great power competition activities.
Context
In collaboration with Deloitte, Seerist developed an innovative product to assess the Foreign Influence Risk Index (FIRI) of countries, evaluating Political, Military, Economic, Social, Information, and Infrastructure (PMESII) influences. The product aimed to help users understand a country’s global influence and its impact on specific nations.
On this, the first UX project of my career, I took on the role of lead and sole Product UX/UI Designer: I wrote user testing scripts, conducted tests, and gathered feedback to continuously refine the prototype throughout the year-long development process.
Each week, I conducted stakeholder interviews, including sessions with subject matter experts, to gather user feedback, which I analyzed and incorporated into each iteration. This ongoing feedback loop addressed usability concerns and secured stakeholder buy-in, ensuring the product aligned with both user needs and business objectives.
Initial Sketch
The original concept for FIRI was a dashboard; however, to effectively accommodate all data and view types, the design evolved into separate views that display more comprehensive information for users.
Challenges
First UX Project of My Career
Challenge: FIRI was not only my first project as the lead and sole Product UX/UI Designer, but also the first UX project of my career. While both users and stakeholders at Seerist and Deloitte were satisfied with the final product, I recognize that, in hindsight, there are design and usability aspects I would approach differently now, three years into my career.
Solution: I've provided conceptual reimagined wireframes that adhere more closely to UX Heuristics and align with the recently created Seerist platform UI guidelines, which I believe is important for consistency across products.
No UI Guidelines for Coloring
Challenge: About a month into the project, the external marketing team provided brand colors for Seerist. However, there were no established UI guidelines for their application, and no specific color assignments had been made for the Seerist product. While the Chief Technology Officer noted that FIRI users might not overlap with Seerist users, I was concerned about maintaining design consistency between the two products. During user testing, I discovered that there was indeed some overlap in the user base.
Solution: To address this, I developed a set of UI guidelines for FIRI that prioritized usability. For example, I used Seerist's Lime Green to highlight key user flows, such as navigation and buttons, so that primary actions were easy to find.
Assigning Brand Colors for Map
Challenge: Selecting brand colors for the map presented a challenge, as I needed a palette of five distinct colors to represent both country regions and article dots. These color ranges had to provide sufficient contrast, both within each range and between the two elements.
Solution: Early in my career, I prioritized ensuring adequate contrast not only within the country color ranges but also between the country colors and article dots. Recognizing that the provided palette was insufficient, I consulted with the marketing team, who authorized the use of additional shades within the tertiary brand colors.
In retrospect, if I revisited this project today, I would apply the Web Content Accessibility Guidelines (WCAG) to verify color contrast and accessibility.
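As an illustration of that retrospective point, here is a minimal sketch of how a WCAG 2.1 contrast check could be scripted for candidate map colors; the hex values are hypothetical placeholders, not the actual Seerist palette.

```python
# Minimal sketch: WCAG 2.1 contrast check for candidate map colors.
# The hex values below are placeholders, not the actual Seerist palette.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance per WCAG 2.1 (sRGB)."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linearize(r), linearize(g), linearize(b)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio (lighter + 0.05) / (darker + 0.05); ranges from 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 1.4.11 requires at least 3:1 for non-text UI components and graphics.
article_dot = "#C8FF00"   # placeholder "lime" accent
country_fill = "#1F3A5F"  # placeholder dark region fill
ratio = contrast_ratio(article_dot, country_fill)
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 3 else 'fail'} (WCAG 1.4.11)")
```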
Development in Streamlit
Challenge: FIRI was developed in a separate tool, Streamlit, chosen by the Chief Technology Officer for its alignment with future product goals. However, this decision introduced limitations in map functionality and component integration. Additionally, due to the delayed assignment of an engineer, early wireframe designs had to be iterated to accommodate these Streamlit constraints.
Solution: I collaborated closely with the engineer to understand Streamlit's limitations and developed design solutions that preserved usability within these constraints.
First Iteration: Low Fidelity
Low-fidelity wireframes were created based on feedback from weekly interviews with stakeholders from Seerist and Deloitte.
Map View
Graph View
Data View with Implemented Early Stakeholder Interview Feedback
Early stakeholder interviews at Seerist led to design solutions such as coloring the "Change in Influence" column to draw the user's attention to important changes in foreign influence risk.
Stakeholder Feedback
During weekly stakeholder meetings with Seerist and Deloitte, I gathered feedback, integrated it into the design, and presented an updated interactive prototype the following week. Given the volume of feedback, Seerist stakeholders would prioritize and filter the input based on feasibility, particularly for the MVP.
Key feedback during this iterative phase, crucial to the project’s context, included:
Higher-ups at Deloitte did not like the radar graph.
Graph feedback: which graphs were needed and which were not.
Data Analysts specifically noted that only bar, line, and pie charts were useful.
Data needed further back than modals provided
Heatmap over time to compare two countries
Feature for personalized polygons of interest, as in Seerist
Clicking a country should zoom in on it
Ability to click on a point in the line graph to understand what event caused that spike or drop
Time filter options: day, week, month, year
Filter persistence across all three views (Map, Graph, Data):
Time filter would be useful for map articles and graphs (not data view)
Influence scores needed on Map
Ability to see which article is selected on Map
PMESII tags for article cards
Keeping Streamlit default filters for Graphs and Data table: stakeholders felt these default features gave the user more options.
I was concerned users would be confused by these component features, as they didn't align with the intended use of our product, but I was unable to gain stakeholder buy-in on removing them because overriding them in code would take more time.
Color connotation for Influence scoring
Do not use Green as it conveys “good”
Do not use Red as it conveys "bad"
What are the color connotations for those assigned for PMESII?
I explored this in depth during stakeholder and user interviews; the resulting feedback raised no issues with, and no suggested changes to, the colors I assigned for PMESII.
Data Visualization Design Exploration
Within just a few months of transitioning into the security, threat, and risk industry, I was responsible for delivering data visualizations for the project. Drawing on feedback from stakeholders at both Deloitte and Seerist, I explored various design solutions to effectively visualize the data.
Additionally, the Data Science team provided feedback that bar, line, and pie charts were the most suitable visualization formats for our data.
Graph Exploration
Data View Exploration
Intermediate Iterations
As development of the product began, I collaborated closely with the engineer to understand the limitations and challenges of working with Streamlit, particularly regarding color implementation and component functionality. My primary focus was to ensure that usability remained a top priority and was not compromised throughout the development process.
Figma Wireframe Designs
Map View
Graph View
Data View
Streamlit Screenshots
Map View
Graph View
Streamlit's default component behaviors later proved to cause user confusion. For example, the only way to list graph types (on the left) was with cards that collapsed when clicked, so a Select button had to be added; the collapse behavior itself could not be removed, and there was no way to shade a card to give the user feedback that its graph was selected. In addition, all filters had to be placed inside each graph's card, which led users to read the left panel as a list of filters for a single graph rather than as a list of graph options.
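To illustrate the workaround, below is a minimal Streamlit sketch of the pattern described above, with hypothetical graph names: each graph type is listed as a collapsible card in the left column, and because the cards cannot be shaded or kept from collapsing, a Select button inside each card writes the choice into session state for the main panel to read.

```python
import streamlit as st

# Hypothetical graph types for illustration; not the actual FIRI graph list.
GRAPH_TYPES = ["Bar", "Line", "Pie", "Radar"]

# Remember the selection across reruns.
if "selected_graph" not in st.session_state:
    st.session_state.selected_graph = GRAPH_TYPES[0]

left, main = st.columns([1, 3])

with left:
    for graph in GRAPH_TYPES:
        # Expanders collapse when their header is clicked and cannot be
        # shaded to show selection, hence the explicit Select button.
        with st.expander(f"{graph} graph"):
            st.caption(f"Filters for the {graph.lower()} graph would go here.")
            if st.button("Select", key=f"select_{graph}"):
                st.session_state.selected_graph = graph

with main:
    st.subheader(f"{st.session_state.selected_graph} graph (selected)")
    # Render the chosen chart here, e.g. st.plotly_chart(fig) or st.bar_chart(df).
```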
Data View
Streamlit had default table and graph filters that didn't benefit the user. I wanted to remove them because they didn't support key user functions, but stakeholders wanted to keep them, believing they simply gave the user more options, and removing them in code would have been difficult. During user testing, users found these default filters confusing, which caused issues with completing workflows.
User Testing
Overview
I specifically interviewed users who were using the platform for the first time because I wanted to understand the design's usability from two aspects:
Intuitiveness
Learnability
I tested three User Flows centered on key workflows where I had usability concerns.
The users were selected by Deloitte.
Ten users were interviewed and tested.
The six users with backgrounds relevant to the target user base were used for Affinity Mapping to track trends.
All User Testing Feedback was communicated in the weekly stakeholder meetings with implementation suggestions in order to collaborate on solutions across teams.
Script for User Testing
User Flow 1: "You want to know which countries China has the least influence on. Using the Graphs, please find the countries on which China has the least influence."
User Flow 2: "You want to know in which countries China's influence has changed. Using the Data table, please find the countries where China has had the greatest increase in influence."
User Flow 3: “You would like to know which country China has the most Political Influence on in the world. Please find which country this is, and what China’s Political Influence score is on it.”
Since I specifically tested users who had never seen the product, in the first two tasks I told them which section of the site to use to find the data, so I could understand whether those sections had usability issues. For the third task, I didn't tell them where to find the data, to see how easily a first-time user could navigate the site.
Summary of Results
The product had low usability in terms of intuitiveness for first-time users.
The product had high usability in terms of learnability, as reported by those same first-time users.
User Feedback
User feedback was placed onto sticky notes in Figma's FigJam, with each user assigned a color.
Affinity Mapping
The sticky notes were then organized into categories to understand trends and determine product implications.
User Testing: Turning Research Trend Insights into Product Solutions
Map View
User expected interactions
Hover: Influence Scores
Click: various feedback from users on expectation (score transparency, zoom, and country data)
My concern with taking the user to Country Data was the recurring confusion caused by pulling users away from their current work. Possible solutions:
Open in new tab
Open pop-up
Split-screen view: a couple of users wanted a split screen to see the Map and Data views at the same time (the initial dashboard concept)
Graph View
Graph selection
Streamlit limited our ability to convey to the user which Graph was selected (the card design could not be manipulated), and there was no way to collapse the unused Graph cards when one was selected.
Suggested solution: update the hierarchy of the graph titles on the cards to convey to users that there are multiple graphs; this was implemented.
Suggested solution: add a Select button to the cards; this was implemented.
Radar Graph: while many stakeholders didn't like the Radar Graph, it was so popular with users that it was the only graph consistently receiving praise. I used this information to gain buy-in from stakeholders on keeping it, and to add the variations users requested:
Over time
Country comparisons
Graph filter names assigned by the Data Science team caused a usability issue. I suggested filter naming conventions aligned with users' real-world terminology, which was implemented.
Data View
Data manipulation options needed to be more obvious: users struggled to find information because of confusion around Streamlit's default filters and sorts in the Graphs and Data table. I communicated this issue and gained stakeholder buy-in that, rather than giving the user more options, these defaults actually caused confusion and prevented users from completing their flow goals. I suggested we remove the default filters, which was implemented.
Data table sorting: with the Streamlit default sort and filter features removed, I suggested adding a column-title highlight and arrow to indicate both which column is being sorted and where our sort feature lives. I retested this solution with other users, who could then easily sort the Data table.
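For illustration, below is a rough sketch of how a replacement sort control with a highlighted, arrow-marked column header could be expressed in Streamlit with pandas; the columns, values, and styling are assumptions, not the production implementation.

```python
import pandas as pd
import streamlit as st

# Illustrative columns only; not the production FIRI schema.
df = pd.DataFrame({
    "Country": ["A", "B", "C"],
    "Influence Score": [7.2, 4.8, 9.1],
    "Change in Influence": [1.3, -0.4, 2.2],
})

sort_col = st.selectbox("Sort by", df.columns)
ascending = st.radio("Order", ["Descending", "Ascending"], horizontal=True) == "Ascending"
arrow = "▲" if ascending else "▼"

sorted_df = df.sort_values(by=sort_col, ascending=ascending)
# Highlight the sorted column and signal the sort direction in its header.
styled = (
    sorted_df.rename(columns={sort_col: f"{sort_col} {arrow}"})
    .style.set_properties(subset=[f"{sort_col} {arrow}"], **{"background-color": "#2a2a2a"})
)
st.dataframe(styled)
```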
Data: Country Select
Since this was my first work-experience UX project, I struggled with intuitive navigation here. Users intuitively wanted to click a country within the Global Data view and be taken to that country's Data view, but were then confused about how to return to the Global Data view. The solution, adding an information icon to the top navigation so users could access this without being pulled to a different part of the site, proved to be one that follow-up users were happy with.
Influence Scoring and PMESII Methodology
Transparency and weighting of Influence scores: the trend was that various users wanted access to this information from different views (Map, Graph, Data). When users raised this issue, I also asked where they would expect to find the information. I pulled the trending feedback and suggested adding an information icon to the top navigation bar so users could access this information from any view (Map, Graph, Data), which was implemented. Follow-up users were able to find this information quickly.
Users were confused by a Change in Influence scoring of 0-5, and unanimously preferred a scale of 0-10. This was implemented.
Coloring
Color connotation: follow-up stakeholder and user interviews surfaced no issues with, and no suggested changes to, the colors I assigned for PMESII.
Usability
Workflow starting point: users were often confused, thinking they had to start their workflow on the Map view because it was the landing screen. This tendency caused me to question which view should be the landing screen (and come first in the top navigation order).
I noticed this was only an issue for first-time users when they first opened the product, and not once they were ten minutes into using it.
This information was presented to stakeholders, and it was decided not to make any design changes.
Filter persistence across views (Map, Graph, Data)
Some users expected their filters to persist, either from one view to the next or at least when returning to a specific view. This was the reason for, and intention behind, the initial conceptual dashboard wireframe.
This information was presented in collaboration and stakeholder meetings to weigh Streamlit's limitations and reach a final stakeholder decision for the MVP (see the sketch after this list).
Future Influencer Countries were added as greyed-out options in the dropdown to convey to users that more Influencer Countries were coming.
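For context on the Streamlit side of the filter-persistence discussion, here is a minimal sketch, with hypothetical filter names, of one way persistence across views could be approached: keeping the shared filters in an always-rendered sidebar so every view reads the same session-state values.

```python
import streamlit as st

# Hypothetical filters for illustration; not the actual FIRI filter set.
st.sidebar.radio("View", ["Map", "Graph", "Data"], key="view")
st.sidebar.selectbox("Influencer country", ["China", "Russia"], key="influencer")
st.sidebar.select_slider("Time range", ["Day", "Week", "Month", "Year"], key="time_range")

# Because the filter widgets live in the always-rendered sidebar, their values
# persist in st.session_state no matter which view the user switches to.
view = st.session_state.view
st.header(f"{view} view")
st.write(
    f"Showing {st.session_state.influencer} influence "
    f"over the last {st.session_state.time_range.lower()}"
)
# Each view would then render its own map, graphs, or table from these shared values.
```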
Mockups for Marketing
I collaborated closely with the marketing team to develop mockups for marketing materials, working under tight deadlines to ensure they were ready for upcoming conferences in the security, threat, and risk industry.
FIRI Mockup
FIRI One Pager
FIRI Two Pager: Page One
FIRI Two Pager: Page Two
Retrospective
While I was not in a position to advocate for more user testing, largely because Seerist prioritized stakeholder feedback over user feedback, in hindsight I would have valued the opportunity to emphasize its importance in two respects:
To have started with User Testing to inform initial design solutions, such as the user’s view and navigation needs.
To have incorporated User Testing more frequently into the iterative design process.
As this was the first project of my career as a Product UX/UI Designer, I'm not happy with the wireframe design. Revisiting this project today, I'd redesign the wireframes:
I’d adhere more closely to UX Heuristics.
I’d align the design more closely with the recently created Seerist UI guidelines for design consistency across products.
I'd maintain design consistency by using a light background for the Data View, as I had no usability rationale for choosing a dark background.
I'd check the Map coloring against the Web Content Accessibility Guidelines (WCAG).
Map View
Data View