Pavel Samsonov, product designer

Custom dataset designer for data scientists and quantitative researchers

As part of an effort to pivot towards the fastest-growing market of data consumption – machine learning – Bloomberg Enterprise embarked upon a redesign of the per-security product.

This new cohort of applications demanded a standards-compliant solution that users could program against in order to eliminate manual steps involved in acquiring data. However, the new solution could not alienate existing clients; Per-Security was our top-selling product, bringing in hundreds of millions of dollars per year.

I led a research initiative to understand how the product was being used at present, partnering with Sales, Customer Success, Business Intelligence, and other units across the world. After modeling the present state, I created a design vision for the product's future, and served as product manager for the team that implemented it.

60% fewer failed requests
75% faster time to revenue
20× faster task completion
90% reduction in support staff

My first step was to attain a holistic understanding of how Bloomberg Enterprise delivered data to users. I discovered that most contact users had with our data was mediated by a Market Data group on their side and the Content team on our side, due to the complexity involved in setting up data requests.

I partnered with the Chief Data Officer, an engineering lead, and a back-end product manager in order to create a design vision that was desirable to our users, but at the same time could be implemented within a reasonable time frame. I created a series of prototypes and tested them with end-users to refine the scope for a first release.

Leveraging CS and BI insight into our clients and their usage, I targeted a limited-scope beta release at clients who were representative of the client base as a whole but nevertheless used only a small fraction of available features. By building those features first, the team took a Lean approach to development and was able to pivot as opportunities and challenges emerged.

  • User Research
    (2 months)

    My role: Research lead

    Team: CS specialist, Sales representative, Business analyst, Content manager

  • Initial Design & Prototype
    (1 month)

    My role: Design lead and product manager

    Team: Chief data officer, Product manager, Engineering lead

  • Iterative Development
    (6 months)

    My role: Product manager and design lead

    Team: Nine full-stack developers, UI designer

To comply with non-disclosure agreements, all confidential information has been omitted from this case study.

Context

Bloomberg Data License offers clients non-real-time financial data products across multiple channels. As part of an ongoing growth initiative, my team was asked to increase usage of Per-Security Data, a high-volume, on-demand service that allows clients to define and download custom datasets. Product leadership decided to pursue a two-pronged strategy, aiming to improve usage of the service among existing clients while also making it more attractive to potential new clients.

This project took place amid a platform-wide technical refresh, as Data License transitioned from an SFTP-centric client interface to the Enterprise Access Point (BEAP), a front-end that exposed our data through both a website and a RESTful API (HAPI). Multiple teams were working simultaneously to improve the quality of our data and the ease with which clients could access it.

I took up a hybrid role of product manager and UX designer, working with the engineering teams and PMs responsible for the website and API to make sure that both products met the needs of Bloomberg, the paying client, and the end-user.

Discovery

Per-Security Data is a large product, accounting for the majority of Bloomberg Enterprise revenue. I reached out to stakeholders across the organization to get a complete picture of the decisions and limitations that led to the current state of the user experience, patterns of usage, existing pain points, and any insights that could enrich my design vision.

One of Bloomberg's competitive advantages is its high-touch client relations. I sought out a variety of specialists who could tell me about the client from multiple angles. I worked with Service Delivery team members to understand how clients were currently being trained to use our software, and synthesized a list of issues from the Help Desk backlog to discover areas where clients had recurring trouble. I interviewed Content Specialists to learn what kinds of data clients were requesting. I also reached out to Sales teams across Enterprise to see how they were pitching Per-Security to clients, and what problems clients perceived Bloomberg as solving for them.

I used the PACT Analysis framework throughout my conversations to synthesize their different perspectives and expertise into a complete picture of our users. I was also able to interview several clients about how their firms used Bloomberg data and what they were using it for.

Bloomberg also has excellent analytics capabilities. I reached out to Business Intelligence, the Content team, and Engineering teams to gain access to the data they collected on client usage of individual data features, size and volume of requests, performance of our systems, and many other metrics.

My research revealed three primary personas, engaging with two main scenarios. Each persona experienced a unique set of breakdowns when using the incumbent system.

Quantitative Analyst — Interested in frequently running small requests to test hypotheses about markets.
  • Quants are the primary end-users of Data License. They use Bloomberg data to create models of financial markets, using data science to predict what instruments will give the best return on investment.
  • Quants at large firms do not use Bloomberg tools directly. They submit requests to a market data professional at their firm, via a spreadsheet listing the instruments they are interested in.
  • Quants at smaller firms are expected to get their own data. They spend up to 80% of their time simply acquiring the data and cleaning it to make it fit for purpose.
Market Data Professional — Experts on data offerings from Bloomberg and other data vendors.
  • Their day-to-day involves interacting with vendor salespeople and content specialists to research what data is available and whether or not it will serve the needs of the firm's quants.
  • Market data professionals are typically less technically sophisticated than quants, but still use a lot of scripting in their work in order to make sure that they get the right data at the right time.
  • Market data professionals consolidate data orders from inside their firm into large requests that typically run once per day, and then convert the data from the vendor's format into the format required by the quants.
Bloomberg Salesperson — Demoing new products and features is a key skill that drives new usage and revenue.
  • Demos are usually brief and salespeople want to prove the value to the end-user, so they focus on small, ad hoc requests when showing off our offerings.
  • Salespeople usually do not have a technical background. If the salesperson cannot easily walk the client through our software, it will directly impact their ability to get us new clients and more revenue.
  • Salespeople don't have the luxury of specializing in just one Bloomberg product. They compensate by preparing well, and leverage our content specialists to the fullest.

As part of their daily job, all personas had to make frequent and small data requests: salespeople made these requests to demo functionality, quants made them to complete their market models, and market data professionals made them to sample the data and ensure that it was valuable and ready to be parsed by internal systems. All users struggled with the sheer complexity of making a request: the markup required to specify data request parameters was as complex as a full-fledged programming language.

Bloomberg offered a week-long course on this markup language, but typically this knowledge spread through a company via informal channels, and a user trying to do something new had no idea where to turn for a canonical example of how to do it.
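To make the problem concrete, here is a heavily simplified sketch of the kind of scripted legacy workflow this implied: compose a request file in the markup by hand, then drop it onto an SFTP endpoint and wait for a response file. All names below (section keywords, fields, host, credentials) are illustrative placeholders, not the full production format.

```python
# Illustrative only: a trimmed-down request in a per-security-style markup,
# uploaded to a placeholder SFTP host with paramiko.
import paramiko

REQUEST = """\
START-OF-FILE
PROGRAMNAME getdata
START-OF-FIELDS
PX_LAST
NAME
END-OF-FIELDS
START-OF-DATA
IBM US Equity
AAPL US Equity
END-OF-DATA
END-OF-FILE
"""

with open("myrequest.req", "w") as f:
    f.write(REQUEST)

transport = paramiko.Transport(("sftp.example.com", 22))      # placeholder host
transport.connect(username="dl_user", password="********")    # placeholder credentials
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("myrequest.req", "/incoming/myrequest.req")          # drop the file, then poll for a response file
sftp.close()
transport.close()
```

Even in this trimmed-down form, every option had to be spelled out by hand, and a mistake would only surface when the response file came back.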

Users unwilling to learn this markup had to use the Request Builder, downloadable software that was difficult to deploy in an Enterprise environment and complex to use. Technically savvy end-users also did not like going off-tool; they preferred doing everything from the command line of their scripting environment.

Example screens of the Request Builder, showing the Fields Picker and the Headers configuration interfaces. Most of the visible settings were not used in typical workflows.

In addition, market data users and some quants had to submit large requests on a regular (usually daily) basis, updating the team's data for end-of-day values such as the prices of financial instruments. While the fields of this request usually remained the same, the instruments could change from day to day. Users struggled with reliably updating these lists across all of a company's requests, and then spent a lot of time chopping up the large response into smaller datasets to be distributed to the firm's teams.

The incumbent Per-Security Data workflow is presented here as a service blueprint. Market Data teams from the client's side, and Content teams from Bloomberg's side, do a lot of manual work to get data requests from quants to Bloomberg's data service.

Design Process

I presented my research findings to the product team, and led a dialogue that formalized two design principles that we would pursue in order to resolve the issues I discovered.

The first principle was to radically simplify the steps to get usable data. We would expose the Per-Security interface on BEAP and HAPI, drastically reducing the work necessary to access it as either a GUI user or a programmatic user. We would standardize both the markup used to compose and submit a request and the format of the data returned in response. On the UI side, we would remove unused features and simplify the workflows involved.

The second principle was to reduce the burden on users to manage their data. We would allow them to create and modify lists of financial instruments, data fields, and schedules on BEAP. Rather than providing a list every time, users would just refer to the URI of the resource. Because the data model would be shared between website and API, technical and non-technical users would be able to collaborate easily when creating and using these resources.
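As a rough sketch of what this meant for a programmatic user, a request could point at the URIs of previously created resources instead of inlining thousands of instruments and fields. The endpoint path, JSON keys, and resource names below are illustrative assumptions, not the documented HAPI schema.

```python
# A minimal sketch (hypothetical endpoint and payload keys): the request body
# references reusable resources by URI rather than listing their contents.
import requests

payload = {
    "name": "EndOfDayPrices",
    "universe": "/catalogs/myfirm/universes/eqPortfolio/",    # reusable instrument list
    "fieldList": "/catalogs/myfirm/fieldLists/eodPricing/",   # reusable field list
    "trigger": "/catalogs/myfirm/schedules/dailyClose/",      # reusable schedule
}

resp = requests.post(
    "https://api.example.com/eap/catalogs/myfirm/requests/",
    json=payload,
    headers={"Authorization": "Bearer <token>"},               # placeholder auth
)
resp.raise_for_status()
print("Created:", resp.headers.get("Location"))
```

Because the website shares the same model, a non-technical colleague could open the same universe or field list in the UI and edit it there.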

These two principles were intended to help us increase usage by allowing more quants to make data requests directly. By reducing the time Market Data spent helping quants, we could let them focus on acquiring new products rather than managing recurring requests.

An added requirement from business and engineering was to encourage use of the scheduler, which would provide load balancing and more reliable revenue.

My team developed a data model to standardize how we would represent a request using three primary reusable resources: a Security Universe containing a portfolio of securities or other financial instruments, a Field List containing data fields and parameters that affect their values, and a Schedule governing how often the request would execute.
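One way to picture that model, sketched here as hypothetical Python types rather than the actual schema:

```python
# A sketch of the data model described above: three reusable resources that a
# request simply composes. Class and attribute names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SecurityUniverse:
    """A named portfolio of securities or other financial instruments."""
    name: str
    identifiers: List[str] = field(default_factory=list)   # e.g. tickers or ISINs

@dataclass
class FieldList:
    """The data fields to return, plus parameters that affect their values."""
    name: str
    fields: List[str] = field(default_factory=list)
    overrides: Dict[str, str] = field(default_factory=dict)

@dataclass
class Schedule:
    """How often, or when, the request should execute."""
    name: str
    frequency: str = "daily"                                # e.g. "once" or "daily"

@dataclass
class DataRequest:
    """A request is just a combination of the three resources above."""
    universe: SecurityUniverse
    field_list: FieldList
    schedule: Schedule
```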

I developed two low-fidelity designs to put these concepts into practice and see how they changed the workflows of our users. The first design treated the request as a shopping cart, with the user selecting resources from the data catalog. The second design treated resources and elements as interchangeable, in a tag-style UI. I recruited end-user participants in the market data role to see how they would want to manage data requests using reusable resources.

Users enjoyed the simplified interface that allowed them to focus on the content of the request. However, they found it challenging to navigate the list of all available resources once it grew longer than a few items. Within the resources themselves, users wanted to see their instruments and fields in a table format that allowed them to easily manage lists of hundreds to hundreds of thousands of items. Seeing a small number of identifiers from each resource was not valuable to them.

Three possible workflows that users followed when asked to create a new request. Regardless of the path users picked, they had concerns about the visibility of system status.

I designed an updated workflow incorporating this feedback, advancing the level of fidelity of the project. The new designs were easier for users to understand and more efficient for reusing resources. I worked with the engineering team to break the scope down into implementable user stories, and created a phased rollout for the desired capability. I worked with the Content and Sales teams to slowly onboard a small number of users to test our assumptions about real use in a controlled environment. We would track usage through Business Intelligence and BEAP analytics, as well as through regular touch-bases with the trial users.

The list of fields resources in the second phase prototype. Users can see details about each resource, sort the list, and search by name, ID, or description.
Once users select a resource to add to the request, Per-Security Data also shows the user the contents of that resource. If the choice is correct, the user needs to do nothing else.

Our trial users were very excited about the capabilities of the new interface. The new data model broke the mold of what they expected from Bloomberg. We received a lot of feedback that we had not heard before, because users had previously assumed we could not change our software this drastically. Users were now interested in giving up the paradigm of one data request producing one file, and adopting workflows that would organize multiple schedules and parameters under one entity in BEAP.

I organized and presented this feedback to the backend engineering group, in order to incorporate the new data model design into the platform changes that were going on under another project at the time. I worked together with the product manager and engineering lead to develop a plan for rolling out the features that the front-end would need to support the new workflows.

Content Model - Dataset

User feedback at the prototype stage informed a redesign of our content model. Rather than grouping the execution schedule with the schema, we would allow users to manage them independently.

Content Model - Resource

An individual resource is not simply a list of elements. Resources are versioned, each version contains elements, and each element is associated with input fields that can override its values.
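A hypothetical sketch of that structure (names are illustrative):

```python
# Versioned-resource sketch: a resource owns versions, each version owns
# elements, and each element carries input fields that can override its values.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Element:
    identifier: str                                        # e.g. a security or data field
    inputs: Dict[str, str] = field(default_factory=dict)   # per-element overrides

@dataclass
class ResourceVersion:
    version: int
    elements: List[Element] = field(default_factory=list)

@dataclass
class Resource:
    name: str
    versions: List[ResourceVersion] = field(default_factory=list)

    def latest(self) -> "ResourceVersion":
        """Linked requests resolve to the most recent version."""
        return max(self.versions, key=lambda v: v.version)
```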

Outcomes

The new Per-Security Data interface minimizes the amount of effort necessary to make consistent data requests by leveraging the ability to create and reuse component resources. Users were able to set up a request in under one minute without ever having used the original Per-Security interface, and with no training on the tool.

The content model laid over the regions of the UI shows how elements are organized.
Users can pick a resource from the list and view its properties and versions without leaving the resource selector. Users can easily recover when they pick the wrong resource.

Whether creating a request for only a few instruments, or managing a portfolio of hundreds of thousands, users only need to define a resource once, and can update it once to propagate changes across all datasets using the resource as part of their schema. Clients no longer need to cancel recurring requests, update their instruments, and then re-submit them all, saving a significant amount of time.

Users can create a new version of a resource straight from the list, which will automatically update all linked requests. Users can drag and drop data from spreadsheets, text files, or the Bloomberg Terminal.

Because the user's input is parsed directly in the UI, we can immediately provide feedback when a parameter doesn't match expected values. In the past, users would only find out that their request contained an error hours or days later, when it actually executed. In the new Per-Security Data, nearly all input errors can be detected or prevented.

The data provided by this user has a mismatched security identifier type. The interface shows the user the location of the error, which can now be fixed before the request is submitted.
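A minimal sketch of the kind of up-front check this enables, with made-up patterns and type names: the value is validated against the identifier type the user selected, and a mismatch is reported immediately rather than in a response file hours later.

```python
# Illustrative validation only; real identifier rules are more involved.
import re
from typing import Optional

ID_PATTERNS = {
    "ISIN": re.compile(r"^[A-Z]{2}[A-Z0-9]{9}\d$"),
    "CUSIP": re.compile(r"^[A-Z0-9]{9}$"),
}

def validate_identifier(value: str, id_type: str) -> Optional[str]:
    """Return an error message if value doesn't match id_type, else None."""
    pattern = ID_PATTERNS.get(id_type)
    if pattern is None:
        return f"Unknown identifier type: {id_type}"
    if not pattern.match(value):
        return f"'{value}' does not look like a valid {id_type}"
    return None

print(validate_identifier("US0378331005", "ISIN"))   # None: valid
print(validate_identifier("US0378331005", "CUSIP"))  # mismatch reported immediately
```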

Users can set up an unlimited number of requests for the same dataset through either the website or the API, and can see which snapshots have already been generated and which are scheduled to run.
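On the API side, checking which snapshots of a dataset exist might look like the hypothetical sketch below; the endpoint and response keys are assumptions, not the documented API.

```python
# Hypothetical snapshot listing for a dataset (illustrative endpoint and keys).
import requests

resp = requests.get(
    "https://api.example.com/eap/catalogs/myfirm/datasets/EndOfDayPrices/snapshots/",
    headers={"Authorization": "Bearer <token>"},   # placeholder auth
)
resp.raise_for_status()
for snapshot in resp.json().get("contains", []):
    print(snapshot.get("identifier"), snapshot.get("generated"))
```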

Learning outcomes

Since this was my first entry into the financial data space, I spent a long time collecting data from experts across the company and making sure that my own understanding of the landscape was complete before starting to design. Due to the sheer scope of the domain and number of legacy features that would need to be supported going forward, the initial designs were quite complex and detailed. Ultimately, that level of fidelity made it more difficult for my colleagues and users involved in testing the product to understand how their workflows would change in this new paradigm. In the future, a closer relationship with experts that tests my understanding of the topic early on through simple designs might make the process substantially more efficient.

I was also surprised by the complexity involved in translating my designs (even fully functioning prototypes) into production code. Many of the systems that Per-Security relies on are decades old, and integrating modern front-end code with legacy infrastructure ended up greatly complicating the work. This initial mismatch between expectations and reality led me to consider more conservative designs than I otherwise would have pursued. Learning to balance an ambitious design vision with achievable scope has been a very valuable lesson.

Due to the complex nature of a Per-Security data request, many things a user could do would result in an error sent back in the response file. Exposing and preventing these errors before the request was submitted increased the complexity of the workflows that I had to design. I learned a lot about handling exception states in a helpful manner that would automatically resolve the problem whenever possible, or give the user the information and actions necessary to do so on their own.