Introduction
This is the ninth year that our joint member agencies have come together for this conference. It’s been a wonderful five-way partnership. But the honorific of “greatest crossover of all time” must be awarded to the theme and corresponding title of my remarks today.
For those who know of my proclivity to lace my speeches with music references, no, I am not referring to the crossover of Taylor Swift and Travis Kelce. Instead, I’m referring—of course—to the epic 29th episode of Star Trek: The Next Generation, titled “Elementary, Dear Data,” in which 221B Baker Street meets the Enterprise-D in a brilliant Sherlock Holmes-style mystery. It’s a perennial fan favorite for Trekkies and Sherlockians alike. But the reason I mention it today is that I’d like to speak about how data—transaction-based, transparent, and well documented—is elementary to our collective understanding and decision-making at the Federal Reserve and beyond.
Before I get in any deeper, and rest assured I will, I need to give the standard Fed disclaimer that the views I express today are mine alone and do not necessarily reflect those of the Federal Open Market Committee (FOMC) or others in the Federal Reserve System.
Data Dashboard
Sherlock Holmes stressed the importance of “data, data, data.” I, too, have often been quoted emphasizing the need to be “data dependent.”1 In Sherlock’s world, data means fingerprints, footprints, handwriting, and ciphers. In our world, data means economic indicators like gross domestic product, the Consumer Price Index, and the unemployment rate. These numbers make up the dashboard of what’s happening in the economy.
These are numbers we can trust. Government agencies in the U.S. and abroad put an enormous amount of effort into the design, methodology, and implementation of these official statistics. While no specific data series is without its limitations, we have an excellent understanding of how these numbers come together and behave over time.
Financial markets have their own data dashboards too. Equity indexes, bond yields, and the value of the dollar are just a few of the measures used to understand what’s going on in markets.
And policymakers like me aren’t the only ones who are data dependent. Businesses, households, journalists, and so many others rely on data to support their understanding of the world around them.
Capital Mistake
To be clear, having access to an abundance of data is a wonderful problem to have. Think about the plethora of data you can pull from a terminal and the endless ways to use it. Many of us have spent more hours than we’d like to admit analyzing, plotting, and running regressions on data. And while we assume that most data are “good,” there are, unfortunately, “bad” data, too. The very same terminal that gives you access to numbers that are supported by millions of transactions could also give you access to numbers that have not seen a transaction in quite some time, if ever. And it can be very hard—too hard, in fact—to spot the difference.
This is dangerous. I will echo the warning of our detective friend Sherlock Holmes: “It is a capital mistake to theorize before one has data.” I’ll go a step further and say that it is a capital mistake to theorize before one fully understands the origins of the data one has downloaded.
LIBOR No More (But Still Important)
Of course, the prime example of a capital mistake of this nature is LIBOR. Which, by the way, officially ended 139 days ago, not that I’m still counting.2,3 Good riddance! OK, I had to slip in one music reference, but that’s all.
The lesson of LIBOR is that there are times when what we think of as strong financial market data is only a mirage. LIBOR was one of the most widely used benchmarks in the world, underpinning hundreds of trillions of dollars—that’s trillions, with a “T”—of financial instruments and contracts. We all know how that ended: LIBOR was inherently fragile and subject to manipulation. It is astonishing to think about the entire global financial system relying on the small set of transactions underlying LIBOR. In fact, during periods of financial market stress, LIBOR was based on no transactions, which is what opened the door to the fraud and bad behavior we now associate with it.
LIBOR was a wake-up call. It led to the collective realization that things we think of as data may be merely smoke and mirrors. That’s why the Federal Reserve, the Financial Stability Board, and many others invested heavily in the transition away from LIBOR to robust reference rates—like the Secured Overnight Financing Rate (SOFR)—that are accurate, carefully constructed, and transparent.
Good Data
As we move forward in the post-LIBOR era, it’s important that we continue to prioritize transparency and clarity in data, especially financial market data. This is particularly true in the age of AI, when the sources of data are harder to trace.
So, in addition to SOFR, what do “good data” look like? How can we, as Captain Picard would say, “Make it so”?
To start, I’d point to the Principles for Financial Benchmarks set forth by the International Organization of Securities Commissions (IOSCO), which established a very stringent and specific model for reference rates. As an administrator and producer of reference rates, the New York Fed is committed to producing rates aligned with this gold standard.4
Other examples include our regular market surveys.5,6 Questions for both the Survey of Primary Dealers and the Survey of Market Participants are readily available on our website, as are the respondents and aggregated responses. Our Survey of Consumer Expectations follows a similarly demanding standard.7 If data are worth using, it’s absolutely worth the effort to make sure people know what they are getting.
We have also increased transparency with Trade Reporting and Compliance Engine (TRACE) data. This initiative, which was started in 2017 to fill gaps in Treasury transaction data, is a significant component of a broader interagency effort to enhance the understanding and transparency of the Treasury securities market. This has proven valuable, as it has allowed the official sector to closely track developments in the cash Treasury market. Also underway are efforts to further increase the transparency of these data to the public, including plans to share transaction data for on-the-run nominal coupon securities.8,9
Can’t Make Bricks Without Clay
Efforts like these are important because increased transparency and clarity around data lead to better decision-making. “I can’t make bricks without clay,” Holmes said in The Adventure of the Copper Beeches. He understood that he first needed to compile the facts before building out the case. I made a similar point in 2021 about LIBOR: if you try to build on a foundation that is not absolutely sound, you are risking trouble at some point in the future.10
Unfortunately, this issue is frequently underappreciated by users of financial market data, as transparency and information on the sources of data are often scant.
I will use my favorite example of inflation options to illustrate the broader point. During the recent bout of high inflation, some journalists, researchers, and analysts have trumpeted the “market’s estimate” of the probability of certain high inflation outcomes using “data” on the prices of inflation options. These so-called “prices” are easily downloaded from data platforms.
But here’s the catch: based on our market contacts and public reporting of derivatives transactions, these aren’t data at all. There have been no trades reported in the U.S. inflation options market since early 2021. The so-called data that people are citing are generated by a model, not by investors putting real money on the line, as is frequently claimed. The situation is similar in Europe. Although there are scattered trades in euro area and UK inflation options, market contacts tell us that liquidity in those markets is extraordinarily thin, and most of what little activity there is relates to complex financial products and the risk of deflation, not to hedging against higher inflation.
More broadly, there is an opportunity for central banks, governments, and the private sector to come together to improve data transparency and accuracy. It takes a global village—regulatory groups, industry groups, and global standard setters like IOSCO all have a role to play. We must use forums and collaborations like these to think through opportunities to further improve data transparency. One benefit of this conference is the partnerships it has fostered. Participants have accomplished so much by working together. And given our collective reliance on data, this must continue to be a priority moving forward.
It’s important to acknowledge that increasing transparency may, in some cases, face challenges. But that is not a reason to shy away from our work in this space. Past experience has shown that there are solutions that allow for greater transparency and confidence in data without distorting or undermining the markets themselves.
Closing
I hope that you take two things away from my remarks today. The first is that you’ve been inspired to go home and dust off your Star Trek DVDs. The second is that you’ll heed my call to action: We must continue to work together to increase data transparency and understanding, so that we can have greater market confidence and make better decisions.
It is elementary, my dear colleagues.