What’s next for the capital markets arena when it comes to unstructured content? According to research and consulting firm TABB Group, which specializes in the stock, bond and money markets, it’s time to turn text analytics to internally generated and disseminated unstructured data, which holds a high value for customized intelligence.
In new research, “Inner Voices: Harvesting Text Analytics from Proprietary Data,” research analyst Valerie Bogard and senior analyst Paul Rowady argue that text analytics tools have more use cases than were initially pursued. “Although ultra-low latency trading strategies were an early use case in this space, text analytics is no longer limited to just that,” Bogard said in an email to The Semantic Web Blog. “The use of machine-readable news has been widely adopted, and all major market data providers incorporate market-moving news content into their feeds.”
There are very broad applications for text analytics, she notes. “Obviously the main concern for many firms is how to apply it to the front office. We see opportunities to enhance both pre- and post-trade workflows by improving certain methods of risk management as well as trading strategies like volatility prediction,” Bogard says. “But there are also use cases that address the back office: having better analysis of internal data is a clear advantage to compliance. Beyond this, there are even business intelligence and operational efficiencies that can be identified and employed to optimize work practices.”
The increasing maturity of tools in key aspects will help the markets realize these ends. “We see a lot of room for growth in the processing aspect of text analytics. Although there have been incredible improvements in the fields of sentiment analysis, metadata and reference data even in just the past few years, we expect these fields to mature even further,” she explains. “Developing practices and creating hybrid solutions to make sure the context and content of the textual data are accurately represented is key for further expansion.”
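To give a concrete sense of the sentiment-analysis side of this processing, here is a minimal lexicon-based scoring sketch. The word lists and sample headlines are illustrative assumptions, not drawn from the TABB Group report; production systems use far richer models and context handling.

```python
# Minimal lexicon-based sentiment scorer for financial headlines.
# POSITIVE/NEGATIVE word lists and the sample headlines are
# illustrative assumptions, not taken from the TABB Group report.

POSITIVE = {"beats", "surges", "upgrade", "record", "growth"}
NEGATIVE = {"misses", "plunges", "downgrade", "breach", "loss"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) minus (# negative words) for a headline."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Acme Corp beats estimates on record growth",
    "Retailer stock plunges after data breach, downgrade follows",
]
for h in headlines:
    print(h, "->", sentiment_score(h))
```

A real market-data pipeline would replace the static lexicon with trained models and attach entity and reference-data context, which is exactly the hybrid work Bogard describes.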
The report contends that four critical attributes are required to make text analytics solutions easier to implement and utilize. Digesting a comprehensive array of data sources, a growing bulk of which are proprietary, is one of them. “We’ve already seen the number of data sources increase rapidly, especially with the advent of social media. It’s only natural to want to mine these sources for the potential actionable data they are putting out,” says Bogard. “Although some types of internal data don’t necessarily represent new data sources, the capabilities to start drilling into them have become mature enough to conquer these existing stores of textual information.”
Second is offering flexible modeling and output formats. “A key aspect of applying text analytics to new datasets is to be able to configure this new data into whatever format or model the user desires. This type of flexibility allows for the user to be creative in their search for signals or alpha,” she states.
Being highly scalable is a fairly straightforward attribute, given that Big Data is only getting bigger, she says. “So any system that is developed will have to be able to handle an increasingly larger workload and operational demand.” Number four on the list is offering lower-cost / higher-speed encryption, with Bogard pointing to the recent data breaches at Target and Neiman Marcus as prominent examples of why “the security of data needs no additional emphasis…. However, encryption standards have seen great advancements in recent years and seem to be up for the challenge,” she points out.