Introduction
Here's a way to simulate live streaming data from an existing data set. Let's take the simplest case: you have a data set stored in a CSV file, and you want to treat each successive row of that file as a streamed event.
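The core idea, independent of any particular tool, can be sketched in a few lines of Python: read the CSV row by row and hand each row to a callback after a short delay. The file name, delay, and handler below are illustrative assumptions, not part of the wizard-based setup that follows.

```python
import csv
import time

def replay_csv(path, emit, delay_s=0.1, loop=False):
    """Replay each row of a CSV file as a 'live' event.

    path:    CSV file with a header row (e.g. tempdata.csv)
    emit:    callback invoked with one row dict per event
    delay_s: pause between events (0.1 s = 10 events/sec)
    loop:    restart from the top when the file is exhausted
    """
    while True:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                emit(row)          # treat this row as one streamed event
                time.sleep(delay_s)
        if not loop:
            break

# Example: print each simulated event forever
# replay_csv("tempdata.csv", print, delay_s=0.1, loop=True)
```

The StreamBase Feed Simulation described below does essentially this for you, with looping and rate control handled by the wizard.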
Solution
To set up the feed simulation, follow these steps:
1. Start up StreamBase Studio and navigate to the Authoring Perspective.
2. From the Studio menu bar, select Connectivity Wizard....
3. In the Connectivity Wizard dialog, leave the Create New Project radio button selected and click the Next button.
4. On the Select a Source dialog screen, click the radio button for StreamBase Feed Simulation and click the Next button.
5. In the Select a Data File wizard screen, click the Browse... button, select your CSV data file (tempdata.csv in this case), make sure (just for demo purposes) that the Automatically restart simulation when complete checkbox is checked, and click Next.
This dataset, tempdata.csv, contains 100 rows that might be readings from ten different thermometers measuring human body temperatures; it is a file I quickly created. The sample data I used is attached here as tempdata.txt; it's free to use. Just rename it to tempdata.csv after downloading.
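If you'd rather generate your own sample file, a few lines of Python can produce a similar 100-row data set. The column names, timestamp format, and temperature range here are assumptions based on the description above, not the attached file's exact schema.

```python
import csv
import random
from datetime import datetime, timedelta

# Generate 100 readings from 10 hypothetical thermometers, one per second.
# Schema and value ranges are illustrative, not the attached file's exact layout.
start = datetime(2023, 1, 1, 9, 0, 0)
with open("tempdata.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["timestamp", "sensor_id", "temperature_f"])
    for i in range(100):
        w.writerow([
            (start + timedelta(seconds=i)).isoformat(),
            f"thermometer-{i % 10}",                # ten sensors, round-robin
            round(random.uniform(97.0, 100.5), 1),  # plausible body temps in °F
        ])
```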
6. In the Configure the Table screen, for this demo, ensure that Time-series is selected.
Because we are looping the data from the data file, we want to create a unique key for each event, even as the data is repeated by looping. Your data may already have a natural primary key, and you may not want to loop, so you may make different choices.
7. Click Get Schema from Data Source to populate the schema of the LiveView data table the wizard will create, and click Next.
8. On the Data Retention screen, click Next to accept a default 60 minute data retention period.
This option should of course be set based on the characteristics of your own data set. LiveView data always resides in memory, so you want to be sure the table cannot grow without bound and eventually trigger the dreaded OutOfMemoryError. Definitely ballpark the math. For this example, at 10 events per second, that's 36,000 events per 60 minutes, and at maybe 50 bytes per event we'll accumulate less than 2MB of table data in an hour, which the server's default heap allocation of 3GB easily accommodates.
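The back-of-the-envelope math above can be checked directly (the 50-bytes-per-event figure is a rough assumption about row size, not a measured value):

```python
events_per_sec = 10
retention_minutes = 60
bytes_per_event = 50   # rough assumption about average row size

events_retained = events_per_sec * retention_minutes * 60
table_bytes = events_retained * bytes_per_event

print(events_retained)          # 36000 events in the retention window
print(table_bytes / 1_000_000)  # 1.8 MB, well under a 3 GB heap
```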
9. On the Add Server Authentication screen, accept the Allow any connections to the server default value, and click Next.
10. On the Summary screen, enter Project Name sensors, and Table Name temperatures, and click Finish.
The wizard will now create a LiveView fragment and application project. This may take some time.
11. On the Project Creation Successful screen, you can either Close or Start Server....
For this demo, choose to start the server.
12. Once the server is started, click Close.
13. Next, start Spotfire Desktop or Spotfire Analyst (10.x or above).
(Currently, on the Spotfire Web Player, streaming data requires a WebSocket connection to perform well. If the WebSocket connection fails, the application will use a fallback mechanism, resulting in a suboptimal experience: you might be unable to interact with visualizations, and data updates could become irregular.)
14. Click Connect to
15. Scroll to TIBCO Spotfire Data Streams
16. Select New Connection
17. In the TIBCO Spotfire Data Streams dialog, enter:
Server: localhost:10080
and click the Connect button.
18. In the Views in Connection dialog, under Available tables in database select user > temperatures and click the Add button. Under Views in connection, temperatures will appear:
19. Click the OK button
20. On the Add Data to Analysis screen, observe that temperatures is displayed and click the OK button.
21. On the Your data is ready! screen, click Start from visualizations.
22. Under Visualization Types, click Table
23. At this point you will see a basic table visualization that updates as the simulated sensor data streams into Spotfire.
Then you can make other Spotfire visualizations as you wish.