
Why 3GB heap size for LiveView?


Sharad Honavar


The default heap size of a LiveView engine, defined in the ldmengine config via -Xms, is 3GB, yet the server still complains that it's not enough, and the allocation shows up in the OS (Windows) when the node starts up. The doc says "LiveView fragments require at least 3072 MB, up to 8192 MB or more. The suggested minimum size is 4096 MB."

Is there some rationale behind this? My tables are generally small, ~100-1000 rows of 4-5 columns each, but I may use many such table fragments in a node (or across nodes) on a single machine. If each of these smaller tables takes up 3GB, that seems wasteful. Was the guidance intended for high-volume streaming data apps requiring a truncation policy etc., which I can ignore if mine are many smaller, non-growing (but changing) tables?

Thank You


Hi shonavar,

 

Before I even think about trying to answer your question, could you clarify some more what you mean by "may use many such table fragments in a node (or across nodes) on a single machine"?

 

Like, sketch out your proposed solution architecture some more.

 

It sounds like you are thinking about starting up a separate LiveView engine per LiveView data table, with each engine having a different LiveView Fragment that defines only one (small) LiveView data table. Is that what you are thinking?

 

(So far as I know, you may only have one LiveView engine per StreamBase Runtime node -- this is a logical restriction in TIBCO Streaming architecture -- so if I've guessed your design, it's already a non-starter, but maybe I mis-inferred your intent.)


"It sounds like you are thinking about starting up a separate LiveView engine per LiveView data table, with each engine having a different LiveView Fragment that defines only one (small) LiveView data table. Is that what you are thinking"

 

Correct

 

Thanks Steve. First of all, whether I can run multiple LiveView (LV) tables in one node or not doesn't resolve the issue of whether there's a special reason, which I don't know of, why I cannot downsize the engine config to less than 3GB per table/engine if I only have small but fast-changing tables. If there isn't, the engines are each going to eat up/preallocate memory that isn't needed, even if I shove only one LV table per node/engine/machine and use multiple nodes on a machine for multiple LV tables. Right now it's only one LV table, which is in the deployment phase for our new installation pilot, but going forward I envision multiple LV tables in a node. Or, if that's not possible, as you suggested, in separate nodes.

 

As for the one-LV-table-per-node restriction, I don't see why not. I could add a second table in the LiveView Project Wizard and define it as, say, an Aggregation table or a Transform table data-sourced from another table in the same project, or add a Publisher app targeting multiple tables from one EventFlow data source.

 

Since each LV table fragment runs in one JVM engine, multiple of which can run in one node, I could alternatively add multiple LiveView projects to a single Application Deploy project running in a node with multiple LV table fragment engines. The engine affinity config leads me to believe that once a fragment is attached to an engine, it does not change (get swapped in and out across engines) at execution time. And one or more EventFlow fragments could feed these tables, either in the same node or from other nodes (using the Container feature of EventFlow). My understanding is that each running LiveView fragment, while hosted in a JVM engine, opens a service port to talk to the EventFlow fragment(s) running in other engine(s) in the same node or another.


Wow, that's a whole bunch of ideas all tangled up together, and I am having trouble untangling your actual goals from your more detailed technical questions. Because I can't yet tell whether your questions are actually all that relevant to your goals, I don't know yet what to suggest.

But I encourage you to get away from the idea that LiveView engine processes are ever going to be particularly small. That's not a fight you are going to win. There's at least a gig or so of overhead just to start one up. Why? Why is a cow? It just is, at least for now.

 

So I ask again: what is it you are trying to accomplish by having one LiveView table per LiveView engine process?


Sorry, my memory problem. Revisiting the Concepts doc, I saw this:

 

"LiveView fragments, which consist of one or more definitions of Liveview  tables that are to contain live, streaming data presented by a TIBCO LiveView server...." Awesome! I misunderstood wrongly thinking that I can only have one Liveview table per fragment running in it's own engine

So (1) in my last post was correct: I can, and want to, run multiple LV tables in my app in the future. I don't care if it's all in one fragment and I cannot have 2 fragments running one table each in separate JVM engines.

Now my worry is: does the fragment in the JVM take up ~3GB per fragment running in an ldmengine, regardless of whether I have 1 or 3 LV tables, OR 3GB per LV table in the single fragment?

And I can easily envision a use case where I want to expose 2 or more LV tables (say both the source and the aggregated table) in the fragment to downstream clients. In that case, can I have more than one ClientAPIListener port, attached to each of the LV tables? Or will I be forced to put them in separate nodes?


Other than the worrisome memory issue, my current pilot project of one LV table in one node, fed by an EventFlow data-source sbapp DataOut stream (the sbapp doing a CDC poll of SQL Server to generate the DataOut to LV), which I deployed using a Deploy Application, runs fine!! It creates one ldmengine, but it's worrisome for the future goals: 3 tables on one machine would require a ~32GB-memory Windows server machine, and I am already using 50-70% of an 8GB dedicated machine with all the Windows server stuff and nothing else. We have many small, fast-changing tables and so want to optimize. I'd like to get away with configuring this engine for the one LiveView table (currently) with, say, 1GB instead of the default 3GB, since I have only one ~1000-row table with 4 columns, taking a few MB.

Looking at the JVM memory (in Node Manager): before starting the node, only about 0.5GB is used. Then, after starting the one-LV-table node, there's a constant 20-second cyclic sawtooth pattern where the memory grows linearly to 2.3GB and drops abruptly to 0.5GB, again and again -- maybe the node/app is creating all kinds of unused objects which the garbage collector comes and sweeps away every 20 seconds. If the GC blows away 1.8GB of objects every 20 seconds, that means they are redundant, unreferenced garbage. I am going to try reducing to 1GB and look at the effect, maybe in combination with increasing the garbage collection frequency in the jvmArgs.
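(For anyone wanting to reproduce this observation without Node Manager, here is a minimal, plain-JDK sketch of the same heap sampling via the standard MemoryMXBean. Nothing here is TIBCO-specific, and it would have to run inside, or be adapted into, the JVM you actually want to watch.)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints used/committed/max heap once a second; a sawtooth in "used"
// is the garbage collector reclaiming unreferenced objects.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            Thread.sleep(1000);
        }
    }
}
```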

 

--------------------------------------------------------------------------------------------------

 

Sorry, I didn't mean to mix in this issue of multiple LV tables, but talking of future goals -- I want to have multiple LV tables using the Aggregate/Transform mechanisms (I can open a separate thread on that if required). You seem correct; I am now confused about only one "LiveView Fragment" per "StreamBase Application" -- I just saw this in the LV Admin -> LV Server System Config guide.

 

Wow! How come the whole LV/EventFlow development environment in Studio offers features like the Aggregator/Publisher/Transform mechanisms from a source LV table to a target LV table in the same app project -- implying more than one LV table in the app -- which I was hoping to use in the future (I have not tried them yet)? The alternative would be running the tables on separate nodes/containers and somehow doing the Agg/Transform functions across them.

 

Unless (1) one "fragment" can accommodate multiple tables (through threads in the same JVM engine internally, maybe), OR (2) I have to configure multiple nodes and engines in the same application project with a custom ..dtm.configuration.node conf file defining 3 nodes for, say, 3 related LV tables, assign respective fragment affinity to single ldmengines in these nodes, and use the SB inter-container communication facility to somehow do the Agg/Transform across nodes.

 

The StreamBase Runtime Overview seems to indicate that I can have multiple engines per node, each running a fragment (SB or LV), and the app/node start would automagically create multiple ldmengines for the multiple LV tables in my app project. If not, what a waste of the multiple-engines-per-node feature.

 

----------------------------------------------------------------------------------------------------


OK, phew. Yeah, you still have so many issues to sort through here, but it sounds like you're getting there on your own, mostly. But to provide some more information/confirmation:

 

1) Yes, absolutely, you may have many LiveView data tables in a single LiveView Fragment. That's how the product team expects people to use LiveView.

 

2) There is a not particularly obviously documented limit of 1 LiveView engine per Streaming node. My understanding, after a couple of brief conversations with a couple of LiveView product engineers, is that the limitations stem from the use of various facilities provided by node services that are only provided at the node level of abstraction and not at the engine level. IIRC, for example, the design and implementation of the new-ish features where Alert Rule definitions are maintained consistently across an entire cluster only work when there is one LiveView engine per node, and there is some use of the service registry that doesn't distinguish by engine. These didn't sound to me like fundamental limitations that couldn't ever be lifted, but they exist in current Streaming releases at least, and I haven't heard much in the way of active plans to remove the restrictions. (Customer feedback direct to product management on such things never hurts.)

And yes, much of the documentation about StreamBase clusters, nodes, and engines talks about the ability to have multiple engines per node. This is true as a general matter of the StreamBase Runtime architecture, yes. It definitely works for StreamBase engines running EventFlow fragments! But it isn't supported for LiveView engines -- it's not an infrastructural limitation in the Runtime architecture -- it's a design choice the LiveView folks made, perhaps influenced by some fairly basic usage patterns of node services or some limitations in particular services. Oh well. We have what we have.

Now, all that said, there's nothing to restrain you from running more than one node per machine -- so you can have different or replica LiveView engines on the same machine, just only one per node. That said, if you are deploying Streaming into a Kubernetes-based deployment environment, there seems to be at least a strong preference in the current default tooling to have one StreamBase node per Kubernetes node. Perhaps that preference can be overridden by the end user, but it would for sure be some work for the user to do, and there's probably not a lot of guidance extant for that kind of thing.

 

3) Right, the recommendation is for a minimum of 4GB heap per LiveView engine. As for the somewhat paradoxical default setting of 3GB: my understanding is that this default is below the recommended minimum so as not to overly strain people's 4GB laptops -- which used to be a very common configuration a few years ago -- at least for the common first apps and demos new users typically want to run. So that 3GB default might be profitably revisited. Not my call. But to answer your question: that recommendation is for the whole engine process, not per LiveView data table. So if you have lots of small tables in the same process, you should see only incremental memory usage growth per table over the baseline amount of memory used by the engine process.
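(Side note: if you ever want to confirm what heap a given engine process actually ended up with, any bit of Java running in that JVM can report it via standard JDK APIs -- a quick sketch, nothing LiveView-specific about it:)

```java
import java.lang.management.ManagementFactory;

// Reports the effective max heap and the raw JVM arguments the process
// was started with (e.g., whatever -Xms/-Xmx the engine config applied).
public class HeapReport {
    public static void main(String[] args) {
        System.out.printf("max heap: %d MB%n",
                Runtime.getRuntime().maxMemory() >> 20);
        System.out.println("jvm args: "
                + ManagementFactory.getRuntimeMXBean().getInputArguments());
    }
}
```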

 

So, how do you provision your LiveView process? You are correct that the minimum runtime process size is smaller in practice than the 3 or 4GB recommended. Those numbers were indeed meant for fairly high-publish-frequency applications and contemplated tables that would be using some memory, though not contemplating that the tables would be huge. But the guidelines (which I don't think we've published anywhere) for sizing LiveView tables go like this:

 

1. Estimate how many bytes per row your table will use (on an average-sized row). For this part of the exercise, count each character in a string as 1 byte -- we will account for Unicode later on.

2. Estimate the maximum number of rows you want to retain in memory at any given time. (People usually think in terms of the size of the table in minutes or hours of data retained, but we can't calculate directly from that because each use case is different. Converting from "time" to "rows" involves understanding from the use case how many row insertions/deletions are going to be happening per unit of time. So that exercise has to be left to the solution architect.)

3. Multiply the bytes per row by the max number of rows.

4. Multiply that number by 3.

 

And that's how many bytes per table to account for when setting your max heap size for the engine process, over and above the "runtime overhead" number -- which, for argument's sake, say is 4GB. But you may be able to reduce that based on your own observations, as you are already doing.

 

Why multiply by 3? It's a rule-of-thumb guideline. In practice, it might be generous, but in my experience it's actually pretty close to what you need to have a comfy relationship with the garbage collector. The reasoning behind the 3 is: a) characters in strings are Unicode and always take at least two bytes and sometimes more; b) there's a certain amount of buffering going on when publishing into, querying from, or routing data between tables, so you want some headroom for that per table; and c) you never want to fill up your process memory so tightly that the garbage collector is stopping the world, etc. -- the "x3" guideline helps with that, too. So the guideline is a starting point for provisioning estimates that can be refined by actual observations in the deployment environment (or testing environment, if you trust your tests) over time for any given application.
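(To make the arithmetic concrete, here's a toy sketch of the steps above in Java. Every input number is hypothetical -- plug in your own estimates:)

```java
// Back-of-the-envelope LiveView heap sizing following steps 1-4 above.
// All inputs are made-up examples, not recommendations.
public class LiveViewHeapEstimate {
    public static void main(String[] args) {
        long bytesPerRow = 200;          // step 1: average row, strings counted at 1 byte/char
        long maxRows = 1_000;            // step 2: max rows retained at any given time
        long perTable = bytesPerRow * maxRows * 3;  // steps 3 and 4: the x3 rule of thumb

        long runtimeOverhead = 4L << 30; // baseline engine overhead; 4GB for argument's sake
        long tables = 3;                 // however many tables share the engine

        long heapBytes = runtimeOverhead + tables * perTable;
        System.out.printf("suggested max heap: ~%d MB%n", heapBytes >> 20);
    }
}
```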

 

4) All the tables (and other client API services, for that matter) in a given LiveView engine are accessible from a single LiveView Client API listen port. In fact, you only get one LiveView client listen port per engine process; there's no way to have any more. If you have more than one LiveView engine per machine, make sure to configure non-conflicting client listen ports for each engine!
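(A trivial way to sanity-check your planned listen ports before starting multiple engines on one machine -- plain JDK, and the candidate ports below are just examples, 10080 being the usual LiveView default:)

```java
import java.io.IOException;
import java.net.ServerSocket;

// Briefly binds each candidate port to see whether it's already taken.
public class PortCheck {
    public static void main(String[] args) {
        int[] candidates = {10080, 10081, 10082}; // example ports only
        for (int port : candidates) {
            try (ServerSocket s = new ServerSocket(port)) {
                System.out.println(port + " is free");
            } catch (IOException e) {
                System.out.println(port + " is in use");
            }
        }
    }
}
```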

 

5) All that stuff you are talking about in LiveView -- data tables, transforms, aggregate tables -- those kinds of processing pipelines typically all exist within the same LiveView fragment definition. There's no direct way, for example, to have a transform and a "transformed" table in a different LiveView engine than the one the source table is running in. Those chains of processing are all meant to be in the same fragment definition. (I guess you could split them across multiple fragments, but then you also have to figure out how to connect them together. Maybe with LiveView Publish adapters? A message bus? An exercise for the solution architect.) A transform or an aggregation data source doesn't know how to connect to a table in any LiveView engine process other than its own. And, to me, breaking up LiveView processing chains into multiple engine processes sounds like it might be both tricky and maybe even slow. I haven't ever done that myself.

 

Hope these thoughts are helpful. 


Thanks a lot. This helped dispel misconceptions and stimulate architectural directions. The numbers below refer to your numbering.

 

3) The sizing is something I will have to play with as I put more stuff into production, and I can always upgrade from my 8GB Windows dev server if needed. It is not the highest priority, but it helped lead into other architectural implications.

 

4) That is a relief. As long as I can expose multiple tables through one LV port, I am good. I should have guessed: I would have seen more than one table listed in my Excel SB/Live Datamart client's LV server connection if I had more than one table in the server, which I don't at the moment. (Right now my only clients are the Excel SB plugin clients.)

 

5) Distributing LV tables across nodes is something that is definitely very useful. Most organizational data needs, present and future, modeled into LV tables as business entities, are usually interrelated. The evolution of an architecture in practice (and our vision) starts with implementation in one application area and grows to other related departments/areas, and we don't want to grow that one initial node by adding tables as we migrate more apps/data, ending up with one humongous node with n LV tables all needing bits of data enriched from each other. Your pointer to Publish adapters seems like a solution. From my understanding, it seems I can do this using the "LiveView Query Input Adapter" at the source LiveView fragment/node/app, with registered queries (continuous or snapshot, depending on the app/data/enrichment requirements of the target table), feeding a "StreamBase to StreamBase Output Adapter" in the source node/app shooting into the input stream of the target host/port app; OR the output stream in the upstream SB app/node is read by the downstream "StreamBase to StreamBase Input Adapter", which then feeds into a "LiveView Publish Adapter" in the target node/app/table. Ports on the same machine of course have to not conflict, by parametrizing them in the conf or some other hard-coding mechanism.


I'm not going to confirm or deny your ideas in response to 5) above, because I can't tell what it is you are trying to accomplish, exactly.

 

It sounds like you are thinking about building some kind of "LiveView Routing Proxy" using some combination of LiveView and StreamBase engines, and if that's a correct guess, I suspect that's maybe theoretically possible but probably hard to do in practice. But maybe you are trying to solve some other problem.

 

It might not be the worst thing if separate LiveView engines that were hosting different sets of tables targeting different application areas had different LV URIs (hostname/IP address and/or ports). Or it might be, depending on a bunch of factors in your organization. I'm not in a position to know.


Thank you Steve

 

I cannot nail down an exact architectural outline until I know the capabilities and limitations. Generally, here are the goals:

 

1) We want to use LiveView tables fed from live data -- changing transactional SQL Server DB tables, market data, Bloomberg, etc. -- sourced into StreamBase from TIBCO adapters, custom enqueuing apps, or CDC (change data capture, which is the first project I have implemented), so they can be used by downstream clients like Excel and REST/.NET dequeuers. We're looking at 10-20 tables across all application areas in time as we expand the implementation.

 

2) As in any large custom app, the application areas and the 10-20 tables cannot be neatly separated into independent silos of application logic. There are networks of dependencies across the live tables we want to model. Usually these dependencies involve enrichment processes like aggregations/joins/transforms etc., which could be complex functions of the base tables.

 

3) Having said that, my architectural concern was the ability to partition the model into different nodes and be able to access (query/publish) LiveView tables in other nodes without being forced to embed 10-20 tables into one node. A reasonable expectation, alluded to in the node/table architecture, which you seem to indicate is non-standard.

 

 


I think we're well into solution architecture territory that's not that easy to cover in a TIBCO Community post. That said, I will offer this broad rule of thumb:

 

"If the consumer is a human, using a LiveView table is probably what you want. If the consumer is a program, a StreamBase Query Table is probably what you want."

 

That's one way of saying that multi-engine LiveView table-oriented processing chains are not squarely in the design center of LiveView. The LiveView processing chains generally want to be in one engine, however large that ends up being. If you want a multi-tier streaming architecture, consider using EventFlow engines for all but the human-facing tier.

 

Now, there's a limit to what you can put in one LiveView engine, and that's addressable JVM heap.

 

So, perhaps it is not going too far to say that the soul of LiveView application architecture is balancing these two constraints.

 


Thanks again. Our downstream clients are machines, primarily the TIBCO Excel plugin for now. The main strength of LiveView is its "client layer" on top of a "Query Table", which is what we put our bets on: it handles an initial snapshot and then continuous queries, using REST or another LV client API -- something I would have to custom-build on top of StreamBase Query Tables if we eliminated LiveView. Too much work.

 

The enterprise need is to coordinate all these multiple source/enriched tables efficiently and present them to downstream REST/HTTP/.NET/Excel clients. The TIBCO Excel plugins work like a charm with LiveView: just connect and forget, and it gets SNAPSHOT+CONTINUOUS, all auto-synced. We use Tableau, so we're not planning on using Spotfire Analyst for now (though we do have a license for it), but maybe Tableau clients too.


The TIBCO StreamBase Add-in for Microsoft Excel would, I think, count more as a "human" consumer than a "machine" consumer, though admittedly those categories don't have entirely bright lines between them. I don't know what you are doing with the LiveView table data in Excel, but, sure, having the "machine" drive even chains of Excel formulae in multiple sheets of a workbook should work quite nicely. The LiveView data tables in this case still operate as a human- (or, rather, LiveView Client-) facing tier in the stream-processing landscape.

 

What I'm trying to gently steer you away from is having multiple tiers of LiveView data tables in different engine processes where the query results from one tier are published back into another tier using LiveView Query Clients and LiveView Publisher Clients. Or to have a LiveView Query Client somewhere that is "persisting" the contents of LiveView Tables elsewhere (like a relational database or Hadoop or something). Those architectures tend to be quite problematic. Better to use tiers of EventFlow engines with Query Tables and maybe a messaging bus between the tiers, in most cases. Then persisting the streaming data (if that's a requirement) is done in parallel (perhaps with an appropriate output adapter) to publishing the data to LiveView tables for ad-hoc read-only querying.


"Those architectures tend to be quite problematic." Any pointers on why,

 

Ad hoc queries are not very useful in the financial world, where real-time live streaming data with value-adds -- like REST pushes and LiveView-client REST in-memory streaming DBs -- is what we were hoping for.

 

If downstream app A needs Bloomberg price data enriching position data from another adapter (or SB source) as ad hoc queries, another downstream app B needs the BBG price data enriched with, say, another non-Bloomberg feed as streaming, and a third app C just wants plain BBG prices, I don't want to maintain 3 Query Tables or 3 LiveView tables. I want to reuse one LiveView BBG copy: just one LV repo of BBG prices, joined/enriched with the other source data, which clients can either query or stream without a separate app for each.

 

 


Ah, I wasn't being clear about what I mean by "ad-hoc" in relation to queries. I'll explain more.

 

From a TIBCO Streaming-centric view of the world (a perspective I get to indulge in; it's my job, you know) -- a flow* of EventFlow operators can be viewed as a relatively static but continuous query against one or more StreamBase streams, perhaps with StreamBase Query Tables participating in the query. It's static in the sense that one has to change the EventFlow code itself -- which at the very least requires removing and adding a StreamBase container at runtime, but more typically involves removing an application and installing a new version of it. The lifetime of an EventFlow "query" is thus tied pretty closely to the lifetime of the StreamBase engine in which it is instantiated.

 

LiveQL queries are ad-hoc in the sense that the queries are not part of the application logic, but rather can be started and killed by LiveView clients. LiveQL queries are usually continuous as well, but have their own lifecycle largely independent of the lifecycle of the LiveView fragment instance they are querying against.

 

I think you are understanding the notion of ad-hoc query in some different sense. I'm not claiming my use of the term is definitive or universal.

"Those architectures tend to be quite problematic." Any pointers on why

 

Well, because LiveView data tables aren't really meant to be queried by other streaming applications -- and I would say especially not in automated trading applications that are looking at time-sensitive streams of, say, market data and orders, at least if those streams are high-frequency in the sub-second or sub-millisecond sense. LiveView processing chains can be a bit batchy, whereas StreamBase stream processing is event-by-event. (Though if you are updating things no more than once a minute, it makes no particular difference. And there are -- much to my astonishment, having grown up in a high-frequency world -- plenty of, for example, wealth management apps out there for which once a minute is total overkill.)

 

Also, LiveView is meant to be, basically, a cache of segments of streams. If you start persisting from your cache, you're persisting from the wrong place in the architecture. You'll also end up wanting more cache coherence than LiveView was designed to provide once you have replica LiveView engines going (which eventually you probably will), and you may want more than the eventual consistency that LiveView query semantics provide. So it's just not a good road to go down very far for, say, automated trading applications or any kind of streaming data ingestion application.

 

That is, if you ever want event-by-event processing, strict consistency, and even transactionality, it's StreamBase that's the tool as opposed to LiveView.

 

And whether you really want to build some big denormalized data structure as the center of your stream-processing world, rather than as something you build near the edges as needed -- well, really, that's up to you. It's a very application-dependent (and to some extent performance-dependent) decision. It'd be presumptuous of me to make grand, broad, sweeping pronouncements about that issue, although some people do, I guess. Much depends on where in the overall processing chain you are, who your users are, what your resource constraints are, how latency-sensitive you are, etc.

 

(*Oddly enough, a flow is not a well-defined term in StreamBase EventFlow. (Neither is event, but I digress.) But flow is a useful term, sometimes. What I mean by a flow is a directed graph of EventFlow components. That graph could span EventFlow modules, typically, though usually not containers. But it could.)


Thanks for the in-depth perspectives Steve.  

 

Firstly, when I said trading apps and that ad hoc wasn't working for us, I did not mean high frequency. (Though my past in high-frequency algo was all in C++, no bells and whistles.) At my current place, we are a wealth management shop. But trades coming into StreamBase every couple of MINUTES from MS SQL Server (using my CDC implementation) need to be SYNCHRONOUSLY pushed to analytic clients (the most important now being the legacy Excel apps).

 

The reason we got StreamBase initially was that 10+ (and growing) VBA or VB.NET plugins, plus Bloomberg and other OMS plugins, in traders' spreadsheets were creating havoc. The LiveView/Live Datamart plugin is supposed to run efficiently in a separate Excel COM process and also offload most of the heavy logic to TIBCO SB/LV processes talking to backends, which seems to be working well in my initial tests.

 

Now, I do use Query Tables (JDBC and regular) in my app to do intermediate StreamBase/EventFlow stream processing/enrichment of data (from static JDBC query operator calls to master DB backend tables in EventFlow) before feeding into a LiveView table. I am aware of querying the stream. The biggest drawback of querying streams is that we would have to write the SB query logic at the client -- in our case, back in VB.NET in Excel, back to the same problem. The LiveView table adds the "client layer" for ad hoc snapshots AND PUSH capability, as outlined in the Concepts guide: facilities like snapshot+continuous, and future possibilities for REST or other LV client types in the TIBCO .NET LV APIs.

 

About performance issues like cache coherency etc.: I am OK with them as long as there are no memory leaks or slow degradation of memory performance. My guess is the JVM will optimize garbage collection, memory allocation, and cache behavior.


Yeah, thanks for the clarification. I don't see any reason why you couldn't use LiveView tables directly from Excel here. And using SB to create what goes into the LV tables sounds on point.

 

I'm going to quibble about whether or not any of that is or could ever be SYNCHRONOUS to the push from CDC, though. Querying from LiveView is ALWAYS asynchronous to publishing into LiveView. And goodness knows the RTD server in the Excel Add-In is not synchronously displaying data queried from LiveView -- that's a property of Microsoft's RTD architecture and not a TIBCO choice!

 

But both of these asynchronicities are probably measured in small numbers of milliseconds, so if you are only getting updates in periods measured in minutes, it doesn't seem like anyone would notice the difference in practice, if I'm understanding you correctly.


I think this sounds fine -- Excel accessing LiveView tables that are fed by EventFlow apps.

 

I'll mention that only the CDC --> EventFlow --> LiveView Publish Adapter leg can be SYNCHRONOUS here, and then only if you are being very deliberate about it.

 

LiveView publish is ALWAYS asynchronous to LiveView query processing.

 

The Microsoft RTD server architecture -- which is what the StreamBase Excel Add-In is using -- dictates that the information coming into the RTD server (in this case from a LiveQL query result set) is also going to be asynchronous to the display of the corresponding data to the Excel user.

 

But the latency introduced by these asynchronicities is typically measured in small numbers of milliseconds at worst -- if your update period is comfortably measured in minutes, it's doubtful any human viewer of said data would notice.

 


Yes, since LiveView's output to downstream clients is queue-based and push-based, delivery to external clients must be async. But we can live with the small latency. My bigger concern is the integration/coordination of multiple LiveView tables across nodes and applications, after your comments that it may not be standard practice and may be problematic, if I understand correctly. I am going to try the apparent solution of using a combination of the "LiveView Query Adapter", the "LiveView Publish Adapter" (not the in-project "EventFlow Publish data source"), and the "StreamBase to StreamBase Input/Output Adapters".

I explicitly make no statements here about whether I think that design will work for you or not. As I mentioned before, we're kind of at the far edge of what a solution design discussion can accomplish well on a Community Answers thread, and I think I'm going to leave it at that edge and not try to venture beyond.