
Olivier Keugue Tadaa

Spotfire Team
Community Answers

  1. Olivier Keugue Tadaa's post in How to avoid empty cells when summing across multiple tables? was marked as the answer   
    Hi Sevda, 
    When you use multiple tables in a single visualization, there is always a "controlling table" from which the column matches apply. Therefore, in your case, "Table 1" will show all the values for the regions. For the ones that are not in "Table 2", a standard addition will show null (since the second value is null). I can propose a workaround: use the SN (substitute null) function on the other tables rather than the controlling table.

    -- This expression will be empty for missing data in "Table 2"
    Sum([value1]) + Sum([Data Table (2)].[Value 2])

    -- This expression will display the right value even for missing data in "Table 2"
    Sum([value1]) + SN(Sum([Data Table (2)].[Value 2]), 0)
    Let me know if this helps.
    See the attached sample image 
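    The null-propagation behaviour that SN() works around can be checked outside Spotfire. Below is a minimal plain-Python sketch (the regions and numbers are invented, and `sn` is a hypothetical stand-in for Spotfire's SN function) showing why a null term makes the whole sum null unless it is substituted with 0:

    ```python
    # Plain-Python sketch of Spotfire's null propagation and the SN() fix.
    # The regions and values below are made up for illustration.

    def sn(value, substitute):
        """Mimic Spotfire's SN(): return substitute when value is null (None)."""
        return substitute if value is None else value

    table1 = {"North": 10, "South": 20, "East": 5}   # controlling table
    table2 = {"North": 3, "South": 7}                # "East" is missing here

    for region, value1 in table1.items():
        value2 = table2.get(region)                  # None when the region is absent
        naive = None if value2 is None else value1 + value2
        fixed = value1 + sn(value2, 0)
        print(region, naive, fixed)                  # East: naive is None, fixed is 5
    ```

    The naive sum is empty exactly where Table 2 has no matching row, while the SN version keeps the value from the controlling table.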

  2. Olivier Keugue Tadaa's post in Why there is an "Empty" bar on my bar chart? was marked as the answer   
    Hi PBR,
    The category expression below creates an Empty (NULL) value:
    If(([Month_String]="${Months}") and ([Year]>=2005) and ([Year]<=2025), [Year], NULL)
    To fix this, I'd suggest you create a calculated column (e.g. Display_Year):
    Display_Year :::: If(([Month_String]="${Months}") and ([Year]>=2005) and ([Year]<=2025), [Year], NULL)
    Then use the below on the category axis:
    <[Display_Year] as [Year]>
    Finally, add this to the expression that limits the data:
    ([Time]>=DocumentProperty("StartDate")) and ([Time]<=DocumentProperty("EndDate")) and ([Display_Year] is not null)
  3. Olivier Keugue Tadaa's post in Subtract one day from transaction date was marked as the answer   
    Hi Tramle,
    Use this expression:
    DateAdd("day", -1, [Transaction time])
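    For anyone verifying the logic outside Spotfire, DateAdd("day", -1, ...) is plain date arithmetic; here is a small Python sketch with a made-up timestamp:

    ```python
    from datetime import datetime, timedelta

    # Equivalent of Spotfire's DateAdd("day", -1, [Transaction time]).
    # The timestamp below is an invented sample value.
    transaction_time = datetime(2024, 3, 1, 14, 30)
    previous_day = transaction_time - timedelta(days=1)
    print(previous_day)  # 2024-02-29 14:30:00 (2024 is a leap year)
    ```

    Note that subtracting one day correctly rolls over month boundaries, as in the leap-year example above.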
  4. Olivier Keugue Tadaa's post in Did Spotfire 14.0 LTS and TDV 8.8 use JAVA? was marked as the answer   
    Hi Mark,
     
    The Spotfire installation packages now include the Eclipse Temurin OpenJDK instead of the Oracle Java Development Kit used in previous versions.
    Note: Eclipse Temurin is the name of the OpenJDK distribution from Adoptium. The Eclipse Adoptium Working Group is the successor of AdoptOpenJDK. The primary goal of Adoptium is to promote and support free and open-source high-quality runtimes and associated technology for use across the Java ecosystem.
    Please have a look at the full article here 
     
    Regarding TDV Studio and Server 8.8,
    They also ship with Java version "17.0.7" 2023-04-18 LTS (not sure which distribution).
     
    I hope this helps
     
  5. Olivier Keugue Tadaa's post in creating a list box property control where to control several KPI formulas to show on the same graph was marked as the answer   
    No problem,
    For a single selection, proceed with the below steps
    1- insert a property control and choose List Box
    2- create a document property (e.g. expressions) of type String
    3- in the "set property values through", choose Expressions
    4- add your expressions (display name, and expression) (see below)

     
    5- on your y-axis, choose "custom expression" and add ${expressions} as the expression

     
     
    Now you can choose your KPI and view it on your visualization.

     
     
  6. Olivier Keugue Tadaa's post in how to combine two columns in the same chart without trellis was marked as the answer   
    Hi Enkeled,
    It depends on the chart type.
    For the line chart, you can set Line By to (Column Names), keep the Color By, and set Trellis By to (none).
    --
    If you use a combination chart, you need to do the following:

    Set Series By to (Column Names)

    Set Trellis By to (None)

    and use side-by-side bars as the layout
     
    I hope this helps


  7. Olivier Keugue Tadaa's post in Upload export with spotfire V2 restAPI was marked as the answer   
    Hi Apokrifit,
    " This API is used to upload & browse files of the Spotfire library.  The upload of a file is made through a number of calls."
    Basically, what is said here is that you can use it to upload a library item of any of the supported types (the list is the one you mentioned: https://docs.tibco.com/pub/spotfire_server/14.0.2/doc/html/TIB_sfire_server_tsas_admin_help/server/topics/spotfire_item_types.html).
    This is unfortunately not meant to replace the library import/export tool, which can be run either from the command line (you need the config tool) or from the Library Administration interface in the Analyst client. Indeed, the zip file is an archive that can only be interpreted by the import/export framework.
    To achieve what you are doing, I'd encourage you to install the config tool (https://docs.tibco.com/pub/spotfire_server/14.4.0/doc/html/TIB_sfire_server_tsas_admin_help/server/topics/running_the_configuration_tool_on_a_local_computer.html) and run a command similar to this one:
    config import-library-content --file-path="..\application-data\library\<the zip file name>" --conflict-resolution-mode=KEEP_NEW --user=melt --include-access-rights=true --item-type=all_items --library-path=/
     
    I hope this helps,
     
  8. Olivier Keugue Tadaa's post in Function “BinByEvenIntervals” is not working as expected. Is this a bug in Spotfire? was marked as the answer   
    Hi Rens
    In the visualization properties, go to the Category Axis, click Settings, and choose "Current filtered only" for "Evaluate axis expression on...".
     

     
    Let me know if this is what you expected. 
  9. Olivier Keugue Tadaa's post in How to insert a data read from ActiveMarkingSelection as a row into a new datatable? was marked as the answer   
    You should assign your data table name variable a string, like this:
    datablename = "new_table"
    See below 👇

    # add these imports
    from System.IO import MemoryStream
    from System.Text import Encoding, StringBuilder
    from Spotfire.Dxp.Data.Import import TextFileDataSource
    from Spotfire.Dxp.Data import AddRowsSettings

    # ... at the end of your script
    builder = StringBuilder()
    builder.Append("start_date\tend_date\n")   # the header row needs a trailing newline
    builder.Append(start_date)
    builder.Append("\t")
    builder.Append(end_date)

    stream = MemoryStream(Encoding.UTF8.GetBytes(builder.ToString()))
    dataSource = TextFileDataSource(stream)

    datablename = "new_table"

    # This constructor performs an automatic match; no columns are ignored.
    settings = AddRowsSettings(Document.Data.Tables[datablename], dataSource)
    Document.Data.Tables[datablename].AddRows(dataSource, settings)

    # dispose of the stream
    stream.Dispose()
     
  10. Olivier Keugue Tadaa's post in How to schedule to send by email a report in pdf, excel file a certain analysis? was marked as the answer   
    Hi Enkeled
     
    Don't change the property names (the hidden part on your screen), only the values. The error states that the program did not find the expected properties (since you renamed them).
    See below an example of what it should look like (change the values) 👇
    <preferences>
      <!-- SMTP Host for Email Notification -->
      <add name="Spotfire.Automation.SendMail.SMTPHost" value="smtp.gmail.com" />
      <!-- From Address for Email Notification -->
      <add name="Spotfire.Automation.SendMail.FromAddress" value="olivier.keuguetadaa@cloud.com" />
      <!-- Timeout (seconds) for the library import operation for the Import Library task -->
      <add name="Spotfire.Automation.LibraryImport.TimeoutInSeconds" value="300" />
      <!-- Timeout (seconds) for the library export operation for the Export Library task -->
      <add name="Spotfire.Automation.LibraryExport.TimeoutInSeconds" value="300" />
      <!-- A file path that can be used when exporting images, PDFs and data from the different tasks to the file system. -->
      <!-- This path can be inserted into the different paths using the Insert Field in Job Builder dialog. -->
      <!-- Note the path must be a fully qualified path like '\\server\share' and not a relative path. -->
      <add name="Spotfire.Automation.Common.ExportPath" value="" />
    </preferences>
     
     
  11. Olivier Keugue Tadaa's post in How to count the occurrences of a specific value in a column while in Data canvas was marked as the answer   
    Hi Nathalie,
    If we consider this table structure,

    you can create a calculated column "occurrences":
    occurrences :::: Count([value]) over ([doc num])
    and then filter to show only document numbers appearing once.
     
     
    I hope this helps
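    The Count(...) over ([doc num]) logic can also be mimicked outside Spotfire. Here is a plain-Python sketch with invented rows, counting occurrences per document number and then keeping the ones that appear exactly once:

    ```python
    from collections import Counter

    # Sample (doc num, value) rows -- invented for illustration.
    rows = [("D1", 10), ("D1", 12), ("D2", 7), ("D3", 4), ("D3", 5), ("D3", 6)]

    # Equivalent of: occurrences :::: Count([value]) over ([doc num])
    occurrences = Counter(doc_num for doc_num, _ in rows)

    # Keep only document numbers appearing exactly once.
    unique_docs = [doc for doc, n in occurrences.items() if n == 1]
    print(unique_docs)  # ['D2']
    ```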
  12. Olivier Keugue Tadaa's post in How to change the service url in connector through programming? was marked as the answer   
    Hi Weilong Lyu,
    The solution to your issue should come from the initial design and inception phase of your development.
    In your case, given the purpose of your development and the need to move objects from DEV to PROD, for example, it is not good practice to embed resources that are subject to change (such as the connections) into the report.
    The first good practice is to separate into at least two layers: the data layer and the dashboard layer.
    Then you can transport them to the new environment while keeping the dependencies (ensure you don't create new GUIDs).
    With this approach, you can change the data layer independently (normally this should be done only once).

    Once the reports are migrated, you will have to validate the dependencies...
     
    Let me know if you need more help
  13. Olivier Keugue Tadaa's post in 2 way error bars on scatter plot was marked as the answer   
    Hi Idto,
     
    I think the issue is related to the way you define your upper and lower errors.
    Here is a simple setting that I've done,

    and I can display the below without any further need for aggregation (I've reverted the "marker by" to its default, (Row Number) or baserowid()...).

    I believe you need to review your upper and lower error values and make sure they are relevant to your data on the x-axis and y-axis.
    I hope this helps
    Dataplots.dxp
  14. Olivier Keugue Tadaa's post in Using heatmap to show % of each Y category over the entire X category was marked as the answer   
    Sure :::: sum(Integer([Survived])) / count([PassengerId]) as [%of Survived]
    And you will obtain your Googled image 🙂
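    The same ratio can be written in plain Python, with an invented Survived column, just to make the arithmetic behind Sum(Integer([Survived])) / Count([PassengerId]) explicit:

    ```python
    # Plain-Python version of: Sum(Integer([Survived])) / Count([PassengerId])
    # The Survived flags below are invented sample data.
    survived = [True, False, True, True, False]

    # Integer([Survived]) turns the boolean into 0/1, so the sum counts survivors;
    # dividing by the row count gives the fraction survived.
    pct_survived = sum(int(s) for s in survived) / len(survived)
    print(pct_survived)  # 0.6
    ```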

  15. Olivier Keugue Tadaa's post in How can moving average include complete 13 weeks in serious was marked as the answer   
    Hi John, I'd like to propose a different approach based on calculated columns. Indeed, when you use a custom expression, it only applies to the data filtered on the visualization. Therefore, your first 12 periods will always show wrong values for moving averages when you filter your data.
    Hence,

    First, add the following columns 
    YearWeek :::: YearAndWeek([week])
    moving avg :::: Sum([value]) over (LastPeriods(13,[YearWeek]))
    Then, use the following Y-axis custom expression :::: Sum([moving avg])
    You will have this

     
    instead of that 

    Note that both visualizations share the same filters 

    don't forget to set the X-Axis as categorical like this...

    Let me know if this helps please (I've attached my data source)
    moving avg.xlsx
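    To make the LastPeriods(13, ...) window concrete, here is a plain-Python sketch (with invented weekly values) of a trailing 13-period sum computed over the full column, which is why the calculated-column approach is unaffected by filtering:

    ```python
    # Trailing 13-period window over the full (unfiltered) column.
    # The 20 weekly values below are made up for illustration.
    values = list(range(1, 21))  # 1, 2, ..., 20

    moving = []
    for i in range(len(values)):
        # Current period plus up to 12 previous ones, like LastPeriods(13, ...).
        window = values[max(0, i - 12): i + 1]
        moving.append(sum(window))

    print(moving[12])  # 91 (sum of 1..13, the first complete 13-period window)
    ```

    Before index 12 the window is shorter than 13 periods, which is exactly the incomplete-window effect described above.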
  16. Olivier Keugue Tadaa's post in Bring all wells at same start point to compare duration was marked as the answer   
    Hi Cristian
    I have created the attached dummy dataset and have obtained, like you, the below chart to start

    then add the following two calculated columns:
    [day zero per well] :::: First([Date]) over ([well])
    [days duration] :::: DateDiff('day', [day zero per well], [Date])
    Then display the chart by [days duration]

     
    I hope this helps solve your issue
     
    community_cases.xlsx
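    The day-zero alignment can be sketched in plain Python (wells and dates below are invented; Spotfire's First() returns the first row in the underlying order, which this sketch approximates with the earliest date per well):

    ```python
    from datetime import date

    # Sample (well, date) rows, invented for illustration.
    rows = [
        ("A", date(2024, 1, 3)),
        ("A", date(2024, 1, 1)),
        ("A", date(2024, 1, 5)),
        ("B", date(2024, 2, 10)),
        ("B", date(2024, 2, 12)),
    ]

    # Equivalent of: [day zero per well] :::: First([Date]) over ([well]),
    # approximated here as the earliest date for each well.
    day_zero = {}
    for well, d in rows:
        day_zero[well] = min(day_zero.get(well, d), d)

    # Equivalent of: [days duration] :::: DateDiff('day', [day zero per well], [Date])
    days_duration = [(well, (d - day_zero[well]).days) for well, d in rows]
    print(days_duration)  # [('A', 2), ('A', 0), ('A', 4), ('B', 0), ('B', 2)]
    ```

    Plotting against the duration column starts every well at day 0, so their curves become directly comparable.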
  17. Olivier Keugue Tadaa's post in The value of the column 'COLUMN_NAME' represents the name of the column from which the value should be retrieved in the that row. was marked as the answer   
    Hi Arenti,
    I hope this can help!


  18. Olivier Keugue Tadaa's post in Adding Multiple Cumulative Sum was marked as the answer   
    Hi John, sorry for my late reply.
    Please use this one (I have removed the THEN since it is usually used for optimisation purposes or when you have an in-db data source).

    SN(Sum([New_Backlog]) OVER (AllPrevious([Axis.Columns])),0) + 
     SN(Sum([New_Early]) OVER (AllPrevious([Axis.Columns])),0) - 
    Sum([New_Commit Qty]) OVER (AllPrevious([Axis.Columns])) - 
    Sum([FGOUT]) / Count([ExtractDate]) OVER (AllPrevious([Axis.Columns])) as [delta]

    It should work now
    Or if you should use THEN,
    Since after THEN you can only use [Value] (assuming it contains the previously calculated metrics),
    and all calculations are made against the same OVER (AllPrevious([Axis.Columns]))
    then you have this alternative expression with the same result, while keeping the benefits you had with the THEN
    SN(Sum([New_Backlog]),0) + 
    SN(Sum([New_Early]),0) - 
    Sum([New_Commit Qty]) - 
    Sum([FGOUT]) / Count([ExtractDate])

    THEN sum([Value])

    OVER (AllPrevious([Axis.Columns]))

    AS [delta]

    Please let us know if it works for you
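    The AllPrevious(...) running totals combined with SN() behave like cumulative sums over null-coalesced per-period values. Here is a plain-Python sketch with invented numbers for two of the columns (the `sn` helper is a hypothetical stand-in for Spotfire's SN function):

    ```python
    from itertools import accumulate

    def sn(value, substitute=0):
        """Mimic Spotfire's SN(): return substitute when the value is null (None)."""
        return substitute if value is None else value

    # Invented per-period sums; None marks a period with no rows for that column.
    new_backlog = [5, None, 3]
    new_early = [None, 2, 2]

    # Equivalent of SN(Sum(col) OVER (AllPrevious(...)), 0): a running total
    # where null periods contribute 0.
    cum_backlog = list(accumulate(sn(v) for v in new_backlog))
    cum_early = list(accumulate(sn(v) for v in new_early))

    delta = [b + e for b, e in zip(cum_backlog, cum_early)]
    print(delta)  # [5, 7, 12]
    ```

    Without the SN() coalescing, any period where one column has no rows would make that period's running total (and the delta) empty, which is the symptom the expression above fixes.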