Gaia Paolini

Spotfire Team
  • Posts: 761
  • Joined
  • Last visited
  • Days Won: 9

Gaia Paolini last won the day on August 27

Gaia Paolini had the most liked content!

Gaia Paolini's Achievements

  1. Actually, they are, but my test ID column was calculated as RowId() so it seems to update automatically. Can you show a sample of your data?
  2. Could you try adding a calculated column through IronPython to rank the rows? The expression would be DenseRank(RowId()) (see the IronPython sketch after this list).
  3. I do not understand what you mean by K-Means getting corrupted. Maybe, if your data changes all the time, the clusters do change, or K-Means is not the right method to capture the structure in your data? I asked Spotfire Copilot to come up with a K-Means Python data function (with column normalization, a search for the optimal number of clusters, and results in a separate table). It did a good job, with some debugging needed. I saved this example in Spotfire 14.0, which I hope you can open; the script is below. (There may be a warning about cyclic dependencies; you can ignore it.) First you need to have (or create) a column containing the id of each row. I created a calculated column called "idColumn" using the RowId() expression function. The data function accepts the input data table (you can input all columns, as it will use the numeric columns only), the name of the id column, and a min/max number of clusters. If you want a pre-determined number of clusters K, just set min = max = your desired K. The output is a separate table, which can be column-matched to the original one via this id column. I joined the original columns back to this table to visualize the results (see screenshot). If you want to calculate different clusters, you can change the output table of the data function. (A sketch for testing this script outside Spotfire is given after this list.)

     from sklearn.cluster import KMeans
     from sklearn.preprocessing import StandardScaler
     from sklearn.metrics import silhouette_score
     import pandas as pd
     import numpy as np

     # Isolate numeric columns and normalize
     numeric_cols = inputData.select_dtypes(include=np.number)
     scaler = StandardScaler()
     normalized_data = scaler.fit_transform(numeric_cols)

     # Determine the optimal number of clusters
     range_n_clusters = list(range(minClusters, maxClusters + 1))
     silhouette_avg = []
     for num_clusters in range_n_clusters:
         kmeans = KMeans(n_clusters=num_clusters, random_state=0).fit(normalized_data)
         cluster_labels = kmeans.labels_
         silhouette_avg.append(silhouette_score(normalized_data, cluster_labels))

     # Select the optimal number of clusters
     optimal_clusters = range_n_clusters[silhouette_avg.index(max(silhouette_avg))]

     # Apply KMeans with the optimal number of clusters
     kmeans = KMeans(n_clusters=optimal_clusters, random_state=0).fit(normalized_data)
     inputData['Cluster'] = kmeans.labels_

     # Prepare the silhouette score curve data
     curve_data = pd.DataFrame({'Clusters': range_n_clusters, 'SilhouetteScore': silhouette_avg})

     # Outputs
     outputData = inputData[[idColumnName, 'Cluster']].copy()
     optimalCurve = curve_data

     k_means.dxp
  4. What version of Spotfire are you using, and do you have access to and knowledge of Python?
  5. There should be a hotfix available for this. Please check here:
  6. The marking itself is not lost if you toggle the interactive layer. However, I don't understand how you define your 5-mile zone: is it an additional geometry?
  7. How do you want this to work? (a) Change the zoom on line chart 1, (b) press the button, (c) the zoom on line chart 2 is changed? If that is the idea, there is an IronPython sketch for copying a zoom range between line charts after this list.
  8. Would it not be so much easier to have a slider filter in a Text Area for the [Time] column, so that when you slide the filter both line charts react at the same time?
  9. The old TERR function does work for me. The rgdal package is no longer maintained, but it is still available. What is your error?
  10. There are a lot of resources, but they are not always easy to search, so it helps to google something like "spotfire script to do xyz". For instance, in the case of changing shapes for all scatterplots for all pages, these links helped: https://spotfired.blogspot.com/2017/11/change-scatterplot-shape.html Also, a comprehensive link could be: https://community.spotfire.com/articles/spotfire/ironpython-scripting-in-spotfire/ And the API: https://docs.tibco.com/pub/doc_remote/sfire_dev/area/doc/api/TIB_sfire-analyst_api/ Based on these links, I am attaching a dxp in which I wrote a script to change the shape of a selected value of the "Species" column in the "iris" table. Let me know if you can open it. The script could be made less hard-coded, but I hope it is a start. It is driven by what you select for shape (I used a separate table into which I put all the available shapes, taken from one of the links) and category (setosa, versicolor or virginica).

      from Spotfire.Dxp.Application.Visuals import *

      column_name = 'Species'
      table_name = 'iris'
      table = Document.Data.Tables[table_name]
      shape = Document.Properties["shape"]
      category = Document.Properties["category"]

      # Find every scatter plot on any page that plots this table and uses the
      # Species column on its shape axis, then remap the chosen category to the
      # chosen marker type.
      for p in Document.Pages:
          for v in p.Visuals:
              if v.TypeId.Name == "Spotfire.ScatterPlot":
                  vz = v.As[Visualization]()
                  if vz.Data.DataTableReference == table:
                      if vz.ShapeAxis.Expression == '<[' + column_name + ']>':
                          markerType = getattr(MarkerType, shape)
                          vz.ShapeAxis.ShapeMap[category] = MarkerShape(markerType)

      changeShape.dxp  shapes.txt
  11. There is a more recent version using Python. See this article, specifically the section "Export to file". The data function can be downloaded from the Spotfire-DSML bundle here:
  12. A simple workaround could be to turn your time span into an integer, by wrapping it in the Integer(..) function.
  13. In general, attributes like shape would need IronPython (to go through all visualizations containing a given column and change the shape); probably the same for a linear fit. For colours, you can assign preferred coloring schemes (continuous or categorical) by going to the column properties menu, selecting a column, and opening its properties tab. Without specific use cases it is difficult to say exactly.
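For item 2, here is a minimal IronPython sketch of adding such a ranking column through a script. The table name "MyTable" and the column name "RowRank" are hypothetical placeholders; adjust them to your own analysis.

    # Minimal sketch: add a calculated DenseRank column to a data table.
    # "MyTable" and "RowRank" are hypothetical names; change them to match your analysis.
    # Document is provided by the Spotfire IronPython script context.

    table = Document.Data.Tables["MyTable"]
    column_name = "RowRank"

    # Drop an earlier copy of the column, if any, so the script can be re-run
    if table.Columns.Contains(column_name):
        table.Columns.Remove(column_name)

    # Add the calculated column that ranks the rows
    table.Columns.AddCalculatedColumn(column_name, "DenseRank(RowId())")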
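For item 3, if you need to debug the data function script outside Spotfire, one way is to define the inputs it expects (inputData, idColumnName, minClusters, maxClusters) by hand in a plain Python session and run the script body underneath. The sketch below uses invented two-cluster toy data purely for illustration.

    # Sketch of a standalone harness for the K-Means data function script.
    # The toy data below is invented for illustration; in Spotfire these inputs
    # are provided by the data function parameters instead.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    inputData = pd.DataFrame({
        'idColumn': range(60),
        'x': np.concatenate([rng.normal(0, 1, 30), rng.normal(5, 1, 30)]),
        'y': np.concatenate([rng.normal(0, 1, 30), rng.normal(5, 1, 30)]),
    })
    idColumnName = 'idColumn'
    minClusters = 2
    maxClusters = 6

    # Paste the script from item 3 below this line; afterwards, outputData should
    # hold one cluster label per id and optimalCurve one silhouette score per
    # candidate number of clusters, e.g.:
    # print(outputData.head())
    # print(optimalCurve)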
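For item 7, assuming the intended flow is (a)-(b)-(c) with a button driving the second chart, here is a minimal IronPython sketch for the button script that copies the X-axis zoom range from one line chart to another on the current page. The visual titles "Line Chart 1" and "Line Chart 2" are hypothetical placeholders, and the X axes are assumed to be continuous (e.g. the [Time] column).

    # Minimal sketch: copy the X-axis zoom from one line chart to another.
    # "Line Chart 1" and "Line Chart 2" are hypothetical visual titles.
    from Spotfire.Dxp.Application.Visuals import LineChart

    source = None
    target = None
    for v in Document.ActivePageReference.Visuals:
        if v.Title == "Line Chart 1":
            source = v.As[LineChart]()
        elif v.Title == "Line Chart 2":
            target = v.As[LineChart]()

    # Apply the first chart's zoom range to the second chart
    if source is not None and target is not None:
        target.XAxis.ZoomRange = source.XAxis.ZoomRange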