
Atheer Al Attar

Spotfire Team
  • Posts

    23
  • Joined

  • Last visited

  • Days Won

    3

Atheer Al Attar last won the day on August 12


Atheer Al Attar's Achievements

  1. Addressing the Challenges in Oil & Gas Forecasting and Valuations

The oil and gas industry faces unique challenges when it comes to forecasting and valuations. Accurate production predictions are critical for making informed investment decisions, optimizing performance, and maximizing returns. Traditional forecasting methods are time consuming and often fall short: they struggle with data inconsistencies, complex reservoir dynamics, and the need for rapid, reliable predictions. This is where the Analytics for Energy (A4E) suite, developed by Blue River Analytics, comes into play; it is designed specifically to tackle these industry challenges head-on.

The Need for Advanced Analytical Tools in the Oil & Gas Industry

The complexity and scale of data in the oil and gas sector demand sophisticated tools that go beyond conventional approaches. The A4E suite addresses this need by offering comprehensive templates and workflows tailored for the Exploration and Production (E&P), Minerals, Banks, and Equity sectors. Unlike black-box solutions, A4E is pure Spotfire, which provides a transparent development methodology, giving users full control over the Spotfire tool and ensuring that insights are actionable and trustworthy.

How A4E Enhances Forecasting and Valuations

Efficient Well Selection and Production Review: A4E's Forecasting and Valuations Workflow leverages Python and R data functions, enabling users to efficiently assess the potential of new deals. The workflow includes advanced well selection and production review capabilities, ensuring that every decision is backed by robust data analysis.

Advanced Decline Curve Analysis (DCA): The DCA Autocast feature utilizes start-stop segment and outlier detection, processing up to five wells per second in both streams. This rapid fitting process is crucial for identifying production opportunities and making timely decisions.
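For orientation, decline curve analysis of this kind is built around Arps-type rate models. The snippet below is an illustrative sketch of the hyperbolic Arps form only, not A4E's implementation; the parameter names `qi`, `di`, and `b` are conventional but chosen here for the example:

```python
def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: production rate at time t (e.g. months)
    given initial rate qi, initial nominal decline di (1/time), and b-factor b."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Illustrative forecast: a well starting at 1,000 bbl/d with a 10%/month
# initial decline and b-factor 0.9
rates = [arps_hyperbolic(t, qi=1000.0, di=0.10, b=0.9) for t in range(12)]
```

At t = 0 the model returns qi, and the rate declines monotonically thereafter; fitting qi, di, and b to the observed production of each detected segment is the core of any autocast-style workflow.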
Comprehensive Normalization: By employing multivariate and principal component analysis on completion and production attributes, A4E normalizes production data to create more accurate type curves.

Type Curves: Users can generate various types of curves, including Average, Mean, P-hat, Percentile, and Sum, with and without normalization, providing a detailed forecast of potential well performance.

Probabilistic Forecasting with MCMC: Markov Chain Monte Carlo (MCMC) methods are used to evaluate well and type curve parameter distributions, adding a layer of probabilistic forecasting that accounts for uncertainties and enhances the reliability of predictions.

In-depth Economic Analysis: A4E integrates economic evaluations for wells and type curves, with case model controls for investments, expenses, production metrics, and commercial terms. This functionality supports comprehensive economic case comparisons and generates detailed 40-year cash flow projections.

Seamless Data Integration: A4E offers easy plug-and-play connections to all well header and production data sources, ensuring seamless integration and continuous data flow for real-time analysis.

Well Spacing and Gun Barrel Plot

Well Spacing Tool: Well spacing is a critical factor in optimizing production and EUR. The A4E Well Spacing tool automates spacing measurements, offering an easy-to-use interface in Spotfire. It calculates well density using user-defined ellipses, analyzes parent-child relationships and degradation over time, and provides traditional X, Y, Z spacing measurements. This comprehensive approach supports optimal well placement and optimizes resource extraction.

Visualization and Probability Analysis

Probability Plot: The Probability Plot template complements other A4E tools by providing detailed visualizations for Oil & Gas data.
It allows users to create probability and comparison plots, unlocking powerful insights from A4E outputs. The Probit Plot template offers a free-form interface for visualizing key metrics such as EUR, Arps fitting parameters, actual and modeled production, and all economic measures.

Conclusion

The Analytics for Energy (A4E) suite of Spotfire templates by Blue River Analytics delivers a set of analytic tools with unparalleled speed and flexibility, addressing the unique needs of the oil and gas industry. By providing advanced forecasting and valuation tools, A4E empowers professionals to make data-driven decisions, optimize well performance, and maximize returns. With its transparent, non-black-box approach, A4E ensures that insights are not only powerful but also actionable, delivering advanced analytics into the hands of subject matter experts.

Final Note

This article is part of a collaboration with Blue River Analytics to present their A4E solution in a Dr. Spotfire session.

Resources
- https://youtu.be/hXm1UXpEqQ4?si=OFvJdoM5U2sk0Wfb
  2. Introduction: Spotfire's Vision to Be Vertical Focused

Enabling vertical use cases in Energy, HTM, and Manufacturing is one of Spotfire's main vision pillars. Historian systems can be found abundantly in these verticals, and in others as well; among them, AVEVA PI Systems have a large footprint. Spotfire PI custom data sources can connect to PI Asset Framework and PI Data Archives, pull data, and update existing tables in Spotfire through a parameterized approach that lets the user change query parameters interactively. By the end of this article you will know how to access AF through Spotfire and what options are available to pull and process the data.

Why PI Asset Framework

The PI Asset Framework provides a flexible approach to modeling physical or logical objects. It enables the review of assets and their associated data in the way that best fits the use case, and allows you to identify the components of a process, establish relationships between them, and organize them according to your business needs. For instance, suppose you have a pump that produces various data streams such as power consumption, vibration, fluid volume, impeller speed, oil temperature, and pressure, each based on different parameters. By combining data streams, such as high energy consumption and low water flow, you can pinpoint a worn-out impeller as the cause of the problem. With the right data streams, PI Asset Framework can even provide its own analytics.

PI Asset Framework enables the end user to quickly pull a group of tags according to a predefined set of categories. Below is an example of how AF helps us break down a piece of equipment (a pump) into simple logical parts. Imagine querying PI Data Archive without leveraging AF: SMEs would instead need to know the exact names of the tags, which can be a very time- and effort-consuming activity.
Using PI Asset Framework helps create modularized and reusable solutions that can be used in analytical pipelines without worrying about changing tag names; for instance, you can apply a specific analysis pipeline to one element, as we will see in the "How AF can help us build OSI PI queries on the fly" section.

PI Asset Framework Industry Use Cases

Energy and Utilities: AF is used to manage and monitor power generation, transmission, and distribution assets, such as turbines, generators, transformers, and switchgear. It provides real-time visibility into asset performance and enables predictive maintenance by looking at several components at once, reducing downtime and improving overall asset efficiency.

Oil and Gas: AF is used to manage and monitor offshore and onshore assets such as drilling rigs, pipelines, refineries, and storage facilities. It provides visibility into equipment health, facilitates condition-based maintenance, and helps operators optimize asset performance, reducing costs and improving safety.

Manufacturing: AF is used to manage and monitor manufacturing assets, such as assembly lines, conveyors, robots, and packaging machines. It helps optimize production processes, reduce downtime, and improve product quality by providing real-time visibility into equipment performance and enabling predictive maintenance.

Chemicals: AF is used to manage and monitor chemical processing assets, such as reactors, distillation columns, and heat exchangers. It provides visibility into asset performance, helps optimize production processes, and facilitates condition-based maintenance, reducing downtime and improving safety.

Healthcare: AF is used to manage and monitor hospital assets, such as medical equipment, patient monitoring systems, and HVAC systems. It helps improve patient safety, optimize energy consumption, and reduce maintenance costs by providing real-time visibility into asset performance.
How AF Can Help Us Build OSI PI Queries on the Fly

One way AF can help build OSI PI queries on the fly is by providing a standardized hierarchy of asset elements that can be used to organize data from different data sources. This hierarchy can be used to create dynamic queries that can be modified in real time to accommodate changes in the underlying asset structure or data sources. In the example below, the AF was pulled into Spotfire using the PI Asset Framework Details View custom data source and presented as an interactive visualization mod, where users can pick and choose what to feed into the PI Archive data function in an element-wise style instead of as single PI tags.

Making Sense of PI Asset Framework Data and PI Explorer

Let's take a real-life example of a PI Asset Framework hierarchy. Assume we have a plant with several facility sites (Facility#1, Facility#2, and so on); each location has several pumps, each pump has a shaft, each shaft has two bearings, and we want to monitor the temperature of the front bearing. PI Asset Framework elements can describe physical entities or logical processes, and they can be part of a process or describe a whole logical process.

Using the Spotfire PI Connector to Access AF Data

In Spotfire, there are two ways to query AF data:

Using the GUI
1. Start by clicking (+) in Spotfire ---> Other ---> OSISoft PI Asset Framework.
2. Enter your server details and connect.
3. Move to the second tab and select the elements and attributes you want to involve in your query.
4. Finally, choose the data retrieval parameters. The output table will look like the sample below.

Using a Custom Data Function
Using the Spotfire OSI PI custom data source provides more flexibility to pull PI data interactively and in a more parameterized way. To access this data source, go to Tools ---> Create Asset Framework Details View and fill in the parameters; as seen below, most of them were parameterized.
What Does the AF Query Output Look Like in Spotfire?

As described above, the data output from Asset Framework has around 31 columns. Some of the important columns needed when creating queries:

PI Tag: The specific PI tag assigned to an element or property.
Path: Where a specific PI tag is stored.
Element Template: The library template that this specific PI tag was assigned to.
Element Parent Name: If the element is a child of a higher-level element, the parent element is listed here.
Attribute Name: What is being measured in this tag.

Putting It All Together: Query PI Interactively

We saw that PI Asset Framework lets us pull tags from the server without needing to know the tag names, using the meta model (the logical hierarchy) to pull the list of tags we are interested in. Using Spotfire interactivity, we can query PI Data Archive through hierarchical visualizations of PI Asset Framework that send queries back to PI Data Archive. Below is an example of using the Spotfire List mod to visualize the PI Asset Framework; selections there form a query to the PI Data Archive, and the returned data is flattened and converted into a wide format to be used downstream in different time series analyses with the Spotfire DSML package.

OSIPI.mp4

Still interested in learning about OSI PI applications in Spotfire? Check out these resources:
- Accessing PI Data in Spotfire (Article)
- Accessing PI Data in Spotfire (Dr. Spotfire Session)
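The flattening step mentioned above, turning long-format query output (one row per timestamp and attribute) into a wide table (one column per attribute), can be sketched in plain Python. The row tuples and attribute names below are invented for illustration and do not come from a real AF query:

```python
# Hypothetical long-format rows, shaped like an AF query result:
# (timestamp, attribute name, value)
rows = [
    ("2024-01-01T00:00", "Oil Temperature", 71.2),
    ("2024-01-01T00:00", "Pressure", 310.0),
    ("2024-01-01T01:00", "Oil Temperature", 72.5),
    ("2024-01-01T01:00", "Pressure", 305.4),
]

# Pivot to wide format: one entry per timestamp, one key per attribute
wide = {}
for ts, attr, value in rows:
    wide.setdefault(ts, {})[attr] = value
```

In practice the same pivot is usually done with a data-frame library inside the data function, but the idea is identical: group by timestamp and spread attribute names into columns before handing the table to downstream time series analysis.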
  3. https://support.tibco.com/s/article/Tags-are-lost-after-data-table-refresh-in-TIBCO-Spotfire
  4. Hi Aki, if you have the value of the intersection defined, there are several ways to achieve that: 1. Using the Lines and Curves option. 2. Creating a custom label column that has nulls and a value only at the intersection, and using that both to display a label and to show a marker.
  5. Hi @Dylan Daniels I would be happy to help, please reach out to dr@spotfire.com
  6. Welcome to our new community, Malanis. Could you please share the main discussion topics/ideas you would like to talk about?
  7. Hi Xavier, Take a look at our Wafer Maps recognition work which involved ML and 2D data. https://community.spotfire.com/s/article/Wafermap-Pattern-Recognition
  8. Background Story and Motivation

Imagine having two dashboards, both monitoring the same set of pumps. The first monitors electricity consumption and fluid flow over time; the other monitors pressure and performance in general.

Problem and Use Case

In our use case, a user starts with dashboard A to check one of the pumps, say P1, and then needs to check another variable for the same pump in dashboard B, so we need to show P1 only in dashboard B and nothing else.

Configuration Blocks in Spotfire

Configuration blocks in Spotfire are a way to pass or adjust settings in a not-yet-opened dashboard (.dxp file) at the time of opening. Imagine you have a dashboard with 100 curves and you want to open it for one specific curve; this is one of many applications. In general, configuration blocks can control or adjust the following aspects of a .dxp file:

SetPage: Specifies the initial page of an analysis
ApplyBookmark: Applies a bookmark
SetFilter: Specifies the initial setting of a filter of an arbitrary type
SetMarking: Initializes a marking
Setting a document property

Check out one of the many use cases where you utilize a document property along with a Python script to trigger a different Python script for each scenario.

Where to Use Configuration Blocks

There are five places where you can use configuration blocks:
1. In a C# extension
2. In web pages, using the JS API
3. In the Spotfire Analyst client, using the command line
4. In Automation Services, using the 'Open Analysis' task
5. In a web URL, when accessing the dashboard from the web (Web Player)

We will focus on the last option in this article: injecting variables into the URL when opening a dashboard through the web browser.

Web URL Structure and URL Encoding

In order for the configuration block feature to work properly, you need to URL-encode everything after the (=) in the structure above. There are many websites to do so; I used this website in my example.
It's recommended that you write all your configuration statements, encode them, and paste them after the (=) sign. Here is an example of how to encode a setPage configurator to make the dashboard go to the "HOME" page once it's opened.

Step 1: URL-encode your statement.
Step 2: Assemble your URL (do not forget the semicolon at the end of each configuration statement).

The assembled URL is:
http://yourSpotfireServer.com/pathToYourFileAndFileName&configurationBlock=setPage(PageTitle%3D%22HOME%22);

The URL below is an example of passing the page title, table name, and filtering:
https://spotfire-next.cloud.tibco.com/spotfire/wp/analysis?file=/Samples/Introduction%20to%20Spotfire&configurationBlock=SetPage(pageTitle=%22Filtering%22);SetFilter(tableName=%22World%20Bank%20Data%22,columnName=%22Region%22,values=%7B%22North%20America%22,%22Europe%20%26%20Central%20Asia%22%7D);
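The encoding step can also be scripted instead of done through a website; a minimal Python sketch using the standard library's `urllib.parse.quote` (the server and file path below are placeholders, not a real deployment):

```python
from urllib.parse import quote

# Configuration statement to encode; keep the parentheses literal and
# percent-encode everything else (=, ", spaces, ...)
statement = 'setPage(PageTitle="HOME")'
encoded = quote(statement, safe="()")
# encoded is now: setPage(PageTitle%3D%22HOME%22)

# Placeholder server/file path; note the trailing semicolon required
# after each configuration statement
url = ("http://yourSpotfireServer.com/pathToYourFileAndFileName"
       "&configurationBlock=" + encoded + ";")
```

Encoding programmatically avoids the common mistakes of forgetting the trailing semicolon or double-encoding characters that are already escaped.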
  9. Introduction

In oil and gas, and specifically in well completions, data generation is relatively fast and volumes are large. Being able to harvest information from these large datasets, and to automate business logic, saves us time and effort and also impacts our downstream decisions. One of these areas is well completions, where data from different stages and sources is generated at a high rate and in large volumes. Using tools like Spotfire coupled with programming languages like Python can make a huge difference and gives us more flexibility in automating many time- and effort-consuming tasks so they complete without any intervention; we can then use that time to make sure the logic we came up with agrees with the domain knowledge.

What Is Hydrofracturing

Hydrofracturing is one of many completion procedures for unconventional resources. It involves creating permanent micro-fractures that increase the formation permeability, since these tight reservoirs are known for being very non-permeable hydrocarbon-bearing formations. The micro-fractures are kept open by using micro-particles (sand), whose sizes are chosen according to the reservoir properties. The hydrofracturing process involves high-pressure pumping schedules over relatively short time intervals, which makes it very useful to perform analysis on the fly, as data comes in. Source: Wikipedia

What Is the Treatment Plot

The animation below shows a live view of the treatment plot in addition to the current operation's important parameters and activity. It also uses some condition-based rules to predict the current operation and which well is currently being worked on. The treatment plot can be coupled with further features to detect anomalies or unexpected behavior. The treatment curve, or the pressure-vs.-time plot, has several distinct sections, as shown in the plot below. Looking at it, we can see that formation breakdown starts at Pc, where fractures start forming and accepting the pumped slurry for the following period; the fall-off period is identified by a large pressure drop after a period of pumping. This section of the curve is used to estimate the instantaneous shut-in pressure. Source: Janusz Makówka, ResearchGate

What Is the ISIP and Why It's Important

The instantaneous shut-in pressure (ISIP) is the pressure observed immediately after the end of the pumping period. The importance of the ISIP is that it can indicate the fracture closure pressure; its value is close to the fracture propagation pressure and greater than the fracture pressure. Several approaches to estimate the ISIP from fall-off data are described in SPE-191465-18IHFT-MS. The ISIP can give us a very good idea of the fracture gradient in the vicinity and will help us reduce the uncertainty in future frac job designs. The ISIP is also a proxy for the reservoir potential.

Using the Spotfire Well Completion App to Estimate the ISIP

Spotfire, along with its native Python support, has made it easy to implement and automate any of these methods and harvest more value from the abundant data we have in a very short time. In this example we have implemented the early and late departure methods to estimate the ISIP with the help of our recently developed spotfire-dsml Python package.

Resources:
Demo: https://demo.spotfire.cloud.tibco.com/spotfire/wp/OpenAnalysis?file=5c4be3ed-2354-4bcb-9857-b00b9adbd2d5
More about our Spotfire-DSML Python package: https://community.spotfire.com/s/article/Python-toolkit-for-data-science-and-machine-learning-in-Spotfire
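At their core, departure-style ISIP estimates fit a trend to part of the fall-off curve and extrapolate it back to the moment pumping stops. The toy sketch below illustrates only that general idea (it is not the spotfire-dsml implementation, and the pressure numbers are synthetic): fit a straight line to early fall-off pressures and read the extrapolated value at shut-in time.

```python
def linear_fit(ts, ps):
    """Ordinary least-squares fit p = a*t + b over the given points."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_p = sum(ps) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(ts, ps))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    b = mean_p - a * mean_t
    return a, b

# Synthetic early fall-off data: seconds after shut-in vs. pressure (psi)
t = [1, 2, 3, 4, 5]
p = [4900.0, 4800.0, 4700.0, 4600.0, 4500.0]

slope, intercept = linear_fit(t, p)
isip_estimate = intercept  # extrapolated pressure at t = 0
```

Real fall-off data is noisy and nonlinear, so published methods differ mainly in how they choose the fitting window and transform the time axis before extrapolating; the SPE paper cited above surveys several such choices.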
  10. What Is Dynamic Time Warping

In time series analysis, dynamic time warping (DTW) is an algorithm for measuring similarity between two temporal sequences, which may vary in speed. For instance, similarities in walking could be detected using DTW, even if one person was walking faster than the other, or if there were accelerations and decelerations during the course of an observation. DTW has been applied to temporal sequences of video, audio, and graphics data; indeed, any data that can be turned into a linear sequence can be analyzed with DTW. A well-known application has been automatic speech recognition, to cope with different speaking speeds. Other applications include speaker recognition and online signature recognition. It can also be used in partial shape matching applications.

Every index from the first sequence must be matched with one or more indices from the other sequence, and vice versa. DTW is an alignment-based metric: it relies on a temporal alignment of the series in order to assess their similarity. Its use is not limited to temporal data; it works with any sequence of data, and in fact we can even ignore the timestamp index in the datasets we are warping, since it is not needed in the calculation.

A Euclidean-based distance metric or similarity measure compares the pairs in a one-to-one fashion, as we can see in the left picture, while dynamic time warping, as an alignment-based metric, scans the two time series for temporal matches, so the matching procedure is not limited to the corresponding element only. We will see that this can also be an issue, since the matching procedure can go further than desired. Note how DTW matches distinctive patterns of the time series, which is likely to result in a sounder similarity assessment than using Euclidean distance, which matches timestamps regardless of the feature values.
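The alignment idea can be made concrete with the classic dynamic-programming formulation of DTW; here is a compact, dependency-free Python sketch using an absolute-difference local cost:

```python
def dtw_distance(a, b):
    """Classic DTW via an O(len(a) * len(b)) dynamic program.
    Each element of one series may match one or more elements of the other."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal accumulated cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in a only
                                 D[i][j - 1],      # step in b only
                                 D[i - 1][j - 1])  # step in both
    return D[n][m]
```

Unlike a point-by-point Euclidean comparison, a repeated or stretched value costs nothing extra: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.0, even though the two series differ element-wise.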
It is also worth mentioning that DTW can be applied to any sequence-based data.

(Animation: two simulated skeletons walking at different speeds; DTW is used to study the similarity between the two movements.) We can see in this animation that the algorithm tries to re-align both time series, or let's say both sequences, by minimizing the Euclidean distance between the two; we can also use any other distance measure, whatever serves our use case. The process of re-aligning these sequences is the core of the dynamic time warping algorithm. The animation shows how DTW tries to align two time series.

In addition to the details below, please also find a TAF23 video presentation on this topic by Atheer AlAttar from the Spotfire Data Science team, starting at 11 minutes 05 seconds into this video.

Algorithm Assumptions and Mathematical Representation

If we have two series X and X', then the objective is to find an alignment path 𝜋 that minimizes the total distance between the matched pairs of X and X'.

DTW Temporal Spans

Dynamic time warping is invariant to time shifts, whatever their temporal span. In order to allow invariance to local deformations only, one can impose additional constraints on the set of admissible paths. In practice, global constraints on admissible DTW paths restrict the set of possible matches for each element in a time series; the matching width is sometimes called the warping window. As stated above, setting such constraints restricts the shift-invariance to local shifts only: DTW with a Sakoe-Chiba band constraint of radius r is invariant to time shifts of magnitude up to r, but is no longer invariant to longer time shifts; the time-shift invariance holds only within the window (radius) range.
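Written out, the alignment objective described above (with X and X' the two series, 𝜋 the alignment path, and d the chosen ground metric, Euclidean by default) is commonly stated as:

```latex
\mathrm{DTW}(X, X') \;=\;
\min_{\pi \in \mathcal{A}(X, X')}
\sqrt{\sum_{(i,j) \in \pi} d\!\left(x_i, x'_j\right)^{2}}
```

where \(\mathcal{A}(X, X')\) denotes the set of admissible paths: sequences of index pairs that start at (1, 1), end at (|X|, |X'|), and advance monotonically by unit steps in one or both indices.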
Applications in Oil and Gas

Production Profile Similarity

An oilfield generally consists of hundreds of oil wells or more, which have been produced for years or decades. If these wells were analyzed one by one, the workload would be tremendous. The dynamic time warping algorithm provides a method to quickly classify oil wells. Oil flow rates differ greatly between wells, but several representative curves can be abstracted as references. By matching actual well output curves with the references, the category of an oil well can be decided based on the similarity between its actual output curve and the reference type curves.

Automatic Depth Matching

(Figure, from left to right: first the query, second the reference, third the warping obtained with first-order parametric time warping, and finally the warping obtained with the piecewise linear parametrization. The reference is plotted in red; the total cross-correlation between the curves is shown at the top.) Dynamic time warping can also be used to identify formation tops and depth-match multiple wells.
ROP Optimization

Motivation
- Knowing the optimum ROP plays a vital role in the drilling process (cost saving, risk mitigation).
- We already have wells completed near the one we are currently drilling.
- These drilled wells are sources of information we can use to get an idea of the most optimum ROP.
- DTW helps us answer the question of which well is the most similar to the one we are drilling.

Process
The suggested logic is as follows:
1. Select the target well; nearby wells are highlighted automatically.
2. Align the target and nearby wells by depth using DTW, on the feature level.
3. Identify the well most similar to the target well, and the deltas between them at current and future depths.

Inputs: location of nearby wells, well logs.
Outputs: location of target and similar wells; prediction of time to reservoir (via ROP analysis).

Spotfire Implementation and Data Function

Implementing DTW using the fastdtw package is straightforward; below is an example of a starter data function that can be used in Spotfire.

def dtw_calc(s1, s2):
    """
    Calculates similarity using dynamic time warping.
    Input:
        s1 (Series): sequential values.
        s2 (Series): sequential values.
    Returns:
        score (float): similarity score between the two series
            (lower means more similar).
        path (list): the index associations between pairs
            from the two series.
    """
    from scipy.spatial.distance import euclidean
    import fastdtw
    # drop NaN values
    s1 = s1.dropna()
    s2 = s2.dropna()
    # calculate the DTW score and the alignment path
    score, path = fastdtw.fastdtw(s1, s2, dist=euclidean)
    return score, path
  11. Welcome to the TAF23 Hackathon!

Welcome, brilliant hackers, to The Analytics Forum 2023 Hackathon, where we'll be delving deep into the issue of food deserts! We're thrilled to have you all join us in this collaborative effort to explore innovative solutions and insights. By leveraging your diverse skills and expertise in data analysis, we aim to better understand the complex factors contributing to food deserts and help bridge the gaps in food accessibility. Together, let's harness the power of data to make a tangible impact on millions of lives and create a more equitable and sustainable future for all. Let the hacking begin! #TAFHACK

You can register for The Analytics Forum here.

The Problem: Mapping Food Deserts in the Houston Area

The history of using geospatial mapping goes back to 1854, when a severe cholera outbreak in Broad Street near Soho in London killed 616 people. The physician John Snow is best known for his hypothesis that water contamination was the source of the epidemic. Looking at the public pumps installed on water wells in the area, Snow mapped the deaths spatially around these pumps as dots. The initial results showed that some of the pumps had more deaths clustered around them than others, which supported his theory about water contamination. He also used statistics to show connections between cholera outbreaks and water sources drawn from sewage-polluted areas. Snow's approach of representing the data geospatially and correlating it with public health was a turning point in the history of epidemiology, and it influenced and urged the construction of improved sanitation facilities.

("On the Mode of Communication of Cholera" by John Snow, originally published in 1854 by C.F. Cheffins, Lith, Southampton Buildings, London, England. The uploaded image is a digitally enhanced version found on the UCLA Department of Epidemiology website, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2278605.)
The Hackathon

Our subject is similar (in concept) to what John Snow did back in 1854, except now we have more advanced tools to collect, cleanse, analyze, and present the data. In our hackathon, you will study, analyze, and map the food deserts in the city of Houston using provided census data.

A food desert is an area, typically in an urban or rural community, where there is limited access to affordable and nutritious food and groceries. This means that the people living in these areas have difficulty finding and purchasing fresh fruits, vegetables, and other healthy food options. Food deserts often occur in areas where there are few supermarkets or grocery stores. This lack of access to healthy food can lead to poor nutrition and diet-related health problems, such as obesity, diabetes, and heart disease. Food deserts can be caused by a variety of factors, including socioeconomic status, distance to grocery stores, and lack of accessible transportation, whether personal or public. Efforts to address food deserts may include the establishment of community gardens, farmers' markets, and mobile markets, as well as the expansion of public transportation and the opening of new supermarkets and grocery stores in underserved areas.

There is no specific ask in the hackathon; instead, there is a goal: understanding and making sense of the food deserts, what could be behind their existence, and how you as a hacker can help policymakers and public health officials eliminate or mitigate the effect of food deserts, using the data you have. Making use of Spotfire Mods and data functions is encouraged.

(A picture showing the new location of the replica pump, the handle of which John Snow had removed.)

Impact

Understanding the mapping of food deserts is a great way to identify areas with limited access to healthy and nutritious food and fresh groceries.
It also helps policymakers, officials in decision-making positions, and community organizations prioritize areas that are most in need of interventions to improve access to healthy food options. Understanding the causes and extent of food deserts can also help us develop effective solutions to address the problem. For example, if the lack of access to healthy food is due to a lack of transportation options, initiatives to improve public transportation, add more bus stops in food deserts, or provide mobile markets could be effective solutions. Alternatively, if the problem is due to a lack of supermarkets in the area, initiatives to incentivize or support the opening of new supermarkets in underserved areas could be effective.

In addition, mapping food deserts can help raise awareness about the issue, mobilize resources and support to address it, and show where to focus community services. By identifying the specific areas where the problem is most acute, we can focus efforts and resources to make a real difference in improving access to healthy food and reducing diet-related health problems. (There are approximately half a million Houstonians living in food deserts.)

The Dataset

The datasets used in the hackathon come from the 2020 and 2021 censuses. They contain a variety of information ranging from food stamp data to household-related demographics and other relevant data.
Datasets:
- Neighborhood Characteristics: Different information at the tract level (Source: opportunityatlas.org)
- Texas Census Tracts 2020: Cleaned-up variables at the tract level
- Texas Census Tracts Populations 2020: Cleaned-up variables at the tract level
- TX Food Stamps 2020: Data about food stamp recipients at the tract level for 2020
- TX Non-Vehicle 2020: Data about households that don't have access to a car, up to 2020
- TX Poverty 2020: Poverty line data at the tract level
- City of Houston Bus Stops: Dataset that includes the lat/long of bus stops
- Poverty Data: Anonymous data following 20 million Americans from childhood to their mid-30s (Source: The Opportunity Atlas)

Judging Criteria:
- Innovation and Creativity (25%): The dashboard design should demonstrate innovation in the use of data and data visualization techniques, pushing the boundaries of what is possible with data analysis, and should have a clear impact on the end user, providing insights and driving decision-making that helps mitigate the problem of food deserts and any related problem.
- Visual Storytelling, Quality of Execution, and Presentation (25%): The visualizations should be easy to understand and provide insights that are not immediately obvious from the data itself. The judges will see a short video (<5 minutes) or a slide deck by you along with your Spotfire creation. The quality of the presentation in conveying your ideas will be judged.
- Interactivity, Functionality, and User Experience (25%): Ease of use, navigation, and performing different interactive functions (i.e. filtering, drill-down).
- Impact and Real-World Viability (25%): Eventually, the goal is to make a difference in the real world. Ideas that target high-impact problems and offer feasible solutions will win high points in this category.
  12. I see you are accessing the PI connector through the GUI; is there a reason you can't use the custom data function? Have you checked the tutorial here?
  13. If you want your PI function's start time to come from a document property, you won't need a Python script; you just need to map the start time to the document property. If that's not what you're trying to do, please elaborate.