So, let's get started. The questionnaire responses were collected on the platform, and then, using our editor, we built a dashboard that displays that information as charts.
Next to the dashboard is a form that allows you to customize filters. For example, you can select specific time windows and answers to questions from the questionnaire, allowing you to create analytical reports.
Here you can see how many people answered that they are total rookies, how many learned about Directual from search engines or blogs, and what they've built in the last few days. There is also a button to reset the filter settings and show a default analysis for the last 30 days.
Great. Now let's look at how it all works. We get the questionnaire responses in JSON form from the front end. Next, a scenario parses this data into three fields (two strings and one boolean) and stores the responses in separate fields. The JSON also carries information about whether the survey was aborted.
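Roughly speaking, the incoming payload might look like this. All field names below except survey_is_aborted are assumptions for illustration; the real structure is whatever your front end sends:

```typescript
// A rough sketch of the payload the front end might send (field names are
// assumptions; the real structure is defined by your own front end):
const raw = `{
  "step1": "Total rookie",
  "step2": "Search engine",
  "step3": "A CRM prototype",
  "survey_is_aborted": false
}`;

const answer = JSON.parse(raw);
// The scenario then splits this into separate fields on the user object,
// e.g. answer.step1, answer.step2, answer.step3, answer.survey_is_aborted.
```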
We have three data structures (think of them as a bit like reference books). They hold the objects, which are the answers to the questions. There’s one structure for each step of the questionnaire. Each structure has an ID and a title - that’s its visible name. You can prepare the data using either a scenario or reports. Take a look at three reports that work with the cloud users' data structure.
Five parameters are used for filtering, with the dates set to default values. We've added fields from the cloud users structure: the three answers to the questions, the registration date, and the ID.
Next, we configure the filters. The registration date must be greater than date_from (that's the parameter name) and less than date_to. Then comes the answer to the question in step one.
Because it's an array, we use the IN operator, which lets us filter by multiple values at once. The same operator is used for the answers to the questions in each of the three steps. Then we aggregate this report by the first answer and count by ID, because each ID is unique.
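To make the report logic concrete, here is an illustrative sketch of what it effectively computes, written out in TypeScript. The platform does all of this through report settings, not code; the names below simply mirror the parameters described above:

```typescript
// Illustrative only: what the report effectively computes.
interface CloudUser {
  id: string;
  registered: Date;
  step1: string; // answer to the step-1 question
}

function step1Report(
  users: CloudUser[],
  dateFrom: Date,
  dateTo: Date,
  step1Filter: string[] // the array used with the IN operator
): Map<string, number> {
  const filtered = users.filter(
    (u) =>
      u.registered > dateFrom &&
      u.registered < dateTo &&
      step1Filter.includes(u.step1) // IN: the answer must be one of the selected values
  );
  // Aggregate by the first answer; since each ID is unique, counting rows
  // per group equals counting users.
  const counts = new Map<string, number>();
  for (const u of filtered) {
    counts.set(u.step1, (counts.get(u.step1) ?? 0) + 1);
  }
  return counts;
}
```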
The report produces a data structure that looks like this: the number of users counted by their answers. The second and third reports have the same kind of structure. Now we need to display it on a chart. To do this, go to the Plugins tab and install the chart plugin from the marketplace.
Go to the Web Pages tab, choose Dashboard, and here, on the second tab, we have three charts. They all have the same settings except for the endpoint. As you can see, they're pie charts with a height of 400 pixels, a legend, and a tooltip. The angle between segments is 3 degrees, the inner radius is 100 pixels, and the outer radius is 120 pixels, so each chart looks like a ring. Here we select the option to show the first objects (in our case, five) from the respective endpoint. On the Filters tab, our pie charts have none.
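For reference, here are those same settings gathered into one illustrative object. The actual plugin is configured through the UI, so treat the property names as a summary, not the plugin's real schema:

```typescript
// The chart settings described above, collected into one illustrative object.
// Property names are assumptions; the real configuration lives in the UI.
const pieChartConfig = {
  type: "pie",
  height: 400,       // pixels
  showLegend: true,
  showTooltip: true,
  padAngle: 3,       // degrees between segments
  innerRadius: 100,  // pixels — together with outerRadius this gives a ring shape
  outerRadius: 120,  // pixels
  maxObjects: 5,     // show the first 5 objects from the endpoint
};
```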
Note that we have an HTML component in the header. It displays information from an object in the Refresh Stat Filters structure with the ID of 1 and shows us the current filter settings. On top of the HTML component, we have an endpoint that returns exactly one object, the one with ID 1, and displays its data as a string.
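Outside the platform, the same idea boils down to fetching that single object and rendering one of its string fields. The URL and field name below are made up for illustration and are not Directual's actual API:

```typescript
// Hypothetical sketch: fetch the single filter-settings object (ID 1) and
// show one of its string fields in the page header.
async function renderFilterSummary(): Promise<void> {
  const res = await fetch("https://example.com/api/refresh-stat-filters/1");
  const obj: { filter_summary: string } = await res.json();
  document.getElementById("filter-summary")!.textContent = obj.filter_summary;
}
```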
We have a form that creates objects in the same structure. It has no visible fields at all: one field is filled in automatically, and everything is reset (I'll explain this later using a scenario). The second form simply stores the filter we want to apply: from, to, and three array links with drop-down lists. Press SUBMIT and the filters update.
Let's look at the scenario that handles it all. Here's the object we just created with data and filters. Note that one filter is configured, while the others are set by default. Let's look at the Welcome Survey Stats scenario.
The first step simply counts all users who registered during the specified time window, regardless of the questionnaire. That's the first number. Next, a link to the object with the ID of 1 is explicitly placed in a context variable. Then we check whether we want to reset all statistics. If not, we write all the questionnaire answers into context variables: for survey step 1 the context variable is all_step1, and for steps 2 and 3 they are all_step2 and all_step3, respectively.
We simply search the data structures (the reference books mentioned earlier). Next, similar to the first step, we count the number of users who registered and completed the survey during our time window (that is, users whose survey_is_aborted field is not equal to true).
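Conceptually, those two counters amount to something like this. It's a sketch with assumed field names, not the platform's own steps:

```typescript
// Sketch of the two counters: registrations in the window, and registrations
// where the survey was completed (survey_is_aborted is not true).
interface User {
  registered: Date;
  survey_is_aborted?: boolean;
}

function countUsers(users: User[], dateFrom: Date, dateTo: Date) {
  const inWindow = users.filter(
    (u) => u.registered > dateFrom && u.registered < dateTo
  );
  const completed = inWindow.filter((u) => u.survey_is_aborted !== true);
  return { registered: inWindow.length, completedSurvey: completed.length };
}
```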
Next, we write all the fields, including the formatted date, through a context variable with a reference to object number one to display them nicely on the front end.
For that, we use the Moment.js library. Then we send the data about the number of users to that first object, which is displayed on the front end.
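If you haven't used Moment.js before, formatting a date for display is a one-liner. The format string here is just an assumption; pick whatever reads best in the header:

```typescript
import moment from "moment";

// Format a raw registration date for display on the front end.
const pretty = moment("2024-05-01").format("DD MMM YYYY"); // "01 May 2024"
```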
Next, if filters are configured for the steps, we save them; if not, we write all the options we found earlier (and stored in the context variables) into this field. We use a JS script expression for this: here we have all the options, and here only the first one.
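I can't reproduce the exact expression in text, but as a hedged guess its logic is roughly this: keep whatever the user chose for a step, otherwise fall back to every option found in the reference structure:

```typescript
// Hedged guess at the expression's logic: use the user's filter if one was
// set, otherwise fall back to all options found earlier (all_step1, etc.).
function resolveStepFilter(chosen: string[], allOptions: string[]): string[] {
  return chosen.length > 0 ? chosen : allOptions;
}
```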
Next, we call all three reports in sequence and send the filter configuration with them. Then we call the Socket.IO plugin with the dashboard ID. This page will be updated for all users.
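On the browser side, a live update like this usually amounts to listening for a Socket.IO event and refreshing the data. The URL and event name below are assumptions, not the plugin's actual contract:

```typescript
import { io } from "socket.io-client";

// Hypothetical client-side listener: when the dashboard-update event arrives,
// reload the page (or re-request the chart endpoints).
const socket = io("https://example.com");
socket.on("dashboard-updated", () => {
  window.location.reload();
});
```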
Finally, let's look at the survey stats update scenario. There is another check to see whether the ID equals one. If it does, we reset all the settings, show users for the last 30 days, and run the same object through the scenario to rebuild the reports. We also set a schedule to run this scenario every day and update the statistics.
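The 30-day default itself is easy to express with the same Moment.js library mentioned earlier. This is a sketch of the idea, not the scenario's exact step:

```typescript
import moment from "moment";

// Default window used when filters are reset: the last 30 days.
const dateTo = moment();                        // now
const dateFrom = moment().subtract(30, "days"); // 30 days ago
```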
Here's the form, and now you can see that the Reset Filters button is back. One last thing: let's look at the result tab settings. We have automatic resubmission enabled, which means the button will come back after three seconds.
That's it. I hope you’ve found it useful. Don’t miss our next video!