
THE STORY OF PLAUDERE

Joe Esteves

about 10 months ago

532 views


Professional


#livestreaming #engineering #plaudere



Context and background:

To understand how the idea of Plaudere started and how the website was developed, I look back and remember all the previous experiences I was part of: the curiosity that built up over time, the attempts that did not work completely well, and the tools that at one moment seemed to fit, but later had to be reconsidered until the idea finally became the base of something slightly different. In my case, the story of Plaudere is closely connected to the way I approached working with computers over the years, to my professional journey, to the changes I experienced in different companies and places, and also to a part of my life that was always present: my interest in music and in creating music.

It was in the late 2010s and early 2020s that I started to understand web development at both the backend and frontend levels. But reaching that point was not sudden. Before I could build a website like Plaudere, I had to develop a very specific foundation: first, I needed to understand how software works; then, the basics of databases; after that, how to use different types of languages, starting with pseudocode, continuing with script-based languages such as VBA in the early stages, then programming languages like JavaScript, query languages like SQL, and markup languages such as HTML and CSS; and finally, how an application idea is analysed, designed, and brought into reality, from the initial concept to the prototype, from the first attempt to continuous improvement, from an initial version to a more mature one, and so on.

That path is rarely linear. Many times it means changing approach, removing features, adding new ones, rewriting parts of the project, changing tools, adjusting the architecture and starting again almost from scratch, but with better judgement and a little more experience. A formal education in the subject helps a lot, of course. Even though I had knowledge of software and design, in many stages of my life I had to take the longer road. I learned through tutorials, courses, books, trial and error, corrections, and patience. Sometimes I relied too much on one tool or one approach; other times I had to rethink everything, realise that something was not working, and start again.

Looking at it now, I think that process was an essential part of what Plaudere is today. And, interestingly, after building the website, it also allows me to talk about it and about the process that made it possible. My knowledge of development did not really begin with Plaudere. Before that, there were many attempts to build applications based on databases and interfaces. I believe there has always been the same obsession in me: to contribute something useful. To create, whether at work or in my personal life, an application that other people see as something valuable, that makes their life or their work a bit easier, and helps them do things a bit better, faster, or more clearly.

Building my first applications:

That idea of helping others through an application first became real during a professional placement in the mid-2000s at a supermarket chain company. I remember that stage clearly, as it was one of the first moments when I began to understand the real value of applied software. At that time, I was in my fourth year of industrial engineering and looking for an internship. Eventually, I joined a company that needed a marketing intern in Lima.

My role was to support a manager who analysed transactional data from the company’s supermarkets to produce reports on frequent customers and evaluate their response to different promotional campaigns. At the beginning, the experience was challenging. My Microsoft Excel skills were limited, and it was difficult to transform transactional data into structured, visual information in the form of aggregated KPIs. In addition, the work was highly repetitive, as reports were based on fixed structures that were regularly updated with new data. A significant part of my time was spent on manual preparation and updating in Excel.

This situation led me to focus on two specific aspects of the problem. The first was understanding the logic behind the KPIs: which ones were truly relevant, how they were calculated, how they were aggregated, what data sources supported them, how the templates were structured, and where errors or redundancies could appear in the process. The second was reducing the time spent on manual tasks in order to focus more on analysis.

It was in this context that I discovered VBA, a scripting language integrated into Microsoft Excel. It enabled the automation of repetitive actions through code, allowing multiple operations to be executed with a single command, such as modifying cell values, performing in-memory calculations, refreshing pivot tables, and updating visual elements.

Without realising it, I was entering a field that would later become the foundation of my work with data. Over time, I moved on to modern data management frameworks, cloud services, and dashboard platforms. However, this early experience was fundamental, as it introduced a key principle: a solution is not sufficient if it only works technically; it must also be understandable and usable by the end user.

At first, the KPI automation in Excel mainly benefited me, as it reduced operational time and allowed me to focus on analysis. However, adoption was not immediate. It required explaining the redesign of the reporting approach and adjusting how the team interacted with the dashboard. Over time, the team began to recognise the value of the redesigned reporting application and the capabilities provided by the macros, such as adjusting calculation parameters, modifying visualisations, highlighting key values, and updating datasets more efficiently.

Later, after learning VBA both independently and through formal training, I was able to build a basic button-based interface for navigating sections of the updated report. I also introduced small improvements such as dynamic calculation changes, section navigation, and informational messages. Although simple, this solution allowed other team members, including the department manager, to use it for deeper analysis of the reported data.

Continuous learning by building applications:

From there, I built similar solutions in an insurance company during another internship, in a general services company, and later in the procurement department of a telecommunications company. In all of those contexts, I created Excel macros that became more visual, cleaner, and better hidden in terms of how they were built. I also started saving metadata about usage, connecting to databases such as Access, and recording or querying the information that the Excel application or macro used.

At the end of the 2000s and the beginning of the 2010s, at least in the corporate environments where I worked, Microsoft Excel had a lot of power. And even more so when it was used more like an application than like a simple spreadsheet. The teams I worked with became faster, the data was collected in a more structured way, and calculation errors were reduced when results were shown. Of course there were iterations, version changes and constant adjustments, but even so I always felt very satisfied when those Excel applications really helped.

This topic interested me so much that I ended up doing my undergraduate thesis in industrial engineering on how to simplify the framework of applications analysis and development so that business areas and business users could apply those principles and quickly develop complex applications that automated data analysis and result interpretation. In addition, I also worked as a teaching assistant in the Systems Analysis and Design course at the university where I graduated, where I was able to deepen and consolidate my knowledge until the early 2010s.

The first contact with web development:

Up to that point, my experience was based on a limited set of tools: VBA, Excel macros, Access databases, and systems analysis and design knowledge used to build applications by combining these components. However, in a later role within a telecommunications company, in the customer service and complaints department, the volume of commercial data I needed to analyse to understand the root causes of issues led me to work with a different type of solution.

In that context, I started working with Visual Basic (not to be confused with VBA), meaning there was no predefined framework for user interaction. It was necessary to build a complete application from scratch, defining elements such as the interface, window layout, buttons, navigation flow, installation process, libraries, authentication mechanisms, and security layers. In addition, a key part of the application involved integration with databases such as Microsoft SQL Server, using a structured copy of customer contract data generated through an internal IT process. On top of this, analysts added supplementary information not captured by the department's main system, enabling operational corrections in the complaints process.

This application was well received and became an important tool within the complaint resolution workflow, improving efficiency and reducing repeated incidents.

More information about customer complaint management in the post "Predicting customer complaints".

At that point, a significant shift occurred in my understanding of software development. Within the same department, other team members proposed a different approach: instead of Visual Basic, they built a web application using an Apache server and PHP, connecting to the same SQL Server database. The goal was to extend the reach of the solution, as the desktop application had limitations in terms of compatibility and deployment across user machines.

Moving to a web-based solution allowed wider access to the application and improved its integration into the daily work of the analysis team. In practical terms, it helped standardise the invoice correction process and reduce recurring complaints.

Until then, web applications were still, for me, something technically interesting but not fully explored in practice. I had some exposure to PHP, HTML and CSS code and contributed in a limited way to the development. Shortly after, and before starting an MBA in Belgium after being accepted into a business school, I witnessed the deployment of this solution and its evolution within the organisation.

I remember observing how a web application could run directly in a browser, connect to a database, and operate within internal business workflows. This experience changed my perception of software architecture, as up to that point my focus had been mainly on desktop applications and local automation.

However, at that time I did not go deeper into web development. My focus was shifting towards business and management, not because I had lost interest in development, but because I wanted to broaden my professional profile. Years later, I would return to this path, this time with a broader perspective and closer alignment with modern software development practices. Ultimately, this journey would converge in the creation of the Plaudere platform.

Consolidating application development:

Computing was almost asleep inside me during most of the 2010s. In the MBA I learnt a great deal about many subjects that are important for the general management of a company. And although I was doing an internship in a company that carried out market research on the potential of a new technology product, I could not resist presenting the results of a cost simulation using automations built with VBA and Excel. That made it easier for me to present results and work through different financial scenarios, taking into account the optimisation of product spare parts using a Solver linear programming model.

To this was added a new logistics experience in Lima, where I developed forecasting models and a process based on automating calculations and integrating them into supply processes. And there was even more. In two later experiences in Spain, one in a publishing company and another in a media agency, I still needed my VBA experience because it was very useful to organise and present data from different businesses.

In summary, during a good part of my professional life, VBA, Excel, Access, MySQL, SQL Server and even AutoIt and VB were the tools I used most often. I was always connected in one way or another to business data, and I always applied a layer of calculation automation and information presentation, hiding options and automations through buttons and an interface that over the years became simpler and more precise. I even used the servers where the data was hosted in an inventive way so that I could later ensure viable concurrency, without breaking the application and while minimising technical errors.

First steps in web development:

However, it was at the end of the 2010s, already in the publishing company, that I remembered again that experience in telecommunications where I had seen that similar, and even more robust, applications could be built using web technologies. Of course, there was a big entry barrier, because it meant going into more technical details about how those technologies work, and in my experience I had not needed to learn them before. So I had not yet developed that knowledge.

Therefore, I went to the library of my hometown and borrowed a book to start reading about web technologies, yet I soon realised there was a considerable knowledge barrier. I kept changing books but still could not see how I would be able to build, in a short time, even a very basic website that would prove to me that I really could do it. After several books and even a few web development courses, I did manage to build simple exercises and start to understand the foundations of HTML, CSS, and JavaScript. These technologies were very different from my previous experience with VB and SQL, but somehow I knew I could do it, because it was exactly the same feeling I had years before when I started to explore VBA and Excel in my first internships.

The satisfaction of seeing my first HTML, CSS and JavaScript code appear in the browser was a feeling that pushed me to learn more and more. Then the need to learn backend appeared, because HTML, CSS and JavaScript belonged to the frontend. Understanding the tag language of HTML, the cascading styles of CSS and the first ideas of JavaScript for controlling frontend logic takes time, practice and consistency. But once you have some knowledge, the immediate challenge is to work on the backend.

My first prototypes showed that I had very little interest in going deep into backend development, and at the beginning I saw it as almost unnecessary, trusting too much in the processing power I could achieve from the frontend. The dream of building a website without a backend did not last long. Soon I understood how necessary it was to manage, for example, the connection with the database and, at minimum, to process the requests that came from users in order to load the page. At the start I wrote a minimal piece of code to handle website requests, usually from localhost. Understanding the difference between localhost and production can also be confusing at the beginning, but with patience and study of the foundations of how a website works, you can finally, starting from a basic frontend and backend architecture, find the road that turns an application idea into a real website.

For at least one year I studied, not full time but whenever I had time available, all those basic components of a website and how it really works, as well as the structure of code that later lets you grow. And this is another important point, because I believe it takes patience and practice to understand how your website reacts to demand, and how it can be programmed efficiently so that its growth is viable. You have to think about control, load, style, the data that is recorded, the options, the visual organisation and how the backend behaves when it receives read or write requests.

The reality is that there are many options in web development. I found an endless number of frameworks, tools, APIs, ways to structure the application, ways to organise the code and, later, also matters of content security and backend security. That is why I think one decision was decisive for my progress: I chose to use the smallest possible number of helpers, such as React or Bootstrap, to name a few, and I tried to use a more open style of programming, using JavaScript, HTML, CSS and Node with Express for the backend. This is commonly known as Vanilla JavaScript. In principle, it is a style of programming that I recommend when you start to learn the basics, because once you have created some web projects, it becomes much easier to know which tools you can use to speed up development and reuse code. So I kept going in that direction.

Motivation for web development and streaming:

But when I reflected on the main reason why I started to take an interest in web development, I realised that my motivation was not really to stop using VBA or Microsoft Excel. What I truly wanted was to understand how multimedia on the web worked and to build some application with those capabilities. Of course, first you need to learn the basic tools of web development, including CRUD operations, that is, create, read, update and delete records in a database through a website, as well as understanding the routing of the different pages in an application and some kind of backend programming to personalise the views delivered to the user.
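
The CRUD pattern mentioned above can be sketched without any database at all. This in-memory version is an assumption for illustration, not how Plaudere stores data; a real application would back these four operations with SQL or MongoDB.

```javascript
// Illustrative in-memory CRUD store: the same four operations
// (create, read, update, delete) a website performs against a database.
class CrudStore {
  constructor() {
    this.records = new Map(); // id -> record
    this.nextId = 1;
  }
  create(data) {
    const id = this.nextId++;
    this.records.set(id, { id, ...data });
    return id;
  }
  read(id) {
    return this.records.get(id) || null;
  }
  update(id, changes) {
    const record = this.records.get(id);
    if (!record) return false;
    this.records.set(id, { ...record, ...changes });
    return true;
  }
  delete(id) {
    return this.records.delete(id);
  }
}
```

In a website, each of these methods would sit behind a route handler, which is where the routing knowledge mentioned above comes in.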

However, when I began to explore how applications such as video calls, on-demand streaming videos like YouTube, or online audio and video transmissions from a streamer to an audience work, I felt completely lost at first. I found very little information online to understand the foundations of streaming in web development. It was not clear information, or it only led to finished solutions using special streaming servers. So I asked myself whether it was possible to integrate streaming capabilities into a normal website. And that was how I slowly began to develop the idea of Plaudere.

As I had some knowledge of music and previous experience with streaming applications for musicians, including one called Sofa Session, that experience came back to my mind while I was learning the basics of how sound can travel from sender to receiver. Using WebSockets, the Web Audio API and the getUserMedia API, I could connect to the user's devices, capture chunks of audio, send them through WebSockets to the receiver, and play them back with a basic buffer and assisted playback through the Web Audio API. Thanks to that, I was able to apply the principles of a basic audio streaming application. I think that was my real “hello world” experience in this field.
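
The capture-and-send path can be sketched roughly as below. This is a hedged sketch, not Plaudere's code: the function names are invented, and I use MediaRecorder as one convenient way to slice the microphone signal into chunks. The playback side queues chunks slightly ahead of the audio clock; that scheduling rule is a plain function.

```javascript
// Browser-side sketch: capture microphone audio and push timed chunks
// over a WebSocket. Names and chunk size are illustrative assumptions.
async function startCapture(socket, chunkMs = 250) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data); // one audio chunk per WebSocket message
    }
  };
  recorder.start(chunkMs); // emit a chunk every chunkMs milliseconds
}

// Receiver side: each chunk is scheduled after whatever is already queued,
// but never earlier than the audio clock plus a small lead, so the buffer
// can absorb network jitter. Times are in seconds, like AudioContext time.
function nextStartTime(contextTime, scheduledUntil, minLead = 0.05) {
  return Math.max(scheduledUntil, contextTime + minLead);
}
```

The `minLead` value is the trade-off the text describes: a bigger buffer survives worse networks but adds delay to what the audience hears.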

I had the Sofa Session experience in mind, and I knew how interesting it is to connect musicians who share music with other people. So I decided to build an application around that audio streaming experience, and later also video. I kept learning about CRUD applications, but I also kept thinking about how audio streaming could be improved. And even more, one of the big challenges I set myself, taking into account the Sofa Session experience, was to find a cheaper and faster way to let at least two musicians combine their instruments and show a single sound to a possible audience.

The idea of Plaudere:

After experimenting on my own with web technologies, and after some walkie-talkie style prototypes and very basic CRUD-style social networks, I decided to take the step and shape Plaudere. At first it was called Willaitec, a name that came from the verb to communicate, which is willay in Quechua, combined with “tec”, as an abbreviation of technology. Later it became Plaudere, which comes from Latin and refers to applause. At one point I came up with the idea of creating an applause button in one of the first versions of the website, and I saw that an applause option was really the same as a like in social networks. I liked that idea, and that is how I ended up naming the project Plaudere.

However, getting to the current version was not easy. I had to go through a fast and sometimes difficult evolution, but always with the clear goal of building a hybrid social application that could help music creators build shared live broadcasts.

First prototype of Plaudere:

As soon as I saw that my logic for controlling the audio reception buffer worked, accounting for different network errors on the user side and their impact on a larger or smaller buffer, and therefore on the quality of the transmitted multimedia, I felt ready to build my first Plaudere prototype between 2020 and 2022. Using Express to organise the views, SQL for user CRUD operations and WebSockets for live transmission, and once I learned how to move information from one user to another through WebSockets, I gave life to the first version of Plaudere.

That first version was a multipage website with the ability to create live audio streams and spaces, and it allowed the order of streamers to be selected. In other words, the first user generated a live audio stream and, if a second streamer appeared, this second streamer absorbed the signal from the first, prevented it from reaching the audience directly, and mixed both streams to emit a final combined signal that gave the impression that both musicians were streaming together. However, that approach did not allow the first streamer to hear the second.
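
At its core, combining the two signals before they reach the audience is sample addition. The sketch below is a hypothetical illustration of that mixing step, operating on PCM sample blocks as the Web Audio API exposes them (`Float32Array`, values in [-1, 1]); it is not the prototype's actual mixer.

```javascript
// Mix two blocks of PCM samples into one combined signal.
// Shorter inputs are treated as silence past their end.
function mixSamples(a, b) {
  const length = Math.max(a.length, b.length);
  const mixed = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    // Sum the two signals and clamp to the valid [-1, 1] sample range.
    const sum = (a[i] || 0) + (b[i] || 0);
    mixed[i] = Math.min(1, Math.max(-1, sum));
  }
  return mixed;
}
```

The clamp is the simplest possible limiter; real mixers attenuate each input instead, so loud passages do not clip.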

That was when I started to investigate more deeply. I realised that building an application where audio could be transferred in real time, so that both streamers could hear each other, collaborate and generate one single live audio transmission, required almost no latency between the capture device and the code, and then an almost instant transfer to the receiving user. From the moment audio is created until it is distributed, whether to the audience or to the second streamer, and from that second streamer back to the first and then to the audience, if the delay goes beyond roughly 30 milliseconds, serious musical synchronisation problems appear. In other words, the delay in audio reception and reaction becomes so noticeable that it would confuse the musicians, and a live musical combination would no longer be viable.
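
The budget arithmetic behind that conclusion is simple to make explicit. In this sketch the per-hop delays are invented example numbers, not measurements from Plaudere; the point is only that the hops (capture, network, processing, playback) add up and must stay under the threshold.

```javascript
// Sum per-hop delays along the audio path and compare against a
// synchronisation budget (~30 ms for playable musical interaction).
function isJammable(hopDelaysMs, budgetMs = 30) {
  const total = hopDelaysMs.reduce((sum, d) => sum + d, 0);
  return { totalMs: total, withinBudget: total <= budgetMs };
}
```

A single cross-continent network hop alone typically costs more than the whole budget, which is why real-time jamming over the open internet is so hard.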

That is something I learnt after some research into the possibilities of building a live audio streaming application for jamming. And given the complexity and the investment required, I chose to keep evolving my approach towards something like a sound staircase: first the audio is generated by the first streamer, then it is mixed in the second streamer before reaching the audience. The final result is the same, but I have avoided the complexity of synchronisation, which could become very expensive if it had to be applied at the level of computing power and user devices.

More information about live content synchronisation in Plaudere in the post "Syncing live content".

First version of Plaudere.

Second prototype of Plaudere:

The first prototype depended on remembering the user state through the URL pages of the multipage website, and I found it difficult to keep that state across pages. In fact, there was no authentication mechanism until the third prototype. In that version, the user entered their name and could identify themselves in the application and create content, but that content was lost because the website did not save anything: it only streamed the information live as it was generated.
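
Carrying state through the URL, as that first prototype did, boils down to encoding it into and decoding it from the query string. A minimal sketch with hypothetical helper names (the prototype's real code is not shown here):

```javascript
// Encode a small state object into a page URL, and recover it on the
// next page. State survives navigation but is lost if the URL is lost.
function encodeState(basePath, state) {
  const params = new URLSearchParams(state);
  return `${basePath}?${params.toString()}`;
}

function decodeState(url) {
  const query = url.split('?')[1] || '';
  return Object.fromEntries(new URLSearchParams(query));
}
```

The fragility described above follows directly: every link on every page must re-encode the state, and nothing persists once the tab closes, which is what sessions and authentication later solved.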

That was why I started to think about a better version of the website pages, because the first one was very basic, and I thought that a SPA model, a single page application, could make it easier to recognise the user, their content and the information shared in the chat and the streaming they generated. That is how the second prototype was born.

I developed that second prototype in order to apply the SPA concept to control the Plaudere website. But, as far as I could research and prototype, at least for the type of application I wanted, the frontend had to work too hard to show the right parts of the website without changing the main URL, which meant more resources on the user side to render the page, and also less room for audio reception and emission operations in live streaming.

For at least two years, during 2022 and 2023, I tried to make the SPA code more efficient, moving as many operations as possible to the website's JavaScript worker. But I did not get a satisfying result. My code became difficult to maintain and, depending on the device, the website could not be displayed correctly. In addition, it caused confusion for users, because they could not easily go back unless special logic was programmed to capture the back action and return the application to a previous state.

In the end I was not happy with that prototype, but I did understand something important: the website had to be light and the frontend operations had to be minimal so that it would not become overloaded. And that was how I returned to a multipage approach, similar to the first version.

Second version of Plaudere.

Third prototype of Plaudere, current version:

Going from the single page application prototype back to a multipage model was dramatic. I saw that I could not reuse my previous code except for the streaming part, and I decided to build a third prototype that would truly go beyond the limits of the previous versions. This time the goal was clear: for the first time, to authenticate the user through Google or Microsoft accounts, so that they would not need to create an account with email and password, and all that this usually involves.

The third prototype also aimed to improve the visual side of the earlier versions, trying to make the application more friendly and simpler. I also set myself another important challenge: the two previous prototypes did not keep permanent user information, such as preferences, name, account or content that the user might create. So I brought CRUD operations back, this time placing the streaming experience inside a blog and article model so that, in addition to audio live streaming and, for the first time, also video live streaming, users could create articles and static content that would give context to the transmissions and to the spaces they built.

This third prototype is the current one, and I worked on it during 2024, 2025 and 2026. It is the active version of the website. It currently uses cookies to store some basic user preferences, such as dark or light mode and the language of the interface, which now has three versions: English, Spanish and Catalan, the last as a tribute to the Spanish autonomous community where I live. It also uses a MongoDB database, which I have used experimentally not only for less frequent operations such as content creation and editing, but also to power the streaming: server operations regularly query the database of chunks generated in transmissions and, through a cache, deliver that information on demand to users who request access.
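
The cookie-based preferences mentioned above can be read with a small pure function. The cookie names (`theme`, `lang`) and defaults here are assumptions for illustration, not necessarily the names Plaudere uses.

```javascript
// Parse theme and language preferences out of a Cookie header string,
// falling back to defaults when a value is absent. Names are hypothetical.
function parsePreferences(cookieHeader) {
  const prefs = { theme: 'light', lang: 'en' }; // assumed defaults
  for (const part of cookieHeader.split(';')) {
    const [key, value] = part.trim().split('=');
    if (key === 'theme' || key === 'lang') prefs[key] = value;
  }
  return prefs;
}
```

Keeping only non-sensitive preferences in cookies, while identity lives with the OAuth provider, is a common split that matches the design described here.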

It also allows users not only to transmit through their devices, but also through static content such as audio or video. And it uses Markdown technology to allow articles with rich content, including styles, titles, subtitles, bullet points, embedded content, comments, replies to comments and basic reactions. In addition, it includes authentication through Google or Microsoft, so that the user does not need to create a new account to use Plaudere, but can log in with the provider they prefer using existing accounts.

More information about creating a live streaming prototype in Plaudere in the post "Creating a live streaming prototype".

New things I researched in this third prototype:

There are new things I researched during this third prototype. For example, the arrival of ChatGPT and, more generally, of generative artificial intelligence found me in the middle of web development, working on the second prototype and moving towards the third. Not having an agentic layer in a service can now become an important risk, because such a layer is seen as part of the current state of the art. It was not very clear how I could apply an agentic layer to this service, but after researching in depth the possibilities of building a basic agent during 2024 and 2025, I was able to integrate a simple model using Qwen 1.5 0.5B.

More information about the Plaudere AI agent in the post "AI assistance with small language models".

I had to adapt the interface to give the user a space where they could use that agent. In this case, the agent is used to answer simple questions about the website, to answer basic questions about the articles created by users, and also about user profiles, while of course limiting the information that comes from the platforms and only giving access to information that users usually add to their profile, such as preferences, reviews and similar data.
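
One simple way to enforce that restriction is an allow-list over profile fields, so the agent's context can only ever contain what users publish. The field names below are made up for illustration; the real profile schema is not shown in this post.

```javascript
// Build the context the assistant may see from a user profile:
// copy only explicitly allowed fields, dropping everything else.
const PUBLIC_FIELDS = ['name', 'preferences', 'reviews']; // assumed names

function agentContext(profile) {
  const context = {};
  for (const field of PUBLIC_FIELDS) {
    if (field in profile) context[field] = profile[field];
  }
  return context;
}
```

An allow-list is safer than a block-list here: a field added to the schema later stays hidden from the agent until someone deliberately publishes it.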

Conclusion and future vision:

I am very satisfied with this new prototype, which I also had to adjust in order to maximise compatibility with other browsers besides Chrome, such as Edge, Firefox, Brave and even Safari. Safari creates big challenges in terms of APIs, JavaScript behaviour, security, and restrictions on workers and other options, especially when making sure that the CSS styling stays consistent across browsers. The tests carried out so far show that the website works in its current version, with all its functions and restrictions.

During the deployment of the website through the plaudere.com domain, I started to notice that robots from different technology companies were browsing the website. Some of them found my website and started checking the HTML links that exist in the different pages, which allowed some security gaps to be found and then solved step by step. In this way I kept protecting the information on the website, while also applying regular optimisations to the MongoDB database to make sure the experience stays good, and managing backend restrictions on CRUD operations to avoid involuntary data loss and also to limit the website resources so that an attack cannot create traffic beyond the limits of the hosting.

These are all important topics to think about when working with a website. And that is one of the biggest lessons from this whole process: always keep improving the website, making it better, more efficient and more secure for users. Plaudere was not born only as an application. It was born as an idea built over many years, as a way to bring together computing, data, music, collaboration and a real need. And in the end, I think that is what gives it meaning.

This is, in essence, the story of Plaudere. But it is also the story of how I have been learning to build, to make mistakes, to correct, to start again and to keep moving forward until I reached a website that now represents all that journey.
