Category: Blog

  • catsleep

    catsleep

A lightweight tool that reminds the user to take a break after a set time period.

    CatSleep

    CatSleep image source: personalitycafe

    1. Introduction

catsleep is a tool for people who work long hours at a computer, especially programmers, software engineers, and other IT professionals. Sometimes we are so engaged in a task that we forget to take a break or have a walk, which can be very harmful to both our mental and physical health. So we started this tiny effort: a tool that reminds the user after a certain time to take a break and get some refreshment, so that they can return to their work with a fresh mind and keep their mental and physical health sound.


    2. Installation

    2.1 Install in Linux

    2.1.1 Installation

• Install using the following commands:
git clone https://github.com/faruk-ahmad/catsleep
cd catsleep
bash install.sh
• Restart your computer for the installation to take effect.

    2.1.2 Uninstallation

• Uninstall the catsleep tool using the following commands:
cd /home/<user>/catsleep
bash uninstall.sh
• Enter “y” if you are prompted for approval to remove some config files.
• Restart your computer to finish the uninstallation process.

    2.2 Install in MacOS

    git clone https://github.com/faruk-ahmad/catsleep
    # you can run the catsleep/main.py file to run the application
    # installer not available yet

    2.3 Install in Windows

    git clone https://github.com/faruk-ahmad/catsleep
    # you can run the catsleep/main.py file to run the application
    # installer not available yet

    3. Configuring your catsleep

The user configuration file resides in your home directory if you are using a Linux-based OS such as Ubuntu. It is a hidden file named .catsleep_config.json. If you want to change any default behavior, such as the interval between alarms/notifications, or want to switch the voice, edit this configuration file.

Open this file with any text editor; it looks like this:

    User Configuration file

The parameters in the configuration are as follows:

Parameter | Explanation | Possible Values | Effect
interval_minutes | Interval between alarms/notifications, an integer value in minutes | 1 to infinite | Changes the interval between alarms/notifications
frequency_number | Number of consecutive alarms/notifications in a slot | 1 to infinite | Makes multiple consecutive alarms at each alarm
frequency_interval_minutes | Gap in minutes between consecutive alarms at a time | 1 <= value < interval | Makes multiple alarms separated by this gap
play_audio_on | Turns the audio message alarm on/off | “yes” for on, “no” for off | Turns the audio message in notifications on/off
show_text_on | Turns the text notification on/off | “yes” for on, “no” for off | Turns the text bubble in notifications on/off
play_beep_on | Turns the beep sound on/off | “yes” for on, “no” for off | Turns the beep sound in notifications on/off
voice_mode | Switches between male and female voice modes | “male”, “female”, “random” | Changes the audio message voice
• The notification/alarm works best with all three [beep, audio & text] turned on.
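Based on the parameters above, the configuration file might look roughly like this (a hypothetical sketch; the key names follow the table, but the values shown are illustrative, not the project's actual defaults):

```json
{
  "interval_minutes": 60,
  "frequency_number": 3,
  "frequency_interval_minutes": 2,
  "play_audio_on": "yes",
  "show_text_on": "yes",
  "play_beep_on": "yes",
  "voice_mode": "random"
}
```

With values like these, catsleep would fire a notification every 60 minutes, repeating it 3 times at 2-minute gaps, with beep, audio, and text all enabled and a randomly chosen voice.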

See more detailed configuration examples in the Example doc.


    4. Getting Started


    5. Features


5.1 Available Features

• Notifications to take a break from work
• Customizable interval and alarm frequency
• Audio messages with voice mode switching

    5.2 Features in Queue

• Let users customize audio messages, text messages, and beep sounds
• Multiple different intervals between alarms
• Extract the task list from a user-linked calendar and set alarms based on tasks

    6. Report Issues


Before you report an issue on GitHub, please make sure you are up to date with the latest commit in our GitHub repo.

    7. How to Contribute


You can contribute in one or more of the following ways:

    7.1 Bug Reporting

-- You can report a bug by opening an issue on GitHub
-- Or you can contribute by sharing how you solved an issue
    

    7.2 Requesting a Feature

-- If you come across any new idea that can be added as a feature
    

    7.3 Adding Feature, Pull Request

-- If you come up with a new feature idea and implement it
-- Send us a pull request
    

7.4 Adding Notification Resources

-- Add new cool audio files for notifications
-- Add amazing text messages for notifications
-- Add amazing beep sounds for notifications
    
    Visit original content creator repository https://github.com/faruk-ahmad/catsleep
  • plex-beetbrainz

    plex-beetbrainz

    Submit your listens from Plex to ListenBrainz. Integrates with beets
    for that important metadata.

    Why

I want to track my music activity on ListenBrainz. As I use Plex for music playback,
I was kind of stuck with no options. I dabbled in Jellyfin and actually
adapted the Last.fm plugin for ListenBrainz.
However, Jellyfin still does not offer the same level of user experience as Plex (especially on mobile),
so I still don’t use it as my primary player.

There is eavesdrop.fm,
but it doesn’t submit the track metadata (as Plex doesn’t provide it),
which means that the submitted Listens are not linked to MusicBrainz database entries.

As I want to future-proof my experience with ListenBrainz,
I want to submit as much data as possible – so when new features are introduced, the metadata can be used, if applicable.

    How to run

    Naturally, the beets integration only works if you have a beets library somewhere available
    and you also use beets to manage your music library (to avoid no matches).
    If you decide to edit artist/album/track names, you need to do so in both Plex and beets, so the metadata match will be possible.
    If you want, you can also run this app without beets integration. In that case, only the data provided by Plex will be submitted.


There are two ways to run this app.
The first is to run the binary as-is. The second is to use the docker image.
64-bit binaries are provided for macOS, Linux, and Windows.
The binaries are available here.

The docker image is only available for the 64-bit Linux platform and is available here.

    If you need another platform, you can easily compile the app or build the docker image yourself.
    Refer to Building section for more information.

    Before you run the app, make sure you have all environment variables set as described below in the Configuration section. To configure the webhook itself, read the following section.

    Webhook configuration

Starting from version 1.2, you can choose between Plex and Tautulli webhooks. Obviously, each webhook “type” has its own advantages and disadvantages.

The Plex webhook is easier to set up: you just configure the URL and that’s it. But you also need Plexpass to use webhooks at all.

Tautulli webhooks are a bit more work than Plex. The first disadvantage is that you of course need Tautulli itself. If you don’t know about it, you can learn more here. Another disadvantage is – as already mentioned – that it’s just a bit more work to configure. However, Tautulli does not require Plexpass for most of what it does, and you also have more control over the webhooks themselves. Another advantage is that its webhooks seem to be reliable, unlike the Plex ones, but that’s something I have not extensively tested.

    Each webhook setup is described in the following sections.

    Plex

    To configure Plex webhook, go to your PMS webhook settings and create a new webhook pointing to the IP address or host where this app is running, together with the port (default 5000) and /plex path. For example: http://localhost:5000/plex.

Note that for listen submission, Plex’s media.scrobble event is used. This event does not conform to ListenBrainz’s specification for listen submission (4 minutes or half of the track).

    Tautulli

If you want to use Tautulli instead of Plex for webhooks, you need to properly configure the webhook in Tautulli. You can create a webhook in Tautulli under Notification Agents in Settings. The following sections explain every tab of the webhook configuration in detail.

    Configuration tab

    The webhook URL is the same as if you’d use Plex, however the path is /tautulli. For example: http://localhost:5000/tautulli. The webhook method should be set to POST.

    Triggers tab

Select Playback Start and Watched. Optionally, you can also select Playback Resume if you want your now-playing status on ListenBrainz to be a bit more precise. The watched percentage can be configured in the General settings. According to ListenBrainz guidelines, the Music Listened percentage should be set to 50%, but of course that is everyone’s own decision. Personally, I left it at 85%, which is the default value.

    Conditions tab

You can skip this tab entirely, but if you want to minimize network traffic for some reason, you can limit the webhook requests to be sent only if the Media Type equals track, and you can also allow only webhook events for users who are configured in the beetbrainz app. That said, these checks exist on the app side anyway, so this is completely optional as already mentioned.

    Data tab

    For each trigger selected in Triggers tab, paste this JSON string into JSON Data field:

    {
      "action": "{action}",
      "user_name": "{username}",
      "artist_name": "{artist_name}",
      "album_name": "{album_name}",
      "track_name": "{track_name}",
      "track_artist": "{track_artist}",
      "media_type": "{media_type}"
    }

    And that’s all. Don’t forget to save the changes and you should be good to go.

    Beetbrainz configuration

There are a few configuration options, all set via environment variables.
    These variables are:

    • USER_TOKENS
      • Comma separated list of <user>:<listenbrainz token> pairs for configuration and submission.
      • The user key must correspond to Plex user, not ListenBrainz user.
      • The user matching is performed case-insensitively.
    • BEETS_IP
      • IP address of the Beets web application. If not set, the beets integration is simply disabled.
    • BEETS_PORT
      • Port of the Beets web application. Defaults to 8337. Has no effect if BEETS_IP is not set.
    • BEETBRAINZ_IP
      • Bind to specified IP of an interface. If not set, all interfaces will be used (0.0.0.0).
      • Applies only to IPv4. IPv6 is not supported.
    • BEETBRAINZ_PORT
      • Listen on the specified port. Defaults to 5000.
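For example, a minimal environment setup might look like this (the username, token, and addresses are hypothetical placeholders; substitute your own):

```shell
# Hypothetical values -- replace with your own Plex username and ListenBrainz token.
export USER_TOKENS="alice:lb-token-abc123"  # comma-separated <plex user>:<listenbrainz token> pairs
export BEETS_IP="127.0.0.1"                 # leave unset to disable the beets integration
export BEETS_PORT="8337"                    # default
export BEETBRAINZ_PORT="5000"               # default
# Then start the app:
# ./plex-beetbrainz
```

Remember that the user part of each pair must match the Plex username, not the ListenBrainz one.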

    Building

    If you need the binary or docker image on your specific platform, or just simply want to compile the code (or build the docker image) yourself…

    Compilation

This app is written in Go, so you need to set up a Go development environment first. Refer to this guide for more information.
After you have everything set up, simply clone this repository and run go build. This should produce a binary named plex-beetbrainz in the current directory.

    Building the docker image

The image build is two-staged to minimize the image size. First, a Golang image is downloaded to build the app, and then the binary is copied into a distroless image.

    Provided you have docker installed, simply clone the repository and run docker build . -t <your_image_tag>.

    Visit original content creator repository
    https://github.com/lyarenei/plex-beetbrainz

  • videosdk-rtc-android-java-sdk-example

    🚀 Video SDK for Android

    Documentation Firebase Discord Register

    At Video SDK, we’re building tools to help companies create world-class collaborative products with capabilities for live audio/video, cloud recordings, RTMP/HLS streaming, and interaction APIs.

    🥳 Get 10,000 minutes free every month! Try it now!

    ⚡️From Clone to Launch – Get Started with the Example in 5 mins!

    Java

    📚 Table of Contents

    📱 Demo App

    📱 Download the sample Android app here: https://appdistribution.firebase.dev/i/99ae2c5db3a7e446

    ⚡ Quick Setup

    1. Sign up on VideoSDK to grab your API Key and Secret.
    2. Familiarize yourself with Token

    🛠 Prerequisites

    📦 Running the Sample App

    Step 1: Clone the Repository

    Clone the repository to your local environment.

    git clone https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example.git

    Step 2: Open and Sync the Project

    Open the cloned project in Android Studio and perform a project sync.

    Step 3: Modify local.properties

    Generate a temporary token from your Video SDK Account and update the local.properties file:

    auth_token = TEMPORARY-TOKEN

    Step 4: Run the sample app

    Run the Android app by pressing Shift+F10 or by clicking the ▶ Run button in the Android Studio toolbar.

    🔥 Meeting Features

    Unlock a suite of powerful features to enhance your meetings:

    Feature Documentation Description
    📋 Precall Setup Setup Precall Configure audio, video devices, and other settings before joining the meeting.
    🤝 Join Meeting Join Meeting Allows participants to join a meeting.
    🚪 Leave Meeting Leave Meeting Allows participants to leave a meeting.
    🎤 Toggle Mic Mic Control Toggle the microphone on or off during a meeting.
    📷 Toggle Camera Camera Control Turn the video camera on or off during a meeting.
    🖥️ Screen Share Screen Share Share your screen with other participants during the call.
    🔊 Change Audio Device Switch Audio Device Select an input-output device for audio during a meeting.
🔌 Change Video Device Switch Video Device Select a video device (camera) during a meeting.
    ⚙️ Optimize Audio Track Audio Track Optimization Enhance the quality and performance of media tracks.
    ⚙️ Optimize Video Track Video Track Optimization Enhance the quality and performance of media tracks.
    💬 Chat In-Meeting Chat Exchange messages with participants through a Publish-Subscribe mechanism.
📸 Image Capture Image Capturer Capture images of other participants from their video stream, particularly useful for Video KYC and identity verification scenarios.
    📁 File Sharing File Sharing Share files with participants during the meeting.
    🖼️ Virtual Background Virtual Background Add a virtual background or blur effect to your video during the call.
    📼 Recording Recording Record the meeting for future reference.
    📡 RTMP Livestream RTMP Livestream Stream the meeting live to platforms like YouTube or Facebook.
    📝 Real-time Transcription Real-time Transcription Generate real-time transcriptions of the meeting.
    🔇 Toggle Remote Media Remote Media Control Control the microphone or camera of remote participants.
    🚫 Mute All Participants Mute All Mute all participants simultaneously during the call.
    🗑️ Remove Participant Remove Participant Eject a participant from the meeting.

    🧠 Key Concepts

    Understand the core components of our SDK:

    • Meeting – A Meeting represents Real-time audio and video communication.

      Note: Don't confuse the terms Room and Meeting; both mean the same thing 😃

• Sessions – A particular duration you spend in a given meeting is referred to as a session; you can have multiple sessions with the same meetingId.

    • Participant – A participant refers to anyone attending the meeting session. The local participant represents yourself (You), while all other attendees are considered remote participants.

    • Stream – A stream refers to video or audio media content published by either the local participant or remote participants.

    🔐 Token Generation

The token is used to create and validate a meeting using the API and also to initialize a meeting.

    🛠️ Development Environment:

• You may use a temporary token for development. To create a temporary token, go to VideoSDK’s dashboard.

    🌐 Production Environment:

• You must set up an authentication server to authorize users in production. To set up an authentication server, please take a look at our official example repository: videosdk-rtc-api-server-examples.

    🧩 Project Overview

    App Behaviour with Different Meeting Types

    • One-to-One meeting – The One-to-One meeting allows 2 participants to join a meeting in the app.

    • Group Meeting – The Group meeting allows any number of participants to join a meeting in the app.

    🏗️ Project Structure

• We have organized the screens and widgets into 3 packages in the following folder structure:
  • OneToOneCall – Includes all classes/files related to One-to-One meetings.
  • GroupCall – Includes all classes/files related to Group meetings.
  • Common – Includes all classes/files that are used by both meeting types.

    1. Pre-Call Setup on Join Screen

    • DeviceAdapter.java : This is a custom RecyclerView.Adapter used to display a list of audio devices. It takes a list of devices and a click listener to handle item clicks. Each item shows the device name and an icon.

    • bottom_sheet.xml : This layout defines the structure of the bottom sheet dialog, which contains a RecyclerView that displays the list of items. The RecyclerView fills the available space and references list_items_bottom_sheet for its individual list items.

    • list_items_bottom_sheet.xml : This layout defines how each item in the bottom sheet looks. It contains a LinearLayout with an ImageView for the device icon, a TextView for the device label, and another ImageView for a checkmark icon. The checkmark is used to indicate the currently selected device.

    2. Create or Join Meeting

    • NetworkUtils.java – This class is used to call the API to generate a token, create and validate the meeting.

    • CreateOrJoinActivity.java and activity_create_or_join.xml : This Activity allows users to either create or join a meeting. It manages microphone and webcam permissions and handles UI interactions like enabling/disabling audio and video. It also switches between the CreateMeetingFragment and JoinMeetingFragment, depending on user actions.

    • CreateOrJoinFragment.java and fragment_createorjoin.xml : This fragment provides two buttons for users to either create or join a meeting. On button clicks, it transitions to the respective fragments (CreateMeetingFragment or JoinMeetingFragment) within CreateOrJoinActivity.

    • CreateMeetingFragment.java and fragment_create_meeting.xml : This fragment enables users to create a new meeting by selecting a meeting type (e.g., One-to-One or Group Call) and entering their name. Upon submission, it makes a network request to create a meeting and navigates to the relevant meeting activity.

    • JoinMeetingFragment.java and fragment_join_meeting.xml : This fragment allows users to join an existing meeting by entering a valid meeting ID and their name. It validates input and, on success, navigates to the appropriate meeting activity based on the selected meeting type.

    3. Switch AudioDevice

    • AudioDeviceListAdapter.java : This is a custom ArrayAdapter that displays a list of audio devices in a dialog. It uses a ListItem model to represent each audio device. The layout for each list item is defined in audio_device_list_layout.xml.

    • ListItem.java : This class represents an individual list item (audio device) with properties such as the device name, icon, and a description, and a boolean indicating whether the item is selected.

    • audio_device_list_layout.xml : This layout defines the appearance of each audio device in the list.

    4. Chat

• MessageAdapter.java : This is a custom RecyclerView.Adapter for displaying chat messages in a meeting.

    • item_message_list.xml : This layout defines the structure of each chat message in the list. It displays the sender’s name, the message, and the message timestamp.

    5. ParticipantList

    • ParticipantListAdapter.java : This adapter displays the list of meeting participants in a RecyclerView. It includes the local user and updates in real-time as participants join or leave the meeting.

    • layout_participants_list_view.xml : This layout defines the structure for the participant’s list view. It includes a RecyclerView that lists each participant using the item_participant_list_layout.

    • item_participant_list_layout.xml : This layout defines the appearance of each participant in the list. It displays the participant’s name, microphone, and camera status.

    • OneToOneCallActivity.java : OneToOneCallActivity.java handles one-on-one video call, providing features like microphone and camera control, screen sharing, and participant management. It supports real-time chat and meeting event listeners for tasks like recording and screen sharing. The activity also displays session elapsed time and handles permissions for audio, video, and screen sharing.
    • GroupCallActivity.java : The GroupCallActivity class manages the main UI and logic for initiating and maintaining a group video call. It serves as the primary activity where users can join a video call session, toggle mic and camera. It also manages the video grid where all participants are displayed using ParticipantViewFragment and ParticipantViewAdapter.
    • ParticipantViewFragment.java : Displays an individual participant’s video feed and controls within a fragment, updating the UI based on participant state changes.
    • ParticipantViewAdapter.java : Binds participant data to a RecyclerView, dynamically updating the video grid as participants join, leave, or change state.
    • ParticipantChangeListener.java : Listens for participant-related events (join, leave, state changes) and triggers UI updates.
    • ParticipantState.java : Represents the current state of a participant, such as mute and video status, for UI display and logic handling.

    📖 Examples

    📝 Documentation

    Explore more and start building with our Documentation

    🤝 Join Our Community

    • Discord: Engage with the Video SDK community, ask questions, and share insights.
    • X: Stay updated with the latest news, updates, and tips from Video SDK.
    Visit original content creator repository https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example
  • videosdk-rtc-android-java-sdk-example

    🚀 Video SDK for Android

    Documentation Firebase Discord Register

    At Video SDK, we’re building tools to help companies create world-class collaborative products with capabilities for live audio/video, cloud recordings, RTMP/HLS streaming, and interaction APIs.

    🥳 Get 10,000 minutes free every month! Try it now!

    ⚡️From Clone to Launch – Get Started with the Example in 5 mins!

    Java

    📚 Table of Contents

    📱 Demo App

    📱 Download the sample Android app here: https://appdistribution.firebase.dev/i/99ae2c5db3a7e446

    ⚡ Quick Setup

    1. Sign up on VideoSDK to grab your API Key and Secret.
    2. Familiarize yourself with Token

    🛠 Prerequisites

    📦 Running the Sample App

    Step 1: Clone the Repository

    Clone the repository to your local environment.

    git clone https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example.git

    Step 2: Open and Sync the Project

    Open the cloned project in Android Studio and perform a project sync.

    Step 3: Modify local.properties

    Generate a temporary token from your Video SDK Account and update the local.properties file:

    auth_token = TEMPORARY-TOKEN

    Step 4: Run the sample app

    Run the Android app by pressing Shift+F10 or by clicking the ▶ Run button in the Android Studio toolbar.

    🔥 Meeting Features

    Unlock a suite of powerful features to enhance your meetings:

    Feature Documentation Description
    📋 Precall Setup Setup Precall Configure audio, video devices, and other settings before joining the meeting.
    🤝 Join Meeting Join Meeting Allows participants to join a meeting.
    🚪 Leave Meeting Leave Meeting Allows participants to leave a meeting.
    🎤 Toggle Mic Mic Control Toggle the microphone on or off during a meeting.
    📷 Toggle Camera Camera Control Turn the video camera on or off during a meeting.
    🖥️ Screen Share Screen Share Share your screen with other participants during the call.
    🔊 Change Audio Device Switch Audio Device Select an input-output device for audio during a meeting.
    🔌 Change Video Device Switch Video Device Select an output device for audio during a meeting.
    ⚙️ Optimize Audio Track Audio Track Optimization Enhance the quality and performance of media tracks.
    ⚙️ Optimize Video Track Video Track Optimization Enhance the quality and performance of media tracks.
    💬 Chat In-Meeting Chat Exchange messages with participants through a Publish-Subscribe mechanism.
    📸 Image Capture Image Capturer Capture images of other participant from their video stream, particularly useful for Video KYC and identity verification scenarios.
    📁 File Sharing File Sharing Share files with participants during the meeting.
    🖼️ Virtual Background Virtual Background Add a virtual background or blur effect to your video during the call.
    📼 Recording Recording Record the meeting for future reference.
    📡 RTMP Livestream RTMP Livestream Stream the meeting live to platforms like YouTube or Facebook.
    📝 Real-time Transcription Real-time Transcription Generate real-time transcriptions of the meeting.
    🔇 Toggle Remote Media Remote Media Control Control the microphone or camera of remote participants.
    🚫 Mute All Participants Mute All Mute all participants simultaneously during the call.
    🗑️ Remove Participant Remove Participant Eject a participant from the meeting.

    🧠 Key Concepts

    Understand the core components of our SDK:

    • Meeting – A Meeting represents Real-time audio and video communication.

      Note: Don't confuse the terms Room and Meeting; both mean the same thing 😃

    • Sessions – A particular duration you spend in a given meeting is referred as a session, you can have multiple sessions of a specific meetingId.

    • Participant – A participant refers to anyone attending the meeting session. The local participant represents yourself (You), while all other attendees are considered remote participants.

    • Stream – A stream refers to video or audio media content published by either the local participant or remote participants.

    🔐 Token Generation

    The token is used to create and validate a meeting using API and also initialize a meeting.

    🛠️ Development Environment:

    • You may use a temporary token for development. To create a temporary token, go to VideoSDK’s dashboard .

    🌐 Production Environment:

    • You must set up an authentication server to authorize users for production. To set up an authentication server, please take a look at our official example repositories. videosdk-rtc-api-server-examples

    🧩 Project Overview

    App Behaviour with Different Meeting Types

    • One-to-One meeting – The One-to-One meeting allows 2 participants to join a meeting in the app.

    • Group Meeting – The Group meeting allows any number of participants to join a meeting in the app.

    🏗️ Project Structure

    • The screens and widgets are organized into 3 packages in the following folder structure:
      • OneToOneCall – It includes all classes/files related to OneToOne meetings.
      • GroupCall – It includes all classes/files related to the Group meetings.
      • Common – It includes all the classes/files that are used in both meeting types.

    1. Pre-Call Setup on Join Screen

    • DeviceAdapter.java : This is a custom RecyclerView.Adapter used to display a list of audio devices. It takes a list of devices and a click listener to handle item clicks. Each item shows the device name and an icon.

    • bottom_sheet.xml : This layout defines the structure of the bottom sheet dialog, which contains a RecyclerView that displays the list of items. The RecyclerView fills the available space and references list_items_bottom_sheet for its individual list items.

    • list_items_bottom_sheet.xml : This layout defines how each item in the bottom sheet looks. It contains a LinearLayout with an ImageView for the device icon, a TextView for the device label, and another ImageView for a checkmark icon. The checkmark is used to indicate the currently selected device.

    2. Create or Join Meeting

    • NetworkUtils.java – This class is used to call the API to generate a token, create and validate the meeting.

    • CreateOrJoinActivity.java and activity_create_or_join.xml : This Activity allows users to either create or join a meeting. It manages microphone and webcam permissions and handles UI interactions like enabling/disabling audio and video. It also switches between the CreateMeetingFragment and JoinMeetingFragment, depending on user actions.

    • CreateOrJoinFragment.java and fragment_createorjoin.xml : This fragment provides two buttons for users to either create or join a meeting. On button clicks, it transitions to the respective fragments (CreateMeetingFragment or JoinMeetingFragment) within CreateOrJoinActivity.

    • CreateMeetingFragment.java and fragment_create_meeting.xml : This fragment enables users to create a new meeting by selecting a meeting type (e.g., One-to-One or Group Call) and entering their name. Upon submission, it makes a network request to create a meeting and navigates to the relevant meeting activity.

    • JoinMeetingFragment.java and fragment_join_meeting.xml : This fragment allows users to join an existing meeting by entering a valid meeting ID and their name. It validates input and, on success, navigates to the appropriate meeting activity based on the selected meeting type.

    3. Switch AudioDevice

    • AudioDeviceListAdapter.java : This is a custom ArrayAdapter that displays a list of audio devices in a dialog. It uses a ListItem model to represent each audio device. The layout for each list item is defined in audio_device_list_layout.xml.

    • ListItem.java : This class represents an individual list item (audio device) with properties such as the device name, icon, and a description, and a boolean indicating whether the item is selected.

    • audio_device_list_layout.xml : This layout defines the appearance of each audio device in the list.

    4. Chat

    • MessageAdapter.java : This is a custom RecyclerView.Adapter for displaying chat messages in a meeting.

    • item_message_list.xml : This layout defines the structure of each chat message in the list. It displays the sender’s name, the message, and the message timestamp.

    5. ParticipantList

    • ParticipantListAdapter.java : This adapter displays the list of meeting participants in a RecyclerView. It includes the local user and updates in real-time as participants join or leave the meeting.

    • layout_participants_list_view.xml : This layout defines the structure for the participant’s list view. It includes a RecyclerView that lists each participant using the item_participant_list_layout.

    • item_participant_list_layout.xml : This layout defines the appearance of each participant in the list. It displays the participant’s name, microphone, and camera status.

    • OneToOneCallActivity.java : Handles one-on-one video calls, providing features like microphone and camera control, screen sharing, and participant management. It supports real-time chat and meeting event listeners for tasks like recording and screen sharing. The activity also displays the elapsed session time and handles permissions for audio, video, and screen sharing.
    • GroupCallActivity.java : The GroupCallActivity class manages the main UI and logic for initiating and maintaining a group video call. It serves as the primary activity where users can join a video call session and toggle the mic and camera. It also manages the video grid where all participants are displayed using ParticipantViewFragment and ParticipantViewAdapter.
    • ParticipantViewFragment.java : Displays an individual participant’s video feed and controls within a fragment, updating the UI based on participant state changes.
    • ParticipantViewAdapter.java : Binds participant data to a RecyclerView, dynamically updating the video grid as participants join, leave, or change state.
    • ParticipantChangeListener.java : Listens for participant-related events (join, leave, state changes) and triggers UI updates.
    • ParticipantState.java : Represents the current state of a participant, such as mute and video status, for UI display and logic handling.

    📖 Examples

    📝 Documentation

    Explore more and start building with our Documentation

    🤝 Join Our Community

    • Discord: Engage with the Video SDK community, ask questions, and share insights.
    • X: Stay updated with the latest news, updates, and tips from Video SDK.
    Visit original content creator repository https://github.com/videosdk-live/videosdk-rtc-android-java-sdk-example
  • hi-lo

    Screenshot

    Screenshot!

    Backend

    The backend was created using Express, GraphQL and Apollo Server.

    How the game works:

    • The game starts with a deck of cards: the first card is drawn from the deck and added to a pile. The user then has to guess whether the next card’s number will be higher or lower than the latest card. If they are correct, that card is added to the pile.
    • If they are incorrect, the user gets a point for every card in the pile at that time (for example, if 10 cards were in the pile, they would get 10 points), and the pile is cleared. After a user has 3 successful guesses in a row, they can “pass” to the other player (only 2 players need to be supported, and only one can guess at a time). By “pass” we mean that if you start as Player 1, you can change to Player 2. Player 2 can pass back to Player 1 once they get 3 successful guesses in a row.
    • The player with the fewest points at the end wins.
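The scoring rules can be sketched in a few lines. This is an illustrative Python reading of the rules (the repo itself is JavaScript); it assumes an incorrectly guessed card starts the new pile and that ties count as incorrect, since the rules leave both cases open:

```python
def play(draws, guesses):
    """Score one player's turn under the rules above.
    draws: card ranks in draw order; guesses: "higher"/"lower"
    for each card after the first."""
    pile = [draws[0]]
    points = 0
    for card, guess in zip(draws[1:], guesses):
        went_higher = card > pile[-1]
        if card != pile[-1] and (guess == "higher") == went_higher:
            pile.append(card)      # correct: card joins the pile
        else:
            points += len(pile)    # one point per card in the pile
            pile = [card]          # pile is cleared; new card starts it
    return points

# 8 > 5 is correct; guessing "higher" on 2 after 8 is wrong -> +2 points;
# 9 > 2 is correct again, so the final score is 2.
print(play([5, 8, 2, 9], ["higher", "higher", "higher"]))  # → 2
```

A full implementation would also track the 3-in-a-row streak that lets a player pass the turn.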

    Frontend

    ⚠️ Frontend WIP: The frontend is still incomplete; there are a few bugs with guessing on the 3rd try.

    This project was bootstrapped with Create React App but was later migrated to Vite, which is faster and easier to use. The app also uses Redux, Apollo Client and the MUI React UI library.

    How to run the app

    In the project directory, after installing the dependencies you can run:

    npm start

    Runs the app in development mode, using Concurrently to run both the client and server.

    Open http://localhost:3000 to view the client in the browser. Open http://localhost:4000/graphql to view the graphql server in the browser.

    Visit original content creator repository https://github.com/Imtiyaz-CHOUJAI/hi-lo
  • STL10_Segmentation

    STL10 – Segmentation

    Please consider sponsoring this repo so that we can continue to develop high-quality datasets for AI and ML research.

    To become a sponsor:

    GitHub Sponsors
    Buy me a coffee

    You can also sponsor us by downloading our free application, Etiqueta, to your devices:

    Etiqueta on iOS or Apple Chip Macs
    Etiqueta on Android

    This repo contains segmented images for the labeled part of the STL-10 Dataset.
    If you are looking for STL10-Labeled variant of the dataset, refer here: STL10-Labeled.
    More information on the original STL-10 dataset can be found here.
    Thanks to Martin Tutek, the original STL-10 dataset can be downloaded via the Python code in this repo. For convenience, this code is copied into stl10.py in this repo.

    If you use this dataset in your research please do not forget to cite:

    @techreport{yagli2025etiqueta,
      author      = {Semih Yagli},
      title       = {Etiqueta: AI-Aided, Gamified Data Labeling to Label and Segment Data},
      year        = {2025},
      number      = {TR-2025-0001},
      address     = {NJ, USA},
      month       = apr,
      url         = {https://www.aidatalabel.com/technical_reports/aidatalabel_tr_2025_0001.pdf},
      institution = {AI Data Label},
    }
    
    @inproceedings{coates2011analysis,
      title     = {An analysis of single-layer networks in unsupervised feature learning},
      author    = {Coates, Adam and Ng, Andrew and Lee, Honglak},
      booktitle = {Proceedings of the fourteenth international conference on artificial intelligence and statistics},
      pages     = {215--223},
      year      = {2011},
      organization={JMLR Workshop and Conference Proceedings}
    }
    

    Note: If you notice any errors and/or if you have comments/ideas relevant to this dataset or Etiqueta in general, please reach out to me at contact@aidatalabel.com.

    Instructions

    For Original Data:

    You can download the stl10 image data by running

    python stl10.py

    This will:

    1. Create a folder named data, then download and extract the stl10 dataset inside that folder.
    2. Show one example picture in a new window.

    Reading images and their labels is also demonstrated inside stl10.py. For example, you can load all test images and their labels into a numpy array by using:

    import stl10
    
    img_test_X_bin_loc = "./data/stl10_binary/test_X.bin"
    img_test_y_bin_loc = "./data/stl10_binary/test_y.bin"
    
    test_X = stl10.read_all_images(img_test_X_bin_loc)
    test_y = stl10.read_labels(img_test_y_bin_loc)

    Additionally, other useful functions are readily defined inside stl10.py.

    For Segmentation Data:

    Segmentation of each image in the test_X.bin file can be found inside the provided .json files. Note that images that contain more than a single segment are segmented using different labels.
    The combined segmentations can be recovered by running:

    python recoverSegmentations.py 

    By default, this will create test_X_segmented.npy, which contains the cutouts of images in the test part of stl10, as depicted in the Examples section below.

    • Depending on your device specifications, the above should take roughly 10 minutes to complete. The segmentation progress will be printed out, so you can go get a coffee while the segmented data is being saved. Feel free to modify the script depending on your needs.

    You can load this numpy array by using:

    from numpy import load 
    
    DEFAULT_SEGMENTED_TEST_X_SAVE_LOC = "./test_X_segmented.npy"
    
    test_X_segmented = load(DEFAULT_SEGMENTED_TEST_X_SAVE_LOC)

    Observe that the arrays inside test_X_segmented are now sparse.
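Because the background pixels of each cutout are zeroed, you can measure how sparse a cutout is directly. A small illustrative snippet, using a synthetic stand-in array since the real test_X_segmented.npy is not bundled here:

```python
import numpy as np

# Synthetic stand-in for one 96x96 RGB cutout from test_X_segmented.npy:
# background pixels are zero; only the segmented object keeps its values.
cutout = np.zeros((96, 96, 3), dtype=np.uint8)
cutout[30:60, 30:60, :] = 128  # pretend this 30x30 square is the object

# Fraction of pixels belonging to the segment (non-zero in any channel)
mask = cutout.any(axis=-1)
coverage = float(mask.mean())
print(round(coverage, 4))  # 900 of 9216 pixels → prints 0.0977
```

On the real arrays, averaging this over all cutouts gives a quick sense of how much of each image the segmented objects occupy.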

    Enjoy!

    Examples

    Class airplane bird car cat deer
    original airplane bird car cat deer
    segmented airplane bird car cat deer
    Class dog horse monkey ship truck
    original dog horse monkey ship truck
    segmented dog horse monkey ship truck

    Notes:

    We have caught the following errors in the test part of the STL-10 dataset:

    1495: cat_0 mark is in fact a dog_0.
    6417: cat_0 mark is in fact a dog_0.
    1718: cat_1 mark is in fact a dog_0.
    1138: dog_1 mark is in fact a cat_0.
    1484: dog_1, dog_2, and dog_3 are in fact sheep_0, sheep_1, sheep_2.
    6566: dog_0 and dog_1 marks are in fact cat_0, and dog_0.
    7902: dog_0 and dog_1 marks are in fact cat_0, and dog_0.

    Visit original content creator repository https://github.com/semihyagli/STL10_Segmentation
  • seq2covvec

    seq2covvec: Coverage vector generation for binning long reads metagenomic datasets

    🛑 a newer, faster and a complete tool is on the way – https://github.com/anuradhawick/kmertools

    Coverage vector computation algorithm presented in MetaBCC-LR.
    Computations are much faster now due to several improvements we have made. More flexibility for the vectors is also included.

    Supports k-mer sizes from 11 to 31. Higher k values demand more memory. For PacBio HiFi and Oxford Nanopore Q20+ reads, try k values above 19; otherwise use 15 or lower.
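As a rough illustration of the underlying idea from MetaBCC-LR (not the tool’s actual C++ implementation): count every canonical k-mer across the dataset, then for each read bin the dataset-wide counts of its k-mers into a fixed-length histogram, which becomes that read’s coverage vector. The bin_size and bin_count parameters mirror the tool’s --bin-size and --bin-count options:

```python
from collections import Counter

def canonical_kmers(seq, k):
    """Yield canonical k-mers: the lexicographic minimum of each k-mer
    and its reverse complement, as k-mer counters typically store them."""
    comp = str.maketrans("ACGT", "TGCA")
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        rc = kmer.translate(comp)[::-1]
        yield min(kmer, rc)

def coverage_vector(read, counts, k, bin_size, bin_count):
    """Histogram of the dataset-wide counts of this read's k-mers."""
    vec = [0] * bin_count
    for kmer in canonical_kmers(read, k):
        b = min(counts[kmer] // bin_size, bin_count - 1)  # clamp last bin
        vec[b] += 1
    return vec

# Toy dataset: two near-identical reads plus one unrelated read
reads = ["ACGTACGTACGT", "ACGTACGTACGA", "TTTTTTTTTTTT"]
k = 5
counts = Counter(km for r in reads for km in canonical_kmers(r, k))
print(coverage_vector(reads[0], counts, k, bin_size=2, bin_count=4))
# → [0, 0, 0, 8]: all 8 k-mers of the first read are high-coverage
```

Reads drawn from the same species at similar abundance end up with similar histograms, which is what makes these vectors useful for binning.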

    Future improvements to follow include:

    • Faster buffered hashing of k-mers, and k-mer count thresholds to avoid too-abundant or too-scarce k-mers.
    • Memory maps for faster IO.
    • Support for different files for indexing k-mers and computing vectors, e.g., potential use for Illumina assemblies or contigs in general.
    • Code cleanup: the code currently needs -std=c++17 to compile some inline functions; this will be fixed along with the above improvements.

    Build

    sh build.sh (Linux)
    

    Or

    sh build.sh osx (MacOS)
    

    Check help

    ./seq2covvec -h
    
    usage: seq2covvec.py [-h] --reads-path READS_PATH [--k-size [11-31]]
                         [--bin-size BIN_SIZE] [--bin-count BIN_COUNT]
                         [--threads THREADS] --output OUTPUT
    
    Convert sequences into coverage vectors. Supports k-mer sizes from 11-31.
    
    optional arguments:
      -h, --help            show this help message and exit
      --reads-path READS_PATH, -r READS_PATH
                            Reads path for binning
      --k-size [11-31], -k [11-31]
                            K size for the coverage histogram.
      --bin-size BIN_SIZE, -bs BIN_SIZE
                            Bin size for the coverage histogram.
      --bin-count BIN_COUNT, -bc BIN_COUNT
                            Number of bins for the coverage histogram.
      --threads THREADS, -t THREADS
                            Thread count for computations
      --output OUTPUT, -o OUTPUT
                            Output file name
    

    Citation

    @article{10.1093/bioinformatics/btaa441,
        author = {Wickramarachchi, Anuradha and Mallawaarachchi, Vijini and Rajan, Vaibhav and Lin, Yu},
        title = "{MetaBCC-LR: metagenomics binning by coverage and composition for long reads}",
        journal = {Bioinformatics},
        volume = {36},
        number = {Supplement_1},
        pages = {i3-i11},
        year = {2020},
        month = {07},
        abstract = "{Metagenomics studies have provided key insights into the composition and structure of microbial communities found in different environments. Among the techniques used to analyse metagenomic data, binning is considered a crucial step to characterize the different species of micro-organisms present. The use of short-read data in most binning tools poses several limitations, such as insufficient species-specific signal, and the emergence of long-read sequencing technologies offers us opportunities to surmount them. However, most current metagenomic binning tools have been developed for short reads. The few tools that can process long reads either do not scale with increasing input size or require a database with reference genomes that are often unknown. In this article, we present MetaBCC-LR, a scalable reference-free binning method which clusters long reads directly based on their k-mer coverage histograms and oligonucleotide composition.We evaluate MetaBCC-LR on multiple simulated and real metagenomic long-read datasets with varying coverages and error rates. Our experiments demonstrate that MetaBCC-LR substantially outperforms state-of-the-art reference-free binning tools, achieving ∼13\\% improvement in F1-score and ∼30\\% improvement in ARI compared to the best previous tools. Moreover, we show that using MetaBCC-LR before long-read assembly helps to enhance the assembly quality while significantly reducing the assembly cost in terms of time and memory usage. The efficiency and accuracy of MetaBCC-LR pave the way for more effective long-read-based metagenomics analyses to support a wide range of applications.The source code is freely available at: https://github.com/anuradhawick/MetaBCC-LR.Supplementary data are available at Bioinformatics online.}",
        issn = {1367-4803},
        doi = {10.1093/bioinformatics/btaa441},
        url = {https://doi.org/10.1093/bioinformatics/btaa441},
        eprint = {https://academic.oup.com/bioinformatics/article-pdf/36/Supplement\_1/i3/33488763/btaa441.pdf},
    }

    Visit original content creator repository
    https://github.com/anuradhawick/seq2covvec

  • fitjunction

    fitjunction

    Fitjunction uses the Fitbit API to periodically query your activity data and store it in a local MySQL database for safe-keeping or further analysis.

    Motivation

    I’ve logged a lot of fitness data in the past years. Jogging, weight lifting, steps and heart rate to name a few. Some of that data is lost forever in services that have since been discontinued. To get the most out of my data collection efforts I’m in the process of creating a self-hosted centralized database of all quantified-self metrics that are of interest to me. Fitbit is the first step.

    Fitjunction will extract more detailed data from your Fitbit account than the built-in export function will. Whether you’d like to run more advanced analytics or you just want to have an offline backup fitjunction will give your data back to you. To that end the database structure has been kept close to the structure provided by Fitbit and the raw data for every single day queried from the Fitbit API is stored as a .json file.

    Fitbit API setup

    You’ll need to create your own Fitbit app for this but it only takes a few minutes. Go to https://dev.Fitbit.com/apps and create an app with the following settings:

    • OAuth 2.0 Application Type: Personal
    • Callback URL: The URL should lead to the machine where you’re running fitjunction. If your machine isn’t reachable from the internet, you can get around this by entering a URL that doesn’t exist and changing it in your browser to http://localhost/ during authentication.
    • Default Access Type: Read-Only

    Docker deployment

    Requirements

    • A MySQL server
    • If you have multiple web applications running, a reverse proxy like nginx or traefik

    Deploying the container

    1. Create directories for config and result_history.
    2. Execute install/createdatabase.sql on your MySQL server to create the qsaggregator db and fill it with default values.
    3. Fill out install/config.sample.js and copy it to /config.js. Make sure the MySQL user you enter in the config has access to the qsaggregator database.
    4. Run the container with the previously created directories mounted into it.

    docker run -d -p 80:80 --name fitjunction \
    -v /opt/fitjunction/config:/fitjunction/config \
    -v /opt/fitjunction/result_history:/fitjunction/result_history \
    -v /etc/localtime:/etc/localtime:ro \
    fourandone/fitjunction
    

    Manual Installation

    1. Download this repository and run “npm install” in root directory.
    2. Execute install/createdatabase.sql on your MySQL server to create the qsaggregator db and fill it with default values.
    3. Fill out install/config.sample.js and copy it to /config/config.js. Make sure the MySQL user you enter in the config has access to the qsaggregator database.

    Usage

    1. Run “node main.js” in root directory.
    2. Go to http://localhost/?mode=auth to start authorization.
    3. fitjunction will periodically update the database. After a few hours or days it will have caught up to the current day, and it will keep updating the current day as new data is added on the Fitbit website.

    Visit original content creator repository
    https://github.com/04nd01/fitjunction