Comments (9)
Maybe YouTube's RSS feed won't ever distinguish shorts from regular videos.
from glance.
An idea: extract the ytp-time-duration value from the video page HTML. Add this function:
package feed
import (
	"fmt"
	"io"
	"net/http"
	"regexp"
	"strconv"
	"time"
)

// fetchVideoDurationFromHTML fetches the video duration by parsing the HTML of the video page.
// Note: the regex only handles MM:SS durations; videos an hour or longer would need HH:MM:SS handling.
func fetchVideoDurationFromHTML(videoURL string) (time.Duration, error) {
	resp, err := http.Get(videoURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}

	// Regular expression to find the duration in the HTML
	re := regexp.MustCompile(`class="ytp-time-duration">(\d+):(\d+)</span>`)
	matches := re.FindStringSubmatch(string(body))
	if len(matches) < 3 {
		return 0, fmt.Errorf("duration not found")
	}

	// Parse minutes and seconds
	minutes, err := strconv.Atoi(matches[1])
	if err != nil {
		return 0, err
	}
	seconds, err := strconv.Atoi(matches[2])
	if err != nil {
		return 0, err
	}

	duration := time.Duration(minutes)*time.Minute + time.Duration(seconds)*time.Second
	return duration, nil
}
Update FetchYoutubeChannelUploads
package feed
import (
	"fmt"
	"log/slog"
	"net/http"
	"net/url"
	"strings"
	"time"
)

type youtubeFeedResponseXml struct {
	Channel     string `xml:"title"`
	ChannelLink struct {
		Href string `xml:"href,attr"`
	} `xml:"link"`
	Videos []struct {
		Title     string `xml:"title"`
		Published string `xml:"published"`
		Link      struct {
			Href string `xml:"href,attr"`
		} `xml:"link"`
		Group struct {
			Thumbnail struct {
				Url string `xml:"url,attr"`
			} `xml:"http://search.yahoo.com/mrss/ thumbnail"`
		} `xml:"http://search.yahoo.com/mrss/ group"`
	} `xml:"entry"`
}

func parseYoutubeFeedTime(t string) time.Time {
	parsedTime, err := time.Parse("2006-01-02T15:04:05-07:00", t)
	if err != nil {
		return time.Now()
	}
	return parsedTime
}
func FetchYoutubeChannelUploads(channelIds []string, videoUrlTemplate string) (Videos, error) {
	requests := make([]*http.Request, 0, len(channelIds))
	for i := range channelIds {
		request, _ := http.NewRequest("GET", "https://www.youtube.com/feeds/videos.xml?channel_id="+channelIds[i], nil)
		requests = append(requests, request)
	}

	job := newJob(decodeXmlFromRequestTask[youtubeFeedResponseXml](defaultClient), requests).withWorkers(30)
	responses, errs, err := workerPoolDo(job)
	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrNoContent, err)
	}

	videos := make(Videos, 0, len(channelIds)*15)
	var failed int

	for i := range responses {
		if errs[i] != nil {
			failed++
			slog.Error("Failed to fetch youtube feed", "channel", channelIds[i], "error", errs[i])
			continue
		}

		response := responses[i]

		for j := range response.Videos {
			video := &response.Videos[j]
			videoURL := video.Link.Href

			// Fetch the video duration; skip the video if the lookup fails
			duration, err := fetchVideoDurationFromHTML(videoURL)
			if err != nil {
				continue
			}

			// Skip shorts based on title and duration
			if strings.Contains(video.Title, "#shorts") || duration <= 60*time.Second {
				continue
			}

			if videoUrlTemplate != "" {
				parsedUrl, err := url.Parse(videoURL)
				if err == nil {
					videoURL = strings.ReplaceAll(videoUrlTemplate, "{VIDEO-ID}", parsedUrl.Query().Get("v"))
				} else {
					videoURL = "#"
				}
			}

			videos = append(videos, Video{
				ThumbnailUrl: video.Group.Thumbnail.Url,
				Title:        video.Title,
				Url:          videoURL,
				Author:       response.Channel,
				AuthorUrl:    response.ChannelLink.Href + "/videos",
				TimePosted:   parseYoutubeFeedTime(video.Published),
			})
		}
	}

	if len(videos) == 0 {
		return nil, ErrNoContent
	}

	videos.SortByNewest()

	if failed > 0 {
		return videos, fmt.Errorf("%w: missing videos from %d channels", ErrPartialContent, failed)
	}

	return videos, nil
}
Added a function fetchVideoDurationFromHTML to fetch video duration by parsing the HTML content of the video page.
Updated FetchYoutubeChannelUploads to filter videos by duration in addition to the existing title check.
@bigsk1 How do we deal with the delays? The fetchVideoDurationFromHTML function requires an additional network request per video, and there is also the CPU cost of running a regex over the HTML (850 KB+).
Perhaps leaving the processing to Chrome would be a better choice; please refer to: iframe_api
> Perhaps leaving it to Chrome for processing would be a better choice, please refer to: iframe_api
Yes, it's a lot to fetch and parse all of that HTML. I also found out while scraping that ytp-time-duration isn't actually present in the served markup anyway.
What about this approach? Need to see the size of the requests.
It fetches the video duration by making an HTTP request to the video page and extracting the duration from the embedded ytInitialPlayerResponse JSON object. This approach filters out YouTube Shorts by checking the video duration against the metadata embedded in the HTML.
package feed
import (
	"fmt"
	"io"
	"log/slog"
	"net/http"
	"net/url"
	"regexp"
	"strconv"
	"strings"
	"time"
)

type youtubeFeedResponseXml struct {
	Channel     string `xml:"title"`
	ChannelLink struct {
		Href string `xml:"href,attr"`
	} `xml:"link"`
	Videos []struct {
		Title     string `xml:"title"`
		Published string `xml:"published"`
		Link      struct {
			Href string `xml:"href,attr"`
		} `xml:"link"`
		Group struct {
			Thumbnail struct {
				Url string `xml:"url,attr"`
			} `xml:"http://search.yahoo.com/mrss/ thumbnail"`
		} `xml:"http://search.yahoo.com/mrss/ group"`
	} `xml:"entry"`
}

func parseYoutubeFeedTime(t string) time.Time {
	parsedTime, err := time.Parse("2006-01-02T15:04:05-07:00", t)
	if err != nil {
		return time.Now()
	}
	return parsedTime
}

// FetchVideoDuration fetches the duration of a YouTube video using the embedded metadata
func FetchVideoDuration(videoID string) (time.Duration, error) {
	resp, err := http.Get("https://www.youtube.com/watch?v=" + videoID)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}

	// Find the lengthSeconds field of the ytInitialPlayerResponse JSON object in the HTML
	re := regexp.MustCompile(`"lengthSeconds":"(\d+)"`)
	matches := re.FindStringSubmatch(string(body))
	if len(matches) < 2 {
		return 0, fmt.Errorf("duration not found")
	}

	seconds, err := strconv.Atoi(matches[1])
	if err != nil {
		return 0, err
	}

	return time.Duration(seconds) * time.Second, nil
}
func FetchYoutubeChannelUploads(channelIds []string, videoUrlTemplate string) (Videos, error) {
	requests := make([]*http.Request, 0, len(channelIds))
	for i := range channelIds {
		request, _ := http.NewRequest("GET", "https://www.youtube.com/feeds/videos.xml?channel_id="+channelIds[i], nil)
		requests = append(requests, request)
	}

	job := newJob(decodeXmlFromRequestTask[youtubeFeedResponseXml](defaultClient), requests).withWorkers(30)
	responses, errs, err := workerPoolDo(job)
	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrNoContent, err)
	}

	videos := make(Videos, 0, len(channelIds)*15)
	var failed int

	for i := range responses {
		if errs[i] != nil {
			failed++
			slog.Error("Failed to fetch youtube feed", "channel", channelIds[i], "error", errs[i])
			continue
		}

		response := responses[i]

		for j := range response.Videos {
			video := &response.Videos[j]

			// Extract video ID from URL
			parsedUrl, err := url.Parse(video.Link.Href)
			if err != nil {
				slog.Error("Failed to parse video URL", "url", video.Link.Href, "error", err)
				continue
			}
			videoID := parsedUrl.Query().Get("v")
			if videoID == "" {
				slog.Error("Failed to extract video ID from URL", "url", video.Link.Href)
				continue
			}

			// Fetch video duration
			duration, err := FetchVideoDuration(videoID)
			if err != nil {
				slog.Error("Failed to fetch video duration", "videoID", videoID, "error", err)
				continue
			}

			// Skip shorts based on duration
			if duration <= 60*time.Second {
				continue
			}

			var videoUrl string
			if videoUrlTemplate == "" {
				videoUrl = video.Link.Href
			} else {
				videoUrl = strings.ReplaceAll(videoUrlTemplate, "{VIDEO-ID}", videoID)
			}

			videos = append(videos, Video{
				ThumbnailUrl: video.Group.Thumbnail.Url,
				Title:        video.Title,
				Url:          videoUrl,
				Author:       response.Channel,
				AuthorUrl:    response.ChannelLink.Href + "/videos",
				TimePosted:   parseYoutubeFeedTime(video.Published),
			})
		}
	}

	if len(videos) == 0 {
		return nil, ErrNoContent
	}

	videos.SortByNewest()

	if failed > 0 {
		return videos, fmt.Errorf("%w: missing videos from %d channels", ErrPartialContent, failed)
	}

	return videos, nil
}
I also have an idea for a Cloudflare Worker to get the video length and do the heavy lifting.
Cloudflare Worker: the worker fetches the HTML of a YouTube video page, extracts the duration, and returns it.
The Glance application sends a request to the Cloudflare Worker for each video ID to get the duration, then filters out short videos.
The worker script fetches the HTML content of the YouTube video page and extracts the duration from the ytInitialPlayerResponse object.
Worker Script
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  const videoId = url.searchParams.get('videoId')

  if (!videoId) {
    return new Response('Missing videoId parameter', { status: 400 })
  }

  try {
    const response = await fetch(`https://www.youtube.com/watch?v=${videoId}`)
    const html = await response.text()

    // Extract video duration from HTML
    const durationMatch = html.match(/"lengthSeconds":"(\d+)"/)
    if (!durationMatch || durationMatch.length < 2) {
      return new Response('Duration not found', { status: 404 })
    }

    const durationSeconds = parseInt(durationMatch[1], 10)

    return new Response(JSON.stringify({ duration: durationSeconds }), {
      headers: { 'Content-Type': 'application/json' }
    })
  } catch (error) {
    return new Response('Error fetching video duration', { status: 500 })
  }
}
youtube.go
package feed
import (
	"encoding/json"
	"fmt"
	"log/slog"
	"net/http"
	"net/url"
	"strings"
	"time"
)

type youtubeFeedResponseXml struct {
	Channel     string `xml:"title"`
	ChannelLink struct {
		Href string `xml:"href,attr"`
	} `xml:"link"`
	Videos []struct {
		Title     string `xml:"title"`
		Published string `xml:"published"`
		Link      struct {
			Href string `xml:"href,attr"`
		} `xml:"link"`
		Group struct {
			Thumbnail struct {
				Url string `xml:"url,attr"`
			} `xml:"http://search.yahoo.com/mrss/ thumbnail"`
		} `xml:"http://search.yahoo.com/mrss/ group"`
	} `xml:"entry"`
}

func parseYoutubeFeedTime(t string) time.Time {
	parsedTime, err := time.Parse("2006-01-02T15:04:05-07:00", t)
	if err != nil {
		return time.Now()
	}
	return parsedTime
}

// FetchVideoDuration fetches the duration of a YouTube video using the Cloudflare Worker
func FetchVideoDuration(videoID string) (time.Duration, error) {
	workerURL := fmt.Sprintf("https://YOUR_WORKER_SUBDOMAIN.workers.dev?videoId=%s", videoID)

	resp, err := http.Get(workerURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var result struct {
		Duration int `json:"duration"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return 0, err
	}

	return time.Duration(result.Duration) * time.Second, nil
}
func FetchYoutubeChannelUploads(channelIds []string, videoUrlTemplate string) (Videos, error) {
	requests := make([]*http.Request, 0, len(channelIds))
	for i := range channelIds {
		request, _ := http.NewRequest("GET", "https://www.youtube.com/feeds/videos.xml?channel_id="+channelIds[i], nil)
		requests = append(requests, request)
	}

	job := newJob(decodeXmlFromRequestTask[youtubeFeedResponseXml](defaultClient), requests).withWorkers(30)
	responses, errs, err := workerPoolDo(job)
	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrNoContent, err)
	}

	videos := make(Videos, 0, len(channelIds)*15)
	var failed int

	for i := range responses {
		if errs[i] != nil {
			failed++
			slog.Error("Failed to fetch youtube feed", "channel", channelIds[i], "error", errs[i])
			continue
		}

		response := responses[i]

		for j := range response.Videos {
			video := &response.Videos[j]

			// Extract video ID from URL
			parsedUrl, err := url.Parse(video.Link.Href)
			if err != nil {
				slog.Error("Failed to parse video URL", "url", video.Link.Href, "error", err)
				continue
			}
			videoID := parsedUrl.Query().Get("v")
			if videoID == "" {
				slog.Error("Failed to extract video ID from URL", "url", video.Link.Href)
				continue
			}

			// Fetch video duration
			duration, err := FetchVideoDuration(videoID)
			if err != nil {
				slog.Error("Failed to fetch video duration", "videoID", videoID, "error", err)
				continue
			}

			// Skip shorts based on duration
			if duration <= 60*time.Second {
				continue
			}

			var videoUrl string
			if videoUrlTemplate == "" {
				videoUrl = video.Link.Href
			} else {
				videoUrl = strings.ReplaceAll(videoUrlTemplate, "{VIDEO-ID}", videoID)
			}

			videos = append(videos, Video{
				ThumbnailUrl: video.Group.Thumbnail.Url,
				Title:        video.Title,
				Url:          videoUrl,
				Author:       response.Channel,
				AuthorUrl:    response.ChannelLink.Href + "/videos",
				TimePosted:   parseYoutubeFeedTime(video.Published),
			})
		}
	}

	if len(videos) == 0 {
		return nil, ErrNoContent
	}

	videos.SortByNewest()

	if failed > 0 {
		return videos, fmt.Errorf("%w: missing videos from %d channels", ErrPartialContent, failed)
	}

	return videos, nil
}
Hey,
This is something that I've been annoyed by as well ever since I added the videos widget. I don't know of a reasonable way to solve this problem that doesn't involve using YouTube's API.
Having to make an extra request for every single video is extremely inefficient and would either result in timeouts, slowed page loads or hitting rate limits. I have 37 channels added to one of my videos widgets, at 15 videos per feed that's 555 extra requests. Those requests would prevent the entire page from loading until they're done. I'm sure there's plenty of people with widgets that have more than 37 channels in them which would exacerbate the issue even further.
Maybe someone can chime in with a different approach to tackling this problem.
I wonder if public or self-hosted Invidious instances could be used as an alternative way to get videos, since their API exposes video metadata and a way to filter video types directly:
- https://docs.invidious.io/search-filters/
- https://github.com/iv-org/documentation/blob/master/docs/api/common_types.md
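Per the common types documentation, Invidious video objects carry a lengthSeconds field, so a single channel request would return durations for every video with no per-video lookups. A rough sketch of the filtering side, assuming a JSON array of video objects as the input (the exact endpoint shape and the sample payload here are assumptions, not verified against a live instance):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// invidiousVideo mirrors a subset of Invidious's video object common type;
// lengthSeconds lets us drop shorts without any extra per-video request.
type invidiousVideo struct {
	Title         string `json:"title"`
	VideoId       string `json:"videoId"`
	LengthSeconds int    `json:"lengthSeconds"`
}

// filterShorts decodes a JSON array of video objects and keeps only
// videos longer than minSeconds.
func filterShorts(body []byte, minSeconds int) ([]invidiousVideo, error) {
	var all []invidiousVideo
	if err := json.Unmarshal(body, &all); err != nil {
		return nil, err
	}
	kept := all[:0]
	for _, v := range all {
		if v.LengthSeconds > minSeconds {
			kept = append(kept, v)
		}
	}
	return kept, nil
}

func main() {
	// Hypothetical sample payload; in practice this would come from an
	// Invidious instance's channel videos API response.
	sample := []byte(`[
		{"title": "Full video", "videoId": "aaa", "lengthSeconds": 754},
		{"title": "Quick tip #shorts", "videoId": "bbb", "lengthSeconds": 42}
	]`)

	videos, err := filterShorts(sample, 60)
	if err != nil {
		panic(err)
	}
	for _, v := range videos {
		fmt.Println(v.VideoId, v.Title) // prints: aaa Full video
	}
}
```

The appeal is that the duration arrives in the same response as the feed itself, addressing the extra-requests concern above, at the cost of depending on an Invidious instance's availability.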
I would also like to share my support for adding a way to block shorts please.