hbarnetworks

Members
  • Posts

    131
  • Joined

  • Days Won

    14

hbarnetworks last won the day on April 30

hbarnetworks had the most liked content!

2 Followers


hbarnetworks's Achievements

Enthusiast

Enthusiast (6/14)

  • Dedicated
  • Conversation Starter
  • Reacting Well
  • First Post
  • Collaborator (Rare)

Recent Badges

42

Reputation

  1. I do this kind of thing daily myself in order to scale multiple applications. It doesn't matter what it is, there is always a way. But with the way caching/uploading of content works in KVS it's a bit of a challenge, and it's not for the average person. I am not against the way they do it; their way of caching works well.
  2. The fastest route would be to just SSH into the server and run an rsync between the servers; if both ends have 10 Gbit it can go very quickly. KVS support note: This is true, but it is not related to the initial question, which seems to ask about migrating content from another script to KVS rather than migrating videos from one KVS server to another. When migrating data from another script, the data is not in the same structure that KVS uses, so all videos have to pass through the KVS conversion engine anyway in order to be imported into the needed structure.
  3. nginx can easily stream to 5,000 users on 3-4 cores; disk speed is usually the biggest factor. If the disk configuration can't sustain 10 Gbit/s there is no point in the setup. 500 bucks sounds good, but be aware that for 1,000-1,500 you can rent a server that can actually handle 10 Gbit with SSD drives. So it's up to you to do the math here.
  4. Be aware that 4x18 TB HDDs can never reach 10 Gbit/s, especially if RAID is involved. Even with the fastest RAID level that still has redundancy, RAID 10, 4x18 TB tops out at 2-3 Gbit/s under the best conditions. 5,000 concurrent users at 2-3 Mbit/s each works out to 10-15 Gbit/s, so sending 5,000 users to that streaming server will saturate I/O and bandwidth and slow it down to 1-2 Gbit/s. This configuration will not work; that is also why it only costs 500 bucks, which is too cheap for a 10 Gbit connection. A decent server goes for 1k+.
  5. Well, I now use a janky solution called sshfs 😅. It functions as expected, but the more load gets added the more problematic it gets; it's not designed for extreme loads. The main reason why /tmp and /content need to be shared is as follows. The tmp folder is the upload folder where members upload. If the upload lands on a different server that the main server can't access, the upload fails because KVS can't find the file (this is solvable with an NGINX rule if needed; see the upload sketch after this list). The content folder gets too big to constantly rsync between servers, and duplicating it is a waste of storage space, so sharing it with the other servers removes that problem. KVS caching is very well designed, so it would be a shame to get rid of it.
  6. FYI, I tried multiple clustering solutions, but it seems the sheer number of cache files KVS generates always overwhelms whatever software is being used, be it cephfs, glusterfs or drbd. A distributed file system does seem to work when it comes to KVS, though; so far I have found no issues. My next attempt would be to mount the cache folders on a local drive and run KVS itself on a cephfs cluster to see how that performs. The only problem I can already foresee is that the cron would only clean caches on one server and not on the others, so this would require a separate cronjob on the other servers to clean them once in a while. Not extreme, but something to keep in mind. In theory this is a very simple solution and would make scaling out a lot easier (at least in theory).
  7. Just use a dedicated server for memcached and pump it full of RAM. That's pretty much the best solution to reduce SQL load.
  8. It definitely functions. I tested with a 1 Gbit upstream and four 1 Gbit cache servers. It eventually reaches a point where the origin isn't doing anything anymore, so based on this, caching with the existing configuration is possible.
  9. Well, it definitely seems to work. I am going to attempt this on a bigger scale with a 1 Gbit origin and a 10x1 Gbit cache spread over multiple servers, just to see how it holds up, and I will write a little guide on how to do it, since in hindsight it's pretty easy to set up. The beauty of this is that you can also use multiple upstreams in your config to pull from different origin servers, so you can't really have a single point of failure (see the multi-origin sketch after this list).
  10. Well, I made it a lot more difficult than it should have been. I just modified the remote_control.php file to use a remote time.php file; this way I can ignore the cache for the time check. And it seems to function. I will see if this actually works properly over the coming days.
  11. Well, maybe I didn't explain it correctly. This is the config I use in NGINX. The problem is that requests never reach the .mp4 location, because the video is served as a PHP file. If I remove the .mp4 location and put its directives in the .php location, the PHP file (remote_control.php) also gets cached, meaning the main server grabs a cached response and eventually stops working because the time offset is wrong after about 5 minutes.

      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include snippets/fastcgi-php.conf;
          add_header 'Cache-Control' 'no-store, no-cache, max-age=0';
          add_header X-Cache-Status $upstream_cache_status;
          proxy_pass http://upstream;

          location ~ \.mp4$ {
              slice 4m;
              proxy_cache cache;
              proxy_cache_valid 200 206 3d;
              proxy_cache_key $scheme$is_args$args$slice_range;
              proxy_set_header Range $slice_range;
              proxy_http_version 1.1;
              limit_rate 2m;
              limit_rate_after 2m;
          }
      }

      Maybe there is a way to just cache the arguments, but I have never done that before, so maybe someone on this forum has a clue? (See the cache-key sketch after this list for one direction.)
  12. Well, there are two issues: one can possibly be resolved through a header, but the other one pretty much can't. The problem is that remote_control.php also gets cached, and I can't exclude it from the cache either, because then it won't cache anything. This causes a time issue, since the echoed time response gets cached as well.
  13. Not sure if I can bring this up, but just in case: I have been attempting to get edge servers working with the current implementation by just using proxy_pass and the slice module in NGINX. It does work, but since the video file gets served through a PHP file (remote_control.php), the ETag is obviously never the same, so caching the video file is useless. The obvious way around this is to just use direct links, but that kind of defeats the purpose. I tried multiple ways to make the backend think it is possible, but then I would have to code it myself, which would kill future updates. So my conclusion is that it is not possible with the existing KVS setup. With HLS you will not need the slice module, since the video is already in chunks, but if the remote_control.php file is still being used, then it is still not possible. Just to future-proof my implementation: do you have any information on how that implementation would go? Would it still use the remote_control file, or would it run with something like a simple encryption key? If it is the former, I would need to rethink my strategy and make my own expiration solution (or just get rid of it); if it is the latter, I would just need to wait. Thanks in advance!
  14. Mate, run the video files on a separate server, not on your main servers. Whatever you are trying to do, don't try to cache video files in an NGINX buffer; that is never going to work (see the pass-through sketch after this list).
  15. Why PHP 7.2? There is no reason to use that; it has been end of life since the end of 2020.
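
For the NGINX rule mentioned in post 5, here is a minimal sketch of how a secondary web server could hand member uploads to the main KVS server so the temporary file ends up where KVS expects it. The upstream address, the port and the "/upload" path pattern are placeholders rather than KVS's actual endpoints, so check your own setup before reusing any of this.

    # Sketch only: "10.0.0.1" and the "/upload" pattern are placeholders.
    upstream kvs_main {
        server 10.0.0.1:80;                 # the main KVS server that owns the tmp folder
    }

    server {
        listen 80;
        server_name edge.example.com;

        # Hand member uploads to the main server so the temporary file is
        # written on the machine where KVS will look for it.
        location ~ /upload {
            client_max_body_size 4g;        # allow large video uploads
            proxy_request_buffering off;    # stream the body through instead of spooling it locally
            proxy_pass http://kvs_main;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }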
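
For the multi-origin idea from post 9, this is a rough sketch of an edge cache that can pull from more than one origin, reusing the slice/cache directives already shown in post 11. The host names, cache path and sizes are invented for the example.

    # Sketch only: adjust host names, cache path and sizes to your own servers.
    proxy_cache_path /var/cache/nginx/video levels=1:2 keys_zone=video:100m
                     max_size=500g inactive=3d use_temp_path=off;

    upstream kvs_origin {
        server origin1.example.com:80;
        server origin2.example.com:80 backup;   # only used when origin1 is unreachable
    }

    server {
        listen 80;
        server_name edge.example.com;

        location ~ \.mp4$ {
            slice 4m;                                    # fetch and cache in 4 MB slices
            proxy_cache video;
            proxy_cache_valid 200 206 3d;
            proxy_cache_key $uri$is_args$args$slice_range;
            proxy_set_header Range $slice_range;
            proxy_http_version 1.1;
            proxy_pass http://kvs_origin;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }

With two or more origins listed, a single origin failure no longer stops the edge from filling its cache, which is the failure tolerance post 9 describes.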
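
On the "cache the arguments" question from posts 11 and 12, one possible direction is sketched below: key the cache on the query string plus the slice range instead of relying on the ETag, and bypass the cache only for the time/control requests. It assumes the time check can be recognised by a query argument; the argument name "action", its "get_time" value, the "kvs_origin" upstream and the cache path are all invented for the example, so the real parameters remote_control.php receives would have to be checked first.

    # Sketch only: $arg_action / "get_time" and all host/path names are placeholders.
    proxy_cache_path /var/cache/nginx/kvs levels=1:2 keys_zone=cache:100m
                     max_size=200g inactive=3d use_temp_path=off;

    map $arg_action $skip_cache {
        default     0;
        get_time    1;          # hypothetical value sent by the time check
    }

    upstream kvs_origin {
        server origin.example.com:80;
    }

    server {
        listen 80;

        location = /remote_control.php {
            slice 4m;
            proxy_cache cache;
            proxy_cache_valid 200 206 3d;
            # Key on URI + query string + slice range: the arguments identify the
            # file, so the changing ETag no longer prevents cache hits.
            proxy_cache_key $uri$is_args$args$slice_range;
            proxy_set_header Range $slice_range;
            proxy_http_version 1.1;
            # Time/control requests go straight to the origin and are never stored.
            proxy_cache_bypass $skip_cache;
            proxy_no_cache $skip_cache;
            proxy_pass http://kvs_origin;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }

Whether this helps depends on whether the time response really can be told apart from the video requests; if it cannot, the remote time.php workaround from post 10 is probably the cleaner route.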
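
And on post 14's warning about NGINX buffers: if video has to pass through a frontend proxy at all, a plain pass-through like the sketch below at least avoids spooling the whole file into proxy buffers or temp files on that frontend. The dedicated video host name is a placeholder.

    # Sketch only: "video.example.com" stands for a separate, dedicated video host.
    server {
        listen 80;

        location ~ \.mp4$ {
            proxy_pass http://video.example.com;
            proxy_buffering off;        # pass bytes to the client as they arrive
                                        # instead of buffering the file on this server
            proxy_http_version 1.1;
        }
    }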