
Q&A Thread: For simple questions that don't need their own thread

Here you can ask questions so that the board is not clogged with small threads.

Old thread >>9327

Suggestions


Drag-and-drop windows with tag rules: show two windows side by side, where one window is programmed with the rule "ADD tag foo" and the other with the rule "REMOVE tag foo, ADD tag bar", and you can drag and drop files onto them.

Deriving tags from regexes over other tags/namespace tags: a file has the tag "filename:big_ugly_name", and a regex over that namespace could generate another tag.
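
A minimal sketch of what such a rule might do, in Python. The rule format, the 'title' namespace target, and the underscore-to-space conversion are all illustrative, not an existing hydrus feature:

```python
import re

# Hypothetical rule: derive a 'title:' tag from a 'filename:' tag via regex.
tag = "filename:big_ugly_name"
match = re.match(r"filename:(.+)", tag)
if match:
    derived = "title:" + match.group(1).replace("_", " ")
    # derived == "title:big ugly name"
```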

Tag sets with hotkeys: save a set of tags under a hotkey so it's quick to add them to a file while filtering

Opaque window behind tag list in the corner so it doesn't get hidden by picture background

Option to exclude certain mime types from the slideshow by default and only open them externally; this would help with videos that have odd codecs and don't preview correctly in the slideshow.

Option to specify the hamming distance in "find similar images": currently you can't change it once it's in the filter window, and you have to enter the hash manually in the "system:similar to" option.
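
For reference, hamming distance here is just the number of differing bits between two perceptual hashes; a lower maximum distance means a stricter similarity search. A rough sketch of the idea, not hydrus's internal code:

```python
def hamming_distance(phash_a: int, phash_b: int) -> int:
    # XOR leaves a 1 wherever the two 64-bit hashes disagree; count those bits.
    return bin(phash_a ^ phash_b).count("1")

print(hamming_distance(0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F1))  # 1 -> near-identical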

Site Request

I'm a new user to Hydrus and I'm kind of confused about how you create the images used to add a new downloader. Can someone either tell me how to make one or, even better, post an image for it? The website I'm requesting is "https://booru.allthefallen.moe/". Call me what you want.

Version 399

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v399/Hydrus.Network.399.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v399/Hydrus.Network.399.-.Windows.-.Installer.exe

macOS

app: https://github.com/hydrusnetwork/hydrus/releases/download/v399/Hydrus.Network.399.-.macOS.-.App.dmg

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v399/Hydrus.Network.399.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v399.tar.gz

I had a great week tidying up smaller issues before my vacation.

all small items this week

You can now clear a file's 'viewing stats' back to zero from their right-click menus. I expect to add an edit panel here in future. Also, I fixed an issue where duplicate filters were still counting viewing time even when set in the options not to.

When I plugged the new shortcuts system's mouse code into the media viewer last week, it accidentally worked too well–even clicks were being propagated from the hover windows to the media viewer! This meant that simple hover window clicks were triggering filter actions. It is fixed, and now only keyboard shortcuts will propagate. There are also some mouse wheel propagation fixes here, so if you wheel over the taglist, it shouldn't send a wheel (i.e. previous/next media) event up once you hit the end of the list, but if you wheel over some hover window greyspace, it should.
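In Qt terms, the rule is roughly the following. This is a simplified PySide2 sketch with illustrative class names, not hydrus's actual widgets: an event a widget does not accept propagates to its parent, so the taglist swallows wheel events even at the end of the list, while hover greyspace explicitly ignores them so the media viewer gets them.

```python
from PySide2 import QtWidgets

class HoverTagList(QtWidgets.QListWidget):
    def wheelEvent(self, event):
        # Scroll the list as normal, but always accept the event so it never
        # bubbles up to the media viewer as a previous/next media action.
        super().wheelEvent(event)
        event.accept()

class HoverGreyspace(QtWidgets.QWidget):
    def wheelEvent(self, event):
        # Ignore so the event propagates up to the media viewer underneath.
        event.ignore()
```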

File delete and undelete are now completely plugged into the shortcut system, with the formerly hardcoded delete key and shift+delete key moved to the 'media' shortcut set by default. Same for the media viewer's zoom_in and zoom_out and ctrl+mouse wheel, under the 'media viewer - all' set. Feel free to remap them.

The new tag autocomplete options under services->tag display and search now allow you to also search namespaces with a flat 'namespace:', no asterisk. The logic here is improved as well, with the 'ser'->'series:metroid' search type automatically assuming the 'namespace:' and 'namespace:*' options, with the checkboxes updating each other.

I fixed an issue created by the recent page layout improvements where the first page of a session load would have a preview window about twenty pixels too tall, which for some users' workflows was leading to slowly growing preview windows as they normally used and restarted the program. A related issue with pages nested inside 'page of pages' having too-short preview windows is also fixed. This issue may happen once more, but after one more restart, the client will fix the relevant option here.

If you have had some normal-looking files fail to import, with 'malformed' as the reason, but turning off the decompression bomb check allowed them, this issue is now fixed. The decomp bomb test was itself throwing an error in this case, which is now caught and ignored. I have also made the decomp bomb test more lax, and default off for new users–this thing has always caught more false positives than true, so I am now making it more an option for users who need it due to memory limitations than a safeguard for all.

advanced parsing changes

The HTML and JSON parsing formulae can now do negative indexing. So, if you need to select the '2nd <a> tag from the end of the list', you can now set -2 as the index to select. Also, the JSON formula can now index on JSON Objects (the key->value dictionaries), although due to technical limitations the list of keys is sorted before indexing, rather than selecting the data as-is in the JSON document.

Furthermore, JSON formulae that are set to get strings no longer pull a 'null' value as the (python) string 'None'. These entries are now ignored.
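A quick illustration of all three behaviours in plain Python. Hydrus's formulae are configured in the UI; this just shows the equivalent logic on a made-up document:

```python
import json

doc = json.loads('{"links": ["u1", "u2", "u3"], "meta": {"z": 1, "a": 2}, "note": null}')

# Negative indexing: select the 2nd item from the end of a list.
assert doc["links"][-2] == "u2"

# Indexing a JSON Object: keys are sorted before indexing, so index 0
# selects the value under "a", not the first key in document order ("z").
keys = sorted(doc["meta"])            # ['a', 'z']
assert doc["meta"][keys[0]] == 2

# A JSON null parses to Python None; when fetching strings, such entries
# are now skipped rather than becoming the literal string 'None'.
strings = [v for v in [doc["note"]] if v is not None]   # []
```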

I fixed an annoying issue when hitting ok on 'fixed string' String Matches. When I made the widgets hide and not overwrite the 'example string' input last week, I forgot to update the ok validation code. This is now fixed.

full list

- improvements:

- the media viewer and thumbnail _right-click->manage_ menus now have a _viewing stats->clear_ action, which does a straight-up delete of all viewing stats records for the selected files. 'edit' will be added to this menu in future

- extended the tag autocomplete options with a checkbox to allow 'namespace:' to match all tags, without the explicit asterisk

- tag autocomplete options now permit namespace searches if the 'search namespaces into full tags' option is set

- the tag autocomplete options panel now disables and checks the namespace checkboxes when one option overrules another

- cleaned up some tag search logic to recognise and deal with 'namespace:' as a query

- added some more unit tests for tag autocomplete options

- the html and json parsing formulae now support negative indexing, to select the nth last item from a list

- extended the '1 -> "1st"' ordinal string conversion code to deal with negative indices

- the 'hide tag' taglist menu actions are now wrapped in yes/no dialogs

- reduced the activation-to-click-accept time that the shortcuts handler uses to ignore activating clicks from 100ms to 17ms

- clicking the media viewer's top hover window's zoom buttons now forces the 'media viewer center' zoom centerpoint, so if you have the mouse centerpoint set, it won't zoom around the button where you are clicking!

- added a simple 8chan.moe watcher to the defaults, all users will get it on update

- the default bandwidth rules for download pages, subs, and watchers are now more liberal. only new users will get these. various improvements to the db and ui update pipeline mean the enforced breaks are less necessary

- when a manage tags dialog moves to another media, if it has a 'recent tags' suggestion list with a selection, the selection now resets to the top item in the list

- the mpv player now tracks when a video is fully loaded and only reports seek bar info and allows seeks when this is so (this should fix some seekbar errors on broken/slow-loading vids)

- added 'undelete_file' to media shortcut commands

- file delete and undelete are no longer hardcoded in the media viewer and media thumbnail grid. these actions are now handled entirely in the media shortcut set, and added to all clients by default (this defaults to (shift +) delete key, and also backspace on macos, so likely no changes)

- ctrl+mouse wheel is no longer hardcoded to zoom in the media browser. these actions are now handled entirely in the 'all' media viewer shortcut set (this defaults to ctrl+wheel or +/-, so likely no changes)

- deleted some old shortcut processing code

- tightened up some update timers to better halt work while the client is minimised to system tray. this _may_ improve some users' restore hanging issues

- as Qt is happier than wx about making pages on a non-visible client, subscriptions and various url import operations are now permitted to create pages while the client is minimised to taskbar or system tray. if this applies to your situation, please let me know how you get on here, since this may relieve some restore hanging now that pending new-file jobs are no longer queued up

- .

- fixes:

- clicks on hover window greyspace should no longer propagate up to the media viewer. this was causing weird archive/delete filter actions

- mouse scroll on hover window taglist should no longer propagate up to the media viewer when the taglist has no more to scroll in that direction

- fixed an issue that meant preview windows were initialising about twenty pixels too short for the first page loaded in a session, and also pages created within nested page of pages. also cleaned up some logic for unusual situations like hidden preview windows. one more cycle of closing and reopening the client will fix the option value here

- cleaned and unified some page sash setting code, also improving the 'hide preview window' option reliability for advanced actions

- fixed a bug that meant file viewtime was still being recorded on the duplicate filter when the special exception option was off

- reduced some file viewtime manager overhead

- fixed an issue with database repair code when local_tags_cache is missing

- fixed an issue where updating a very old db would not recognise that local_tags_cache did not yet exist for a legitimate reason, and would try to repair it before the update code ran

- fixed the annoying issue introduced in the recent string match overhaul where a 'fixed character' string match edit panel would not want to ok if the (now hidden) example string input did not have the same fixed char data. it now validates no matter what is in the hidden input

- potentially important parsing fix: JSON parsing, when set to get strings, no longer converts a 'null' value to 'None'

- the JSON parsing formula now allows you to select the nth indexed item of an Object (a JSON key->value dictionary). due to technical limitations, it alphabetises the keys, not selecting them as-is in the JSON itself

- images that do not load in PIL no longer cause mime exceptions if they are run through the decompression bomb check

- .

- misc:

- boosted the values of the decompression bomb check anyway, to reduce false positives. it now generally only objects to images whose decompressed bitmap would be larger than 1GB in memory

- by default, new file import options now start with decompression bombs allowed. this option is being reduced to a stopgap for users with less memory

- 'MimeException' is renamed to 'UnsupportedFileException'

- added 'DamagedOrUnusualFileException' to handle normally supported files that cannot be parsed or loaded

- 'SizeException' is split into 'TagSizeException' and 'FileSizeException'

- improved some file exception inheritance

- removed the 'experimental' label from sub-gallery page url type in parsing system

- updated some advanced help regarding bad files

- misc help updates

- updated cloudscraper to 1.2.40

next week

I am taking next week off. Normally I'd be shitposting E3, but instead I think I am going to finally get around to listening to the Ring Cycle through and giving Kingdom Come - Deliverance a go.

v400 will therefore be on the 10th of June. I hope to have the final part of the subscription data overhaul done, which will mean subscriptions load in less than a second, reducing how much data they need to read and write and ultimately making them more accessible for the Client API and things like right-click->add this query to subscription "blahbooru artists".

Thanks everyone!

Version 301

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.Windows.-.Installer.exe

os x

app: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.OS.X.-.App.dmg

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.OS.X.-.Extract.only.tar.gz

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v301/Hydrus.Network.301.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v301.tar.gz

I had a difficult week due to a bunch of IRL stress, but I got some good hydrus work done. The page of images downloader is now on the new parsing system, and I have prototyped a new way to 'gather' certain pages together.

simple downloader

The 'page of images downloader' is now the 'simple downloader'. It uses the new parsing system–which for very advanced users means that it uses parsing formulae–and so can find files from pages in much more flexible ways. At the moment, this means a dropdown with different parsers–you select the parser you want, paste some URLs in, and it should queue them up and fetch files all ok.

To get us started, I have written some basic parsers for it that can handle 4chan threads, 8chan threads (including 3-year-old threads that have some broken links on the new thread watcher), gfycat mp4s and webms, imgur still images and mp4s, and twitter images. I expect to write more parsers here myself, and I expect some other users will write some as well. It supports JSON as well as HTML parsing. I also want to write some more ui to make it easier to import and export new parsers.
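To give a feel for what one of these parsers does under the hood, here is a rough stand-alone equivalent using requests and BeautifulSoup. Hydrus's actual formulae are configured in the UI, and the extension filter below is just an illustration:

```python
import requests
from bs4 import BeautifulSoup

def parse_file_urls(page_url: str) -> list:
    # Fetch a thread/gallery page and pull every link to a media file.
    soup = BeautifulSoup(requests.get(page_url).text, "html.parser")
    exts = (".jpg", ".jpeg", ".png", ".gif", ".webm", ".mp4")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].lower().endswith(exts)]
```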

Note: The new simple downloader cannot yet do the old 'get the destination of image links' parse rule the old downloader could. If this is important to you, please hold off updating for a week–I hope to have it in for v302.

Please give this a go and let me know how it works for you and if any of my new presets fail in any situations. I am really pleased with how simple yet powerful this can be, and I look forward to deploying more of this new parsing stuff as I move on to overhauling galleries.

gathering pages

Right-clicking on a page of pages now gives you a new 'gather' option. This is intended to 'gather' all the pages of a certain state across your whole session and then line them up inside that page of pages. To begin with, this only allows gathering of dead/404 thread watchers, but it seems to work well.

There is obviously more that I can do here, so again please give it a go and let me know what you think. Gathering 'finished' downloader pages sounds like a sensible next step.

sankaku complex bandwidth

Sankaku Complex contacted me this week to report that they have recently been running into bandwidth problems, particularly with scrapers and other downloaders like hydrus. They were respectful in reaching out to me and I am sympathetic to their problem. After some discussion, rather than removing hydrus support for Sankaku entirely, I am in this version adding a new restrictive default bandwidth rule for the sankakucomplex.com domain of 64MB/day.

If you are a heavy Sankaku user, please bear with this limit until we can figure out some better solutions. If there is an easy way to move a subscription to another source or slow down some larger queues you have piled up, I am sure they would appreciate it a lot. I am told they plan to update their API to allow more intelligent program access in future, and while they have no way to donate right now to help with bandwidth costs, they also hope to roll out a subscription service in the coming months.

On the hydrus end, I have decided to fold some kind of donation-link ui into the ongoing downloader overhaul, something like a "Here is how to support this source: (LINK)" to highlight donation pages or "Hey, please keep it to <XMB a day, thank you" wiki pages for those users who wish to help the sites (and are also able to!). I also hope to get some better 'veto' options working in the new gallery downloaders so we can avoid downloading large gifs and other garbage that fits tag censorship lists and so on in the first place. Also, as Known URLs are handled in more intelligent ways in the client, it will soon make sense to create a Public URL Repo, at which point we'll be able to cut out a huge number of duplicate downloads and spread the bandwidth burden about just by sharing hash-URL mappings with each other. Not to mention the eventual nirvana when we can just have clients peer-to-peering each other directly.

What we are doing with hydrus is all new stuff, and I am often ignorant myself until I hear new perspectives on workflow or whatever, so please let me know what you think about this stuff. I am keen to find ways that we can continue accessing sites for files and tags and other metadata without it becoming a nuisance for others, and to figure out what practical and reasonable ongoing bandwidth rules actually look like for different situations.

misc

I fixed tag parents! I apologise for the inconvenience–when I optimised their load speed last week, I fucked it up and ended up loading them in the wrong way so they wouldn't display right.

The new system:known_url should load much faster in almost all situations.

There is a new 'subscription report mode' under help->debug->report modes. If you have subs that inexplicably aren't running, please give this a go and send me a clip from all the stuff it will print to your log.

full list

- after discussions with Sankaku Complex about their recent bandwidth problems, added a new 64MB/day default bandwidth rule for sankakucomplex.com–please check the release post for more information

- the 'page of images downloader' is now the 'simple downloader', which uses the new parsing system (particularly, a single formula to parse urls)

- the simple downloader supports multiple named parsers–currently defaulting to: html 4chan and 8chan threads, all images, gfycat mp4, gfycat webm, imgur image, imgur video, and twitter images (which fetches the :orig and also works on galleries!)

- there is some basic editing of these parsing formulae, but it isn't pretty or easy to import/export yet

- the new parsing test panel now has a 'link' button that lets you fetch test data straight from a URL

- added a 'gather to this page of pages->dead thread watchers' menu to the page of pages right-click menu–it searches for all 404/DEAD thread watchers in the current page structure and puts them in the clicked page of pages!

- cleaned up some page tab right-click menu layout and order

- fixed tag parents, which I previously broke while optimising their load time fugg

- the new favourites list now presents parents in 'write' tag contexts, like manage tags–see if you like it (maybe this is better if hidden?)

- sped up known_url searches for most situations

- fixed an unusual error when drag-and-dropping a focused collection thumbnail to a new page

- fixed a problem that was marking collected thumbnails' media as not eligible for the archive/delete filter

- wrote a 'subscription report mode' that will say some things about subscriptions and their internal test states as they try (and potentially fail) to run

- if a subscription query fails to find any files on its first sync, it will give a better text popup notification

- if a subscription query finds files in its initial sync but does not have bandwidth to download them, a FYI text popup notification will explain what happened and how to review estimated wait time

- delete key now deletes from file import status lists

- default downloader tag import options will now inherit the fetch_tags_even_if_url_known_and_file_already_in_db value more reliably from 'parent' default options objects (like 'general boorus'->'specific booru')

- the db maintenance routine 'clear file orphans' will now move files to a chosen location as it finds them (previously, it waited until the end of the search to do the move). if the user chooses to delete, this will still be put off until the end of the search (so a mid-search cancel event in this case remains harmless)

- the migrate database panel should now launch ok even if a location does not exist (it will also notify you about this)

- brushed up some help (and updated a screenshot) about tag import options

- fixed a problem that stopped some old manage parsing scripts ui (to content links) from opening correctly

- improved some parsing test code so it can't hang the client on certain network problems

- misc ui code updates

- misc refactoring

next week

I am spinning a lot of plates right now, but I also have a bit of spare time next week. I hope to catch up on my ongoing misc todo and also polish some of the new stuff that has come out recently. I also want to put some time into the gallery overhaul–maybe prepping for the ability to drag and drop arbitrary URLs onto the client.

Version 398

windows

zip: https://github.com/hydrusnetwork/hydrus/releases/download/v398/Hydrus.Network.398.-.Windows.-.Extract.only.zip

exe: https://github.com/hydrusnetwork/hydrus/releases/download/v398/Hydrus.Network.398.-.Windows.-.Installer.exe

macOS

app: https://github.com/hydrusnetwork/hydrus/releases/download/v398/Hydrus.Network.398.-.macOS.-.App.dmg

linux

tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v398/Hydrus.Network.398.-.Linux.-.Executable.tar.gz

source

tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v398.tar.gz

I had a good work week. Tag autocomplete gets some new search options, and advanced users who make downloaders get some new text processing tools.

tag autocomplete

When I recently overhauled the tag autocomplete pipeline, I eliminated some unusual logical hoops where you could accidentally fire off expensive searches that would fetch all tags. Now the code is clean, I am adding them back in as real options.

The main thing here is that services->tag display is now services->tag display and search. It has several new options to change search based on what the autocomplete's current 'tag domain' is (i.e. what the button on the dropdown says, "all known tags" or "my tags" or whatever else). The options are available for every specific tag domain and the "all known tags" domain, and only apply there.

There are three new search options: You can have full namespace lookup, so an input of 'ser' also finds 'series:metroid' and all other series tags; you can have an explicit input of 'series:*' show all 'series' tags; and you can have '*' show all tags. These queries are extremely expensive for a large service like the public tag repository (they could take minutes to complete, and eat a ton of memory and CPU), but they may be appropriate for a smaller domain like "my tags". Please feel free to play with them.
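In pattern terms, the three options expand roughly like this. A plain-Python sketch with fnmatch, not the actual database code:

```python
from fnmatch import fnmatchcase

tags = ["series:metroid", "series:zelda", "serious face", "character:samus"]

# Full namespace lookup: 'ser' is also treated as the namespace search 'ser*:*'.
print([t for t in tags if fnmatchcase(t, "ser*") or fnmatchcase(t, "ser*:*")])
# -> ['series:metroid', 'series:zelda', 'serious face']

# 'series:*': every tag in the namespace.
print([t for t in tags if fnmatchcase(t, "series:*")])
# -> ['series:metroid', 'series:zelda']

# '*': every tag -- cheap here, potentially minutes on the PTR.
print([t for t in tags if fnmatchcase(t, "*")])
```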

There are also a couple of clever options setting how 'write' autocompletes (the ones that add tags, like in the manage tags dialog) start up, based on the tag service of the page they are on. You can set them to start with a different file or tag domain. Most users will be happy with the defaults, which are to stick with the current tag domain and "all known files", but if you want to change that (e.g. some users like to get suggestions for "my tags" from the PTR, or they don't want tag counts from files not in "my files"), you now can. The old option under options->tags that did the "all known files" replacement for all write autocompletes is now removed.

I have optimised the database autocomplete search code to work better with '*' 'get everything' queries. In the right situation, these searches can be very fast. This logic is new, the first time I have supported it properly, so let me know if you discover any bugs.

string processing

This is only important for advanced users who write downloaders atm. It will come to the filename tagging panel in future.

I am plugging the new String Processor today into all parsing formulae. Instead of the old double-buttons of String Match and String Converter, these are now merged into one button that can have any combination of ordered Matches and Converters, so if you want to filter after you convert, this is now easy. There is new UI to manage this and test string processing at every step.

The String Processor also provides the new String Splitter object, which takes a single string like '1,2,3' and lets you split it by something like ',' to create three strings [ '1', '2', '3' ]. So, if your HTML or JSON parsing provides you with a line with multiple things to parse, you should now be able to split, convert, and match it all, even if it is awkward, without voodoo regex hackery.
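As a sketch of the concept, an ordered pipeline over a list of strings might look like the following. The function names are illustrative; the real objects are configured in the UI:

```python
import re

def string_split(texts, sep):
    # String Splitter: '1,2,3' -> ['1', '2', '3']
    return [part for t in texts for part in t.split(sep)]

def string_convert(texts, func):
    # String Converter: transform every string.
    return [func(t) for t in texts]

def string_match(texts, pattern):
    # String Match: keep only strings that fit the pattern.
    return [t for t in texts if re.fullmatch(pattern, t)]

texts = ["1,2,3"]
texts = string_split(texts, ",")                     # ['1', '2', '3']
texts = string_convert(texts, lambda s: "id-" + s)   # ['id-1', 'id-2', 'id-3']
texts = string_match(texts, r"id-[12]")              # filter *after* converting
print(texts)                                         # ['id-1', 'id-2']
```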

I also did some background work on improving how the parsing example/test data is propagated to different panels, and several bugs and missed connections are fixed. I will keep working here, with the ideal being that every test panel shows multiple test data, so if you are parsing fifty URLs, a String Processor working on them will show how all fifty are being converted, rather than the current system of typically just showing the first. After that, I will get to work on supporting proper multiline parsing so we can parse notes.

the rest

Double-clicking a page tab now lets you rename it!

system:time imported has some quick buttons for 'since 1/7/30 days ago'.

I cleaned out the last of the behind-the-scenes mouse shortcut hackery from the media viewer. Everything there now works on the new shortcuts system. There aren't many front-end changes here, but a neat thing is that clicking to focus an unfocused media window no longer activates the shortcut for that click! So, if you have an archive/delete filter, feel free to left-click it to activate it–it won't 'keep and move on' on that first click any more. I will continue to push on shortcuts in normal weekly work, adding mouse support to more things and adding more command types.

You can now enter percent-encoded characters into downloader queries. A couple of sites out there have tags with spaces, like '#simple background', which would normally be broken in hydrus into two tags [ '#simple', 'background' ]. You can now search for this with '#simple%20background' or '%23simple%20background'. Generally, if you are copy/pasting any percent-encoded query, it should now work in hydrus. The only proviso here is %25, which decodes to a literal %. If you paste this, it may or may not work; all bets are off.
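The decoding is standard percent-encoding, as in this quick check. urllib here just demonstrates the equivalence; hydrus does its own handling:

```python
from urllib.parse import unquote

print(unquote("#simple%20background"))    # '#simple background'
print(unquote("%23simple%20background"))  # '#simple background'
print(unquote("100%25"))                  # '100%' -- the ambiguous case
```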

I am rolling out updated Gelbooru and Newgrounds parsers this week. Gelbooru searching should work again, and Newgrounds should now get static image art.

full list

- new tag search options:

- there are several new options for tag autocomplete under the newly renamed _services->tag display and search_:

- for 'manage tags'-style 'write' autocompletes, you can now set which file service and tag service each tag service page's autocomplete starts with (e.g. some users have wanted to say 'start my "my tags" service looking at "all known files" and "ptr"' to get more suggestions for "my tags" typing). the default is 'all known files' and the same tag service

- the old blanket 'show "all known files" in write autocompletes' option under _options->tags_ is removed

- you now can enable the following potentially very slow and expensive searches on a per-tag-domain basis:

- - you can permit namespace-autocompleting searches, so 'ser' also matches 'ser*:*', i.e. 'series:metroid' and every other series tag

- - you can permit 'namespace:*', fetching all tags for a namespace

- - you can permit '*', fetching all tags (╬ಠ益ಠ)

- '*' and 'namespace:*' wildcard searches are now significantly faster on smaller specific tag domains (i.e. not "all known tags")

- short explicit wildcard searches like "s*" now fire off that actual search, regardless of the 'exact match' character threshold

- queries in the form "*:xxx" are now replaced with "xxx" in logic and display

- improved the reliability of various search text definition logic to account for wildcard situations properly when doing quick-enter tag broadcast and so on

- fixed up autocomplete db search code for wildcard namespaces with "*" subtags

- simplified some autocomplete database search code

- .

- string processing:

- the new string processor is now live. all parsing formulae now use a string processor instead of the string match/transformer pair, with existing matches and transformers that do work being integrated into the new processor

- thus, all formulae parsing now supports the new string splitter object, which allows you to split '1,2,3' into ['1','2','3']

- all formulae panels now have the combined 'string processing' button, which launches a new edit panel and will grow in height to list all current processing steps

- the stringmatch panel now hides its controls when they are not relevant to the current match type. also, setting the fixed match type (or, typically, mouse-scrolling past it) no longer resets the min/max/example fields

- the string conversion step edit panel now clearly separates the controls vs the test results

- improved button and summary labelling for string tools across the program

- some differences in labelling between string 'conversion' and 'transformation' are unified to 'conversion' across the program

- moved the test data used in parsing edit panels to its own object, and updated some of the handling to support passing up of multiple example texts

- the separation formula of a subsidiary page parser now loads with current test data

- the string processing panel loads with the current test data, and passes the first example string of the appropriate processing step to its sub-panels. this will be expanded in future to multiple example testing for each panel, and subsequently for note parsing, multiline testing

- added safety code and unit tests to test string processing for hex/base64 bytes outcomes. as a reminder, I expect to eliminate the bytes issue in future and just eat hashes as hex

- cleaned up a variety of string processing code

- misc improvements to string processing controls

- .

- the rest:

- double-clicking a page tab now opens up the rename dialog

- system:time imported now has quick buttons for 'since 1/7/30 days ago'

- all hydrus downloaders now accept percent-encoded characters in the query field, so if you are on a site that has tags with spaces, you can now enter a query like "simple%20background red%20hair" to get the input you want. you can also generally now paste encoded queries from your address bar into hydrus and they should work, with the only proviso being "%25", which decodes to "%", where all bets are off

- duplicates shut down work (both tree rebalancing and dupe searching) now quickly obeys the 'cancel shutdown work' splash button

- fixed a signal cleanup bug that meant some media windows in the preview viewer were hanging on to and multiplying a 'launch media' signal and a shortcut handler, which meant double-clicking on the preview viewer successively on a page would result in multiple media window launches

- fixed an issue opening the manage parsers dialog for users with certain unusual parsers

- fixed the 'hide the preview window' setting for the new page layout method

- updated the default gelbooru gallery page parser to fix gelb gallery parsing

- updated the newgrounds parser to the latest on the github. it should support static image art now

- if automatic vacuum is disabled in the client, forced vacuum is no longer prohibited

- updated cloudscraper for all builds to 1.2.38

- .

- boring code cleanup:

- all final mouse event processing hackery is removed from the media viewers, and the shortcut system is now fully responsible. left click (now with no or any modifier) is still hardcoded to do drag but does not interfere with other mapped left-click actions

- the duplicates filter no longer hardcodes mouse wheel to navigate–whatever is set for the normal browser, it now obeys

- cleaned up some mouse move tracking code

- clicking to focus an unfocused media viewer window will now not trigger the associated click action, so you can now click on archive/delete filters without moving on!

- the red/green on/off buttons on the autocomplete dropdown are updated from the old wx pubsub to Qt signalling

- updated wx hacks to proper Qt event processing for splash window, mouse move events in the media viewer and the animation scanbar

- cleaned up how some event filtering and other processing propagates in the media viewer

- deleted some old unused mouse show/hide media viewer code

- did some more python imports cleanup

- cleaned up some unit test selection code

- refactored the media code to a new directory module

- refactored the media result and media result cache code to their own files

- refactored some qt colour functions from core to gui module

- misc code cleanup

next week

I will be taking my week vacation after next week, and I don't want to accidentally create any big problems for the break, so I will try to mostly do small cleanup work and bug fixes.