Package      : python311-charset-normalizer
Version      : 3.1.0
Release      : 150400.9.7.2
Summary      : Python Universal Charset detector
Description  : Python Universal Charset detector.
Distribution : SUSE Linux Enterprise 15
Vendor       : SUSE LLC <https://www.suse.com/>
License      : MIT
Group        : Unspecified
URL          : https://github.com/ousret/charset_normalizer
OS / Arch    : linux / noarch

Install scriptlet (python311_install_alternative):
  update-alternatives --quiet --install /usr/bin/normalizer normalizer /usr/bin/normalizer-3.11 311

Uninstall scriptlet (python311_uninstall_alternative):
  if [ ! -e "/usr/bin/normalizer-3.11" ]; then
      update-alternatives --quiet --remove "normalizer" "/usr/bin/normalizer-3.11"
  fi
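The package ships the charset_normalizer library for Python 3.11 and registers the normalizer command through update-alternatives (see the scriptlets above). A minimal detection sketch follows; the from_bytes()/best() names and the language_threshold keyword (added in 3.0.0, see the changelog below) reflect the upstream project's interface and are an assumption, not something recorded in this RPM.

  # Minimal usage sketch; API names assumed from upstream charset-normalizer.
  from charset_normalizer import from_bytes

  payload = "Привет, мир".encode("cp1251")   # bytes in an unknown legacy encoding

  matches = from_bytes(payload, language_threshold=0.1)  # minimum coherence ratio
  best = matches.best()                                   # top CharsetMatch, or None
  if best is not None:
      print(best.encoding)   # detected codec name
      print(str(best))       # payload decoded with that codec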
Source RPM   : python-charset-normalizer-3.1.0-150400.9.7.2.src.rpm

Requires:
  /bin/sh
  /usr/bin/python3.11
  python(abi) = 3.11
  update-alternatives
  rpmlib(CompressedFileNames) <= 3.0.4-1
  rpmlib(FileDigests) <= 4.6.0-1
  rpmlib(PartialHardlinkSets) <= 4.0.4-1
  rpmlib(PayloadFilesHavePrefix) <= 4.0-1
  rpmlib(PayloadIsXz) <= 5.2-1

Changelog authors (in entry order): dmueller@suse.com (x2), yarunachalam@suse.com (x2),
dmueller@suse.com, code@bnavigator.de, dmueller@suse.com (x6), mcepl@suse.com,
mardnh@gmx.de, pgajdos@suse.com, jayvdb@gmail.com, pgajdos@suse.com,
mcalabkova@suse.com (x2), tchvatal@suse.com (x3), jayvdb@gmail.com

Changelog:

- add sle15_python_module_pythons (jsc#PED-68)

- update to 3.1.0:
  * Argument `should_rename_legacy` for the legacy function `detect`, which now
    disregards any new arguments without errors (PR #262)
  * Removed support for Python 3.6 (PR #260)
  * Optional speedup provided by mypy/c 1.0.1

- Update to 3.0.1
  Fixed
  * Multi-bytes cutter/chunk generator did not always cut correctly (PR #233)
  Changed
  * Speedup provided by mypy/c 0.990 on Python >= 3.7

- Update to 3.0.0
  Added
  * Extend the capability of explain=True when cp_isolation contains at most
    two entries (min one); will log the details of the mess-detector results
  * Support for alternative language frequency sets in
    charset_normalizer.assets.FREQUENCIES
  * Add parameter language_threshold in from_bytes, from_path and from_fp to
    adjust the minimum expected coherence ratio
  * normalizer --version now specifies whether the current version provides the
    extra speedup (i.e. a mypyc-compiled wheel)
  Changed
  * Build with static metadata using the 'build' frontend
  * Make the language detection stricter
  * Optional: module md.py can be compiled using mypyc to provide an extra
    speedup of up to 4x compared to v2.1
  Fixed
  * CLI with option --normalize failed when using a full path for files
  * TooManyAccentuatedPlugin induced false positives in the mess detection when
    too few alpha characters had been fed to it
  * Sphinx warnings when generating the documentation
  Removed
  * Coherence detector no longer returns 'Simple English'; it returns 'English'
    instead
  * Coherence detector no longer returns 'Classical Chinese'; it returns
    'Chinese' instead
  * Breaking: methods first() and best() from CharsetMatch
  * UTF-7 will no longer appear as "detected" without a recognized SIG/mark
    (it is unreliable and conflicts with ASCII)
  * Breaking: class aliases CharsetDetector, CharsetDoctor,
    CharsetNormalizerMatch and CharsetNormalizerMatches
  * Breaking: top-level function normalize
  * Breaking: properties chaos_secondary_pass, coherence_non_latin and
    w_counter from CharsetMatch
  * Support for the backport unicodedata2
- update to 2.1.1:
  * Function `normalize` scheduled for removal in 3.0
  * Removed useless call to decode in fn is_unprintable (#206)

- Clean requirements: we don't need anything

- update to 2.1.0:
  * Output the Unicode table version when running the CLI with `--version`
  * Re-use decoded buffer for single byte character sets
  * Fixing some performance bottlenecks
  * Workaround a potential bug in cpython: Zero Width No-Break Space located in
    Arabic Presentation Forms-B, Unicode 1.1, not acknowledged as space
  * CLI default threshold aligned with the API threshold
  * Support for Python 3.5 (PR #192)
  * Use of backport unicodedata from `unicodedata2` as Python is quickly
    catching up; scheduled for removal in 3.0

- update to 2.0.12:
  * ASCII mis-detection in rare cases (PR #170)
  * Explicit support for Python 3.11 (PR #164)
  * The logging behavior has been completely reviewed, now using only TRACE and
    DEBUG levels

- update to 2.0.10:
  * Fallback match entries might lead to UnicodeDecodeError for large bytes
    sequences
  * Skipping the language-detection (CD) on ASCII

- update to 2.0.9:
  * Moderating the logging impact (since 2.0.8) for specific environments
  * Wrong logging level applied when setting kwarg `explain` to True

- update to 2.0.8:
  * Improvement over Vietnamese detection
  * MD improvement on trailing data and long foreign (non-pure latin) content
  * Efficiency improvements in cd/alphabet_languages
  * Call sum() without an intermediary list, following PEP 289 recommendations
  * Code style as refactored by Sourcery-AI
  * Minor adjustment on the MD around European words
  * Remove and replace SRTs from assets / tests
  * Initialize the library logger with a `NullHandler` by default
  * Setting kwarg `explain` to True will provisionally add a logging handler
  * Fix large (misleading) sequence giving UnicodeDecodeError
  * Avoid using too insignificant chunks
  * Add and expose function `set_logging_handler` to configure a specific
    StreamHandler

- require lower-case name instead of breaking build

- Use lower-case name of prettytable package

- Update to version 2.0.7
  * Addition: Add support for Kazakh (Cyrillic) language detection
  * Improvement: Further improve inferring the language from a given code page
    (single-byte).
  * Removed: Remove redundant logging entry about detected language(s).
  * Improvement: Refactoring for potential performance improvements in loops.
  * Improvement: Various detection improvements (MD+CD).
  * Bugfix: Fix a minor inconsistency between Python 3.5 and other versions
    regarding language detection.
- Update to version 2.0.6
  * Bugfix: Unforeseen regression with the loss of backward compatibility with
    some older minor versions of Python 3.5.x.
  * Bugfix: Fix CLI crash when using --minimal output in certain cases.
  * Improvement: Minor improvement to the detection efficiency (less than 1%).
- Update to version 2.0.5
  * Improvement: The BC-support with v1.x was improved; the old staticmethods
    are restored.
  * Remove: The project no longer raises a warning on tiny content given for
    detection; it is simply logged as a warning instead.
  * Improvement: The Unicode detection is slightly improved, see #93
  * Bugfix: In some rare cases, the chunks extractor could cut in the middle of
    a multi-byte character and mislead the mess detection.
  * Bugfix: Some rare 'space' characters could trip up the
    UnprintablePlugin/mess detection.
  * Improvement: Add syntax sugar __bool__ for the CharsetMatches
    list-container results.
- Update to version 2.0.4
  * Improvement: Adjust the MD to lower the sensitivity, thus improving the
    global detection reliability.
  * Improvement: Allow fallback on a specified encoding if any.
  * Bugfix: The CLI no longer raises an unexpected exception when no encoding
    has been found.
  * Bugfix: Fix accessing the 'alphabets' property when the payload contains
    surrogate characters.
  * Bugfix: The logger could mislead (explain=True) on detected languages and
    the impact of one MBCS match (in #72)
  * Bugfix: Submatch factoring could be wrong in rare edge cases (in #72)
  * Bugfix: Multiple files given to the CLI were ignored when publishing
    results to STDOUT (after the first path) (in #72)
  * Internal: Fix line endings from CRLF to LF for certain files.
- Update to version 2.0.3
  * Improvement: Part of the detection mechanism has been improved to be less
    sensitive, resulting in more accurate detection results, especially for
    ASCII. #63 Fix #62
  * Improvement: According to the community wishes, the detection will fall
    back on ASCII or UTF-8 as a last resort.
- Update to version 2.0.2
  * Bugfix: Empty/too small JSON payload mis-detection fixed.
  * Improvement: Don't inject unicodedata2 into sys.modules
- Update to version 2.0.1
  * Bugfix: Make it work where there isn't a filesystem available, dropping
    assets frequencies.json.
  * Improvement: You may now use aliases in cp_isolation and cp_exclusion
    arguments.
  * Bugfix: Using explain=False permanently disabled the verbose output in the
    current runtime #47
  * Bugfix: One log entry (language target preemptive) was not shown in logs
    when using explain=True #47
  * Bugfix: Fix undesired exception (ValueError) on getitem of instance
    CharsetMatches #52
  * Improvement: Public function normalize default argument values were not
    aligned with from_bytes #53
- Update to version 2.0.0
  * Performance: 4x to 5x faster than the previous 1.4.0 release.
  * Performance: At least 2x faster than Chardet.
  * Performance: The accent has been put on UTF-8 detection, which should
    perform nearly instantaneously.
  * Improvement: The backward compatibility with Chardet has been greatly
    improved. The legacy detect function returns an identical charset name
    whenever possible.
  * Improvement: The detection mechanism has been slightly improved; Turkish
    content is now detected correctly (most of the time).
  * Code: The program has been rewritten to ease readability and
    maintainability. (+ using static typing)
  * Tests: New workflows are now in place to verify the following aspects:
    performance, backward compatibility with Chardet, and detection coverage,
    in addition to the current tests. (+ CodeQL)
  * Dependency: This package no longer requires anything when used with
    Python 3.5 (dropped cached_property)
  * Docs: Performance claims, the guide to contributing, and the issue template
    have been updated.
  * Improvement: Add --version argument to CLI
  * Bugfix: The CLI output used the relative path of the file(s); it should be
    absolute.
  * Deprecation: Methods coherence_non_latin, w_counter, chaos_secondary_pass
    of the class CharsetMatch are now deprecated and scheduled for removal in
    v3.0
  * Improvement: If no language was detected in the content, try to infer it
    using the encoding name/alphabets used.
  * Removal: Removed support for these languages: Catalan, Esperanto, Kazakh,
    Baque, Volapük, Azeri, Galician, Nynorsk, Macedonian, and Serbocroatian.
  * Improvement: utf_7 detection has been reinstated.
  * Removal: The exception hook on UnicodeDecodeError has been removed.
- Update to version 1.4.1
  * Improvement: Logger configuration/usage no longer conflicts with others #44
- Update to version 1.4.0
  * Dependency: Using standard logging instead of the package loguru.
  * Dependency: Dropping the nose test framework in favor of the maintained
    pytest.
  * Dependency: Chose not to use the dragonmapper package to help with
    gibberish Chinese/CJK text.
  * Dependency: Require cached_property only for Python 3.5 due to a
    constraint; dropping it for every other interpreter version.
  * Bugfix: The BOM marker in a CharsetNormalizerMatch instance could be False
    in rare cases even if obviously present, due to the sub-match factoring
    process.
  * Improvement: Return ASCII if the given sequences fit.
  * Performance: Huge improvement for the largest payloads.
  * Change: Stop supporting UTF-7 that does not contain a SIG.
    (Contributions are welcome to improve that point)
  * Feature: CLI now produces JSON-consumable output.
  * Dependency: Dropping PrettyTable, replaced with pure JSON output.
  * Bugfix: Not searching properly for the BOM when trying the utf32/16 parent
    codec.
  * Other: Improving the package final size by compressing frequencies.json.

- version update to 1.3.9
  * Bugfix: In some very rare cases, you may end up getting encode/decode
    errors due to a bad bytes payload #40
  * Bugfix: An empty payload given for detection may cause an exception when
    trying to access the alphabets property. #39
  * Bugfix: The legacy detect function should return UTF-8-SIG if a SIG is
    present in the payload. #38

- Switch to PyPI source
- Add Suggests: python-unicodedata2
- Remove executable bit from charset_normalizer/assets/frequencies.json
- Update to v1.3.6
  * Allow prettytable 2.0
- from v1.3.5
  * Dependencies refactor and add support for py 3.9 and 3.10
  * Fix version parsing

- %python3_only -> %python_alternative

- Update to 1.3.4
  * Improvement/Bugfix: False positive when searching for successive upper,
    lower chars (ProbeChaos)
  * Improvement: Noticeably better detection for jp
  * Bugfix: Passing zero-length bytes to from_bytes
  * Improvement: Expose version in package
  * Bugfix: Division by zero
  * Improvement: Prefers unicode (utf-8) when detected
  * Apparently dropped Python 2 silently

- Update to 1.3.0
  * Backport unicodedata for the v12 impl into python if available
  * Add aliases to CharsetNormalizerMatches class
  * Add feature: preemptive behaviour, looking for an encoding declaration
  * Add method to determine if a specific encoding is multi byte
  * Add has_submatch property on a match
  * Add percent_chaos and percent_coherence
  * Coherence ratio based on mean instead of sum of best results
  * Using loguru for trace/debug <3
  * from_byte method improved

- Update to 1.1.1:
  * from_bytes parameters steps and chunk_size were not adapted to the
    sequence length if the provided values were not fitted to the content
  * Sequences with length below 10 chars were not checked
  * Legacy detect method inspired by chardet was not returning
  * Various more test updates

- Update to 0.3:
  * Improvement on detection
  * Performance loss to expect
  * Added --threshold option to CLI
  * Bugfix on UTF-7 support
  * Legacy detect(byte_str) method
  * BOM support (Unicode mostly)
  * Chaos prober improved on small text
  * Language detection has been reviewed to give better results
  * Bugfix on jp detection, every jp text was considered chaotic

- Fix the tarball to really be the one published by upstream

- Initial spec for v0.1.8
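Several changelog entries above (3.1.0, 3.0.0, 2.0.0, 1.3.9) describe the chardet-compatible legacy detect helper and the newer from_bytes API. A short sketch of how they are typically called; the should_rename_legacy keyword (3.1.0), language_threshold and explain (3.0.0) come from those entries, while the result dictionary keys are assumed from upstream's chardet-style interface and are not verified against this exact build.

  # Sketch of the APIs referenced in the changelog; keys/keywords are assumptions.
  from charset_normalizer import detect, from_bytes

  raw = "Größe: 42".encode("latin-1")

  # Chardet-style legacy helper; per 3.1.0, unknown keyword arguments are ignored.
  legacy = detect(raw, should_rename_legacy=True)
  print(legacy.get("encoding"), legacy.get("confidence"), legacy.get("language"))

  # Newer API: tune the minimum coherence ratio and log the mess-detector details.
  matches = from_bytes(raw, language_threshold=0.1, explain=True)
  print(matches.best())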
Build host   : h04-ch2a
Build time   : 1714483961 (Unix epoch)
rpm version  : 4.14.3
Disturl      : obs://build.suse.de/SUSE:Maintenance:33601/SUSE_SLE-15-SP4_Update/ea68d3c4a58aa5b47d8939dd889e5208-python-charset-normalizer.SUSE_SLE-15-SP4_Update
Payload      : xz compressed (level 5)
Platform     : noarch-suse-linux
Suggests     : python311-unicodedata2

Optimization flags:
  -fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2
  -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables
  -fstack-clash-protection -g

File manifest:
  /etc/alternatives/normalizer (alternatives link)
  /usr/bin/normalizer
  /usr/bin/normalizer-3.11
  /usr/lib/python3.11/site-packages/charset_normalizer-3.1.0-py3.11.egg-info/
    PKG-INFO, SOURCES.txt, dependency_links.txt, entry_points.txt,
    requires.txt, top_level.txt
  /usr/lib/python3.11/site-packages/charset_normalizer/
    __init__.py, api.py, cd.py, constant.py, legacy.py, md.py, models.py,
    py.typed, utils.py, version.py, and __pycache__/ with the corresponding
    .cpython-311.pyc and .cpython-311.opt-1.pyc files
  /usr/lib/python3.11/site-packages/charset_normalizer/assets/
    __init__.py and its __pycache__/
  /usr/lib/python3.11/site-packages/charset_normalizer/cli/
    __init__.py, normalizer.py and their __pycache__/
  /usr/share/doc/packages/python311-charset-normalizer/README.md
  /usr/share/licenses/python311-charset-normalizer/LICENSE

File classes: Python script (ASCII/UTF-8 text executable), directory,
HTML document (UTF-8 Unicode text, with very long lines), ASCII text, empty.