sqlite#
Summary#
- SQLiteBackend – Async cache backend for SQLite (requires aiosqlite)
- SQLiteCache – An async interface for caching objects in a SQLite database
- SQLitePickleCache – Same as SQLiteCache, but pickles values before saving
Module Contents#
- class SQLiteBackend(cache_name='aiohttp-cache', use_temp=False, **kwargs)#
Bases:
aiohttp_client_cache.backends.base.CacheBackend
Async cache backend for SQLite (requires aiosqlite)
Reading is fast, saving is a bit slower. It can store a large amount of data with low memory usage. The path to the database file will be <cache_name> (or <cache_name>.sqlite if no file extension is specified).
- __init__(cache_name='aiohttp-cache', use_temp=False, **kwargs)#
- Parameters
cache_name (str) – Database filename
use_temp (bool) – Store database in a temp directory (e.g., /tmp/http_cache.sqlite). Note: if cache_name is an absolute path, this option will be ignored.
expire_after (Union[None, int, float, str, datetime.datetime, datetime.timedelta]) – Time after which a cache entry will be expired; see Cache Expiration for possible formats
urls_expire_after (Dict[str, Union[None, int, float, str, datetime.datetime, datetime.timedelta]]) – Expiration times to apply for different URL patterns
allowed_codes (tuple) – Only cache responses with these status codes
allowed_methods (tuple) – Only cache requests with these HTTP methods
include_headers (bool) – Cache requests with different headers separately
ignored_params (Iterable) – Request parameters to be excluded from the cache key
cache_control (bool) – Use Cache-Control response headers
filter_fn (Callable) – Function that takes an aiohttp.ClientResponse object and returns a boolean indicating whether or not that response should be cached. Will be applied to both new and previously cached responses
secret_key (Union[Iterable, str, bytes]) – Optional secret key used to sign cache items for added security
salt (Union[str, bytes]) – Optional salt used to sign cache items
serializer – Custom serializer that provides loads and dumps methods
timeout (float) – Specifies how long the connection should wait for the lock to go away until raising an exception
detect_types (int) – Can be set to any combination of PARSE_DECLTYPES and PARSE_COLNAMES to turn type detection on for custom types
isolation_level (str) – Transaction isolation level. Use None for autocommit mode, or one of: "DEFERRED", "IMMEDIATE", "EXCLUSIVE"
check_same_thread (bool) – If True, only the creating thread may use the connection. If False, the returned connection may be shared across multiple threads.
factory (Type) – Custom subclass of sqlite3.Connection used to create connections
cached_statements (int) – The number of statements that are cached internally for the connection
uri (bool) – Interpret database path as a URI, to allow specifying additional options
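The file-naming rules above (a .sqlite suffix added when the name has no extension; use_temp relocating the file to the system temp directory unless the path is absolute) can be sketched as a small helper. This is an illustrative reconstruction, not library code, and resolve_db_path is a hypothetical name:

```python
import os
import tempfile


def resolve_db_path(cache_name: str, use_temp: bool = False) -> str:
    """Sketch of the documented file-naming rules for SQLiteBackend."""
    path = cache_name
    # Add a .sqlite extension if the name has none
    if not os.path.splitext(path)[1]:
        path += '.sqlite'
    # use_temp stores the file in a temp directory, unless the
    # original cache_name is already an absolute path
    if use_temp and not os.path.isabs(cache_name):
        path = os.path.join(tempfile.gettempdir(), path)
    return path
```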
- class SQLiteCache(filename, table_name='aiohttp-cache', use_temp=False, **kwargs)#
Bases:
aiohttp_client_cache.backends.base.BaseCache
An async interface for caching objects in a SQLite database.
Example
>>> # Store data in two tables under the 'testdb' database
>>> d1 = SQLiteCache('testdb', 'table1')
>>> d2 = SQLiteCache('testdb', 'table2')
- Parameters
filename (str) – Database filename
table_name (str) – Table name
use_temp (bool) – Store database in a temp directory (e.g., /tmp/http_cache.sqlite). Note: if cache_name is an absolute path, this option will be ignored.
kwargs – Additional keyword arguments for sqlite3.connect()
- async _init_db(db)#
Create table if this is the first connection opened, and set fast save if possible
- Parameters
db (Connection) –
- bulk_commit()#
Context manager used to speed up insertion of a large number of records
Example
>>> cache = SQLiteCache('test')
>>> async with cache.bulk_commit():
...     for i in range(1000):
...         await cache.write(f'key_{i}', str(i))
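The speedup comes from writing all rows inside a single transaction rather than committing after every write. A minimal stdlib sqlite3 sketch of the same idea (not the library's actual implementation; the table layout is illustrative):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE cache (key TEXT PRIMARY KEY, value BLOB)')

# One transaction for all rows: a single COMMIT at the end,
# instead of one commit (and fsync) per INSERT
with con:
    con.executemany(
        'INSERT INTO cache (key, value) VALUES (?, ?)',
        ((f'key_{i}', str(i)) for i in range(1000)),
    )

count, = con.execute('SELECT COUNT(*) FROM cache').fetchone()
```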
- async bulk_delete(keys)#
Delete item(s) from the cache. Does not raise an error if the item is missing.
- Parameters
keys (set) –
- async clear()#
Delete all items from the cache
- async delete(key)#
Delete an item from the cache. Does not raise an error if the item is missing.
- Parameters
key (str) –
- async keys()#
Get all keys stored in the cache
- Return type
- property lock#
- async read(key)#
Read an item from the cache. Returns None if the item is missing.
- async values()#
Get all values stored in the cache
- Return type
- async write(key, item)#
Write an item to the cache
- Parameters
key (str) –
item (Union[CachedResponse, bytes, str, None, memoryview]) –
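The read/write/delete semantics above (read returns None for a missing key; delete does not raise for a missing key) can be sketched against a plain key/value table with the stdlib sqlite3 module. The schema and function names are illustrative assumptions, not the library's actual implementation:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE cache (key TEXT PRIMARY KEY, value BLOB)')

def write(key, item):
    # Upsert a value, mirroring SQLiteCache.write
    with con:
        con.execute('REPLACE INTO cache (key, value) VALUES (?, ?)', (key, item))

def read(key):
    # Returns None if the item is missing, like SQLiteCache.read
    row = con.execute('SELECT value FROM cache WHERE key = ?', (key,)).fetchone()
    return row[0] if row else None

def delete(key):
    # Does not raise an error if the item is missing, like SQLiteCache.delete
    with con:
        con.execute('DELETE FROM cache WHERE key = ?', (key,))

write('k', b'response-bytes')
```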
- class SQLitePickleCache(filename, table_name='aiohttp-cache', use_temp=False, **kwargs)#
Bases:
aiohttp_client_cache.backends.sqlite.SQLiteCache
Same as SQLiteCache, but pickles values before saving
- async read(key)#
Read an item from the cache. Returns None if the item is missing.
- async values()#
Get all values stored in the cache
- Return type
- async write(key, item)#
Write an item to the cache
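The "pickles values before saving" behavior can be sketched with stdlib pickle and sqlite3: serialize the object on write, deserialize on read. This is an illustrative sketch only; the real class delegates storage to SQLiteCache:

```python
import pickle
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE cache (key TEXT PRIMARY KEY, value BLOB)')

def write_pickled(key, item):
    # Pickle an arbitrary Python object into a BLOB column
    with con:
        con.execute('REPLACE INTO cache (key, value) VALUES (?, ?)',
                    (key, pickle.dumps(item)))

def read_pickled(key):
    # Unpickle on read; None if the key is missing
    row = con.execute('SELECT value FROM cache WHERE key = ?', (key,)).fetchone()
    return pickle.loads(row[0]) if row else None

write_pickled('resp', {'status': 200, 'body': b'ok'})
```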