Comments (13)
Alright, SGTM. Convenience constructor as a compromise? i.e. `AsyncCache.noDuration()`
Maybe `AsyncCache.ephemeral()`?
from async.
Woohoo!
from async.
I'm not sure I totally understand your example. You have three calls to `expensiveCall()`, but the comments indicate that "CALL MADE" is only printed twice. Does this mean the synchronizer would drop calls on the floor if a call was already in progress?
from async.
What I mean is that calls made before the pending future completes would just reuse that future's response.
from async.
That seems like it would produce a high risk of race conditions, since the difference between "use the value being computed" and "compute a new value from scratch" could just be a few microtasks—and presumably that distinction matters, or the user would just use `AsyncMemoizer` and always have the computed value available.
Maybe I'd understand better with a real-world example—what's the motivating use-case here?
from async.
Sure, so let's say you have an "expensive operation."*
*Read a large file from disk, make a network request, spawn an isolate and do work.
To your users, you'd like to present a simplified API:
```dart
abstract class ExpensiveWorkService {
  Future<String> compute();
}
```
If a user calls `compute` multiple times, though, you don't want to read/fetch/spawn multiple times. To simplify your implementation, you could use `SynchronizedFuture`:
```dart
class _ExpensiveWorkServiceImpl implements ExpensiveWorkService {
  Future<String> _doCompute() => ...

  @override
  Future<String> compute() => new SynchronizedFuture(_doCompute);
}
```
So if a user does something like:
```dart
// I'm bound to a UI button that is clickable.
onClickCompute() async {
  printResults(await expensiveWorkService.compute());
}
```
You can rest assured that the call will only be made once if they click multiple times while the previous call has not yet completed. In this way, it works like `AsyncMemoizer`. In fact, you could implement it with one by clearing out the memoizer and re-creating it over and over:
```dart
class _ExpensiveWorkServiceImpl implements ExpensiveWorkService {
  AsyncMemoizer _doComputeMemoizer;

  Future<String> _doCompute() => ...

  @override
  Future<String> compute() {
    if (_doComputeMemoizer == null) {
      _doComputeMemoizer = new AsyncMemoizer();
      _doComputeMemoizer.runOnce(_doCompute).then((_) {
        _doComputeMemoizer = null;
      });
    }
    return _doComputeMemoizer.runOnce(_doCompute);
  }
}
```
This pattern appears a lot internally, and usually ends up using lots of completers or other nonsense, so I'd like to have something I can recommend to people without lots of boilerplate.
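For comparison, here's a hedged sketch of the hand-rolled version of the same de-duplication, written against a raw `Future` field rather than any real internal helper (the result string and class names are made up for illustration):

```dart
import 'dart:async';

abstract class ExpensiveWorkService {
  Future<String> compute();
}

// Hand-rolled de-duplication: reuse the in-flight future while the
// work is pending, then clear the field so the next call starts fresh.
class _ExpensiveWorkServiceImpl implements ExpensiveWorkService {
  Future<String> _pending;

  Future<String> _doCompute() async => 'expensive result';

  @override
  Future<String> compute() {
    _pending ??= _doCompute().whenComplete(() {
      _pending = null;
    });
    return _pending;
  }
}
```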
from async.
In that case, why not just use an `AsyncMemoizer`? Why do you want to re-do the expensive work if a request comes in too late?
from async.
So if you are fetching resources from the server or reading from disk, it's pretty likely they will eventually change; if they won't, `AsyncMemoizer` is great. Otherwise, let's say I'm fetching a list of online users... a memoizer isn't so great.
(This is different from a cache, which users might layer on top; it's to prevent making multiple expensive calls or tying up network/file resources when they would all return the same result. Not for transactions, obviously.)
from async.
How do you know they'll all return the same result? That's the fundamental thing that worries me about this API. It seems like it's using the time it takes to do the computation as a heuristic for how long the computation will stay fresh, but there's no guarantee that that heuristic is accurate. If it's wrong, you've got a race condition where a call can return bogus results depending on how fast a previous call completed.
I'd be a lot more comfortable with an API that requires users to be explicit about the freshness of the result. Something like this:
```dart
class AsyncCache<T> {
  /// Creates a cache that invalidates its contents after [duration] has passed.
  AsyncCache(Duration duration);

  /// Returns a cached value or runs [callback] to compute a new one.
  ///
  /// If [callback] has been run recently enough, returns its previous return
  /// value. Otherwise, runs [callback] and returns its new return value.
  Future<T> fetch(Future<T> callback());
}
```
```dart
class Example {
  Future<String> expensiveCall() => _expensiveCallCache.fetch(() async {
    print('CALL MADE');
    return 'Hello';
  });

  final _expensiveCallCache = new AsyncCache<String>(new Duration(seconds: 2));
}
```
from async.
I think that would be a reasonable compromise, thanks for the feedback.
My only suggestion is to allow the duration to be optional (defaulting to `Duration.ZERO`) for users that just don't want two outgoing requests at the same time (i.e. the use case above). We could clearly document that behavior and encourage a meaningful duration, though.
How do you feel about a similar API for streams? (Separate PR)
EDIT: My only other question is how users are expected to test this. It likely would not honor fake async unless we use a timer to "clear" the freshness flag, which is not my favorite idea.
from async.
> My only suggestion is to allow duration to be optional (default to `Duration.ZERO`) for users that just don't want two outgoing requests at the same time (i.e. above use case). We could clearly document that behavior and encourage a meaningful duration, though.
We can allow `Duration.ZERO`, but I don't think it should be the default. I want to make sure all the users of this API think about their freshness guarantees, so at least they're aware of the potential for race conditions.
> How do you feel about a similar API for streams? (Separate PR)
That sounds reasonable. We could probably do this as another method on `AsyncCache`. I think we'd probably want to start counting from the stream's "done" event, though—that way there's more consistency between calling `fetch(() => stream.toList())` and `fetchStream(() => stream)`.
from async.
Alright, SGTM. Convenience constructor as a compromise? i.e. `AsyncCache.noDuration()`*
*Open to alternative names
+1 to another method on `AsyncCache` for streams
from async.
+1
from async.