refactor internal type system #9638
@Nullable
String getName();

String getTypeName();
why do we need this one?
Ah, good point, this isn't really used currently. Initially this was the only method I added to `PostAggregator` in this branch, along with `getFinalizedTypeName` to `AggregatorFactory`. However, since the main users of `getTypeName` on `AggregatorFactory` (besides `ComplexMetricSerde`), and the only usages of these new methods, are conversions to `ValueType`, I pivoted near completion to add additional methods that just directly return the needed thing.
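For context, this is roughly the conversion callers had to repeat themselves, and that the new direct type methods avoid (the same pattern is quoted in a javadoc snippet further down this thread):

```java
// Map an AggregatorFactory's type name string onto the ValueType enum,
// falling back to COMPLEX when the name isn't a primitive type name.
ValueType type = Optional
    .ofNullable(GuavaUtils.getEnumIfPresent(ValueType.class, aggregator.getTypeName()))
    .orElse(ValueType.COMPLEX);
```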
I left it in because I sort of have a dream that one day `ComplexMetrics` will be rebranded into `ComplexTypes` to spiritually decouple it from `Aggregators`, but will still provide a mapping of all the `getTypeName` strings for aggs (and now maybe postaggs, and anything else?) that have `ValueType.COMPLEX` for `getType` to a serde as is now, forming a centralized registry for all complex types. Whether this is actually useful or not I am not yet totally certain; maybe just using Jackson like we do now for everything that isn't part of a segment is enough. I think I will need to think a bit more about this, and maybe get deeper into some of the follow-up work before it will become fully apparent.
Should I remove it for now since it isn't really being used other than as a vessel to convert to `ValueType`? I still find it sort of useful to make it super obvious what actual type is being spit out by the `PostAggregator`, since it's significantly more descriptive than `ValueType.COMPLEX`, and easier than examining the output of the `compute` method closely, but I guess javadocs could accomplish the same thing.
At minimum I was still planning to add javadocs to this interface, so could document how it isn't really used, if you're cool with leaving it in for now, or I can remove. I'll try to think about this some more as well as I try to wrap up this PR.
I think it could be added in the PR that starts using it, so as to remove any confusion, unless there is a distinct advantage to adding it now that I missed :)
I have removed `typeName` from `PostAggregators` since it isn't necessary.

Between the changes here, changes in #10277, and some other recent changes I have made, it is almost starting to look to me that perhaps aggs and postaggs should just consider supplying `ColumnCapabilities` of their own instead of just types, but I'll save that for future consideration.
 */
public String getFinalizedTypeName()
{
  return getTypeName();
can we add a blurb recommending that this method be explicitly overridden by complex `AggregatorFactory` impls, since the default here is likely wrong for those?
This might actually be better as an abstract method so that all `AggregatorFactory` implementations must explicitly specify the finalized type, but I was somewhat worried about it being a bit disruptive. I think it is worth discussing if abstract would be better, but yeah, at least will add javadocs.
Actually, this one can also be removed; we just need to have `getFinalizedType()` with a default impl that returns `getType()`.

I didn't suggest making it abstract because it does work correctly for all aggregators dealing with primitives, so the default is OK, especially because the "wrong" behavior doesn't appear to cause any correctness issue for query processing.
I think maybe if we remove both `getFinalizedTypeName` from the `AggregatorFactory` and `getTypeName` from the `PostAggregator`, we should also maybe consider renaming `AggregatorFactory.getTypeName` to be `AggregatorFactory.getComplexTypeName` or something similar, and ensure it is only still called for getting a `ComplexMetricSerde` for the aggregator. I'll look into this when I get back to this branch, but a rename like that might be sort of disruptive to extension writers.
> ensure it is only still called for getting a ComplexMetricSerde for the aggregator.

That sounds OK. With `ValueType getType()` in there, `AggregatorFactory.getTypeName()` is only used to find the right `ComplexMetricSerde` object. You are right that changing the name might be disruptive, so adding the right requirement in javadoc would be a good compromise... or maybe you'll find a better alternative.
I've transitioned `AggregatorFactory.getTypeName` to be used exclusively for complex type serde lookup, and removed all of the other type name functions. All `AggregatorFactory` implementations now must explicitly implement `getType` instead of a default that tries to convert the result of `getTypeName` into a `ValueType` enum, and I removed `getTypeName` from all non-complex agg factories in favor of a default implementation that throws an exception (to ensure this method isn't called inappropriately).
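A minimal sketch of the default described here (javadoc wording and the exact exception message are illustrative; the `@throws` matches a suggestion later in this thread):

```java
/**
 * Name used to look up the ComplexMetricSerde for this aggregator's intermediate type.
 *
 * @throws IllegalStateException if {@link #getType()} is not {@link ValueType#COMPLEX}
 */
@Nullable
public String getTypeName()
{
  // Non-complex factories inherit this default so that accidental calls fail loudly.
  throw new ISE("Type name is only defined when getType() is COMPLEX, not [%s]", getType());
}
```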
Since `getType` is now abstract, this PR is sort of disruptive to agg extensions anyway, so it might also be OK to change `getTypeName` to `getComplexTypeName` since a code change will be required regardless.

Alternatively, I could restore the default implementation of `getType` that uses the output of `getTypeName` to translate into a `ValueType`, if this is too much. I removed it largely so that I would be forced to go through and provide explicit implementations for all existing aggregators, using the resulting errors to hunt them all down.
Fwiw, I'm in favor of requiring aggregators to explicitly implement these functions — especially if 90% of people would be fine with the default! That means it would be really easy for the 10% that need to override it to forget to do so. In general it's good to make it easy to do things right, not easy to forget important stuff.
I agree, `getType` and `getFinalizedType` are now both `abstract`, and I also have renamed `getTypeName` to be `getComplexTypeName` since it didn't seem much more disruptive to rename an existing method on top of having to implement the 2 new methods.
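To make the resulting contract concrete, here is a rough sketch of what a hypothetical complex aggregator factory would now implement (the class name, return types, and the "mySketch" registration key are invented for illustration):

```java
public class MySketchAggregatorFactory extends AggregatorFactory
{
  @Override
  public ValueType getType()
  {
    // intermediate (non-finalized) type, i.e. what combine()/deserialize() deal with
    return ValueType.COMPLEX;
  }

  @Override
  public ValueType getFinalizedType()
  {
    // type produced by finalizeComputation(), e.g. a numeric estimate
    return ValueType.DOUBLE;
  }

  @Nullable
  @Override
  public String getComplexTypeName()
  {
    // only meaningful because getType() is COMPLEX; used to look up the ComplexMetricSerde
    return "mySketch";
  }

  // remaining AggregatorFactory methods omitted for brevity
}
```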
String t = aggFactory.getTypeName();
final ValueType type = aggFactory.getType();
final String typeName = aggFactory.getTypeName();
nit: these can be moved to the else clause. Also, `aggFactory.getTypeName()` can just be inlined in the complex clause.
Ah, good point, will move 👍
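If I'm reading the suggestion right, the intended shape is roughly the following (a sketch only; the surrounding method isn't shown in the diff, so the branch condition and the `handle*` helpers are hypothetical):

```java
final ValueType type = aggFactory.getType();
if (type == ValueType.COMPLEX) {
  // complex branch: inline the type name lookup, since it is only needed here
  handleComplexMetric(aggFactory.getTypeName());
} else {
  // anything only the non-complex branch needs is computed here
  handlePrimitiveMetric(type);
}
```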
final AggregatorFactory agg = getAggregator(metric, aggs, i);
final ValueType type = agg.getType();
final String typeName = agg.getTypeName();
final byte metricNullability = in.readByte();
nit: same
that looks ok at first glance but leads to a question. as an aggregation extension writer, if my aggregator intermediate data type is
This looks great in concept. We really need this type information to get better performance for expressions!
Limited review: I only looked at ValueType, AggregatorFactory, PostAggregator, ExpressionPostAggregator, and RowSignature.
@Nullable
String getName();

ValueType getType();
Please add javadocs.
{
  // this is wrong, replace with Expr output type based on the input types once it is available
  // but treat as string for now
  return ValueType.STRING;
Why STRING? (Instead of COMPLEX or whatever.)
What bad things could potentially happen if the type returned by this method is wrong? (The javadoc should explain this, ideally.)
Since this is a post-aggregator, probably not a lot will go wrong in most cases. We will potentially predict an incorrect query result signature as having strings instead of whatever the actual expression output type is. I guess it should technically probably return `null` for now since it is truly unknown, so it will appear in the signature that way. I'm going to consider making this change.

`COMPLEX` could be OK to use here in some circumstances, but would not be correct, I think, for example if the row signature for a subquery is computed and then the column from the subquery post-agg is used as an input to another expression (since expression selectors do not currently handle complex inputs). There might be other cases too.
I've changed this to be `null` for now, rather than incorrectly calling it string, so it will continue to be treated as unknown, but did go ahead and wire things up for a follow-up PR to be able to add output type inference to expressions during post-agg decoration.
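A minimal sketch of the interim behavior described here (not the exact code in the PR):

```java
@Nullable
@Override
public ValueType getType()
{
  // The expression's output type is not known until expression type inference lands
  // in a follow-up; returning null keeps this column marked as unknown in the
  // RowSignature instead of incorrectly claiming STRING.
  return null;
}
```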
Also, I did not explicitly change the output contract of `PostAggregator.getType` to `@Nullable`, because I don't think that long term it should be allowed to be that, but I could change it to that for now while it is true, and remove it in the future once the expression post-agg knows its output type.

Alternatively, I could have added an explicit output type JSON property, similar to expression virtual column declaration, but I think we want to deprecate those once output type inference is in place, so I decided not to do this at this time, but am open to including it if anyone thinks it is necessary.
I think it'd be good to make it `@Nullable` for now, with a note that we'd like it to stop being nullable in the future when X/Y/Z things are done. We want to make sure that in the meantime, we don't forget to check it.
Ah, I did end up adding it as `@Nullable`; I forgot to update this comment. It is missing the note about it maybe not being like that in the future, can add.
 * Optional.ofNullable(GuavaUtils.getEnumIfPresent(ValueType.class, aggregator.getTypeName()))
 *     .orElse(ValueType.COMPLEX);
 * </pre>
 * If you need a ValueType enum corresponding to this aggregator, use {@link #getTypeName} instead.
Add a @throws IllegalStateException if getType() != ValueType.COMPLEX
added
 */
public ValueType getFinalizedType()
{
  return getType();
IMO, it would be better for this to be abstract, since this makes it too easy for people to forget to override it.
changed to abstract
COMPLEX,
DOUBLE_ARRAY,
LONG_ARRAY,
STRING_ARRAY;
Please add javadocs documenting these types. Especially important things to document include:

- When do we use `STRING` and when do we use `STRING_ARRAY`? (Multivalue strings are type STRING, even though they behave kind of like arrays. How do we explain this coherently?)
- What does COMPLEX mean and how can you get more information about something that is COMPLEX?
I am still working on javadocs here and other modified places to provide guidance on how types are and should be used, and how to deal with array types until they are fully propagated through the codebase.
With regards to `COMPLEX`, I'm starting to think that in the future we might want to modify `RowSignature` to potentially have the ability to store the complex type name, similar to what is available for aggregator factory, and what #10277 adds to the `ColumnCapabilities` of complex type columns it creates. We might actually want a lighter weight `ColumnSignature` type that is a subset of `ColumnCapabilities` that `RowSignature` can be composed of, rather than the strict `ValueType` mapping it currently has, but I'm not certain if it is necessary yet.
will it be too complicated for multi-value strings to have their own type, different from single-value strings?
Not sure yet, I'm still thinking on the best way to live in a world with the current `STRING` that can have single or multiple values, communicated through `ColumnCapabilities.hasMultipleValues`, and `STRING_ARRAY`, which is explicitly always multi-valued but can currently only be produced via expressions.

I think it depends on how we want to encode this information for `RowSignature` to make available to the broker and higher layers of the query engines. There might be room for a new `ValueType` to use explicitly for `STRING` columns which are multi-valued, if we want to keep `RowSignature` light (and effectively coerce it back to `STRING` when translating the signature back into `ColumnCapabilities` for things like the row selectors the broker uses). However, given the changes in #10219, which also add a desire to be able to encode in the `RowSignature` which columns can have null values, making a richer `RowSignature` is probably the right way forward, and that could make a separate `ValueType` for multi-value strings unnecessary.

I don't think we want to treat the multi-value strings as `STRING_ARRAY`, because I think we probably want to reserve that for if/when we add true array typed columns, so that engines like group by and top-n can process them separately from the funny way we handle existing multi-value strings (which aggregate on individual values, basically equivalent to `UNNEST` in SQL) and instead do it in a way that is compatible with SQL array types.
I've added a bunch of javadocs trying to advise on what each ValueType represents and how it may be used in the query engines, I hope it is sufficient to provide guidance for complex type/agg/post-agg implementors to think about how the thing is going to be used, but I'm sure that I'm probably leaving out a lot of details, and it might be somewhat brittle to keep in sync with reality as the engines evolve.
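As a rough illustration of the kind of per-type guidance being discussed (this is not the actual javadoc text added in the PR, just a sketch of the distinctions covered in this thread):

```java
public enum ValueType
{
  /**
   * UTF-8 strings. Multi-value string columns are still typed STRING; whether a column
   * can carry multiple values is signalled separately via
   * ColumnCapabilities.hasMultipleValues(), not via STRING_ARRAY.
   */
  STRING,
  FLOAT,
  DOUBLE,
  LONG,
  /**
   * Anything that is not a primitive or an array. The concrete type is identified by a
   * complex type name (e.g. AggregatorFactory.getComplexTypeName()), which maps to a
   * ComplexMetricSerde registered with ComplexMetrics.
   */
  COMPLEX,
  /** Array types, currently only produced by expressions and a few post-aggregators. */
  DOUBLE_ARRAY,
  LONG_ARRAY,
  STRING_ARRAY
}
```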
  return this.equals(ValueType.STRING) || isNumeric(this);
}

public boolean isComplex()
With this logic, `DOUBLE_ARRAY.isComplex()` is true. This seems weird. I would think only COMPLEX is complex. Please add javadocs to the method and maybe consider renaming it.
I have removed this method in favor of using `isPrimitive` directly.
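Caller-side, this means branching on "not primitive" rather than a separate complex check; a small sketch of the distinction (the `handle*` helpers are hypothetical, not call sites from the PR):

```java
// isPrimitive() is true only for STRING and the numeric types, so both COMPLEX
// and the array types report false here.
if (!valueType.isPrimitive()) {
  // complex or array typed column: no generic string/numeric handling applies
  handleNonPrimitiveColumn(valueType);
} else {
  handlePrimitiveColumn(valueType);
}
```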
if (type.equals(aggregator.getFinalizedType())) {
  add(aggregator.getName(), type);
} else {
  // Use null if the type depends on whether or not the aggregator is finalized, since
Instead of using null here, we should have different signatures for finalized and nonfinalized rows. Perhaps the row signature builder should have `addAggregators(List<AggregatorFactory> aggregators, boolean finalize)` and the things that call it should be given knowledge about whether they're going to be finalizing or not.
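Something like the following is presumably the intent (purely a sketch of the suggested signature, not code from this PR):

```java
public Builder addAggregators(final List<AggregatorFactory> aggregators, final boolean finalize)
{
  for (final AggregatorFactory aggregator : aggregators) {
    // the caller already knows whether it will finalize, so the signature can be exact
    add(aggregator.getName(), finalize ? aggregator.getFinalizedType() : aggregator.getType());
  }
  return this;
}
```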
I agree that this is a nice thing to do in the future so that we can have complete/accurate `RowSignature` everywhere, as I alluded to in the PR description, but it is a bit much for this PR and not especially needed at this point.
This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If you think that's incorrect or this pull request should instead be reviewed, please simply write any comment. Even if closed, you can still revive the PR at any time or discuss it on the [email protected] list. Thank you for your contributions.

go away stalebot, will get back to this soon

This issue is no longer marked as stale.
 * {@link #deserialize} and the type accepted by {@link #combine}. However, it is *not* necessarily the same type
 * returned by {@link #finalizeComputation}.
 */
public abstract ValueType getType();
Thinking out loud: how about having a class such as `AggregatorOutputType` which can contain the `combine` and `finalize` types? Aggregators would just override one method, which can be `AggregatorOutputType getType()`.
Over time, more information can be added to this class rather than adding methods to the interface; one example is the complex type name.
We already have `ColumnCapabilities`, which is a richer but a bit more segment-focused version of a thing that provides details about a given column, and #10277 expands it to also include the complex type name. So I think if we are going to use a consolidated type, we should consider using that, since it is used pretty heavily throughout the engines to help determine the correct way to process columns (or a new interface that provides a subset of its functionality, which `ColumnCapabilities` can extend, as mentioned in another thread). I need to think a bit harder about what would be most useful for `RowSignature` to have on hand, which currently serves a similar role at higher levels of the engine to what `ColumnCapabilities` provides at lower levels, before making a change like this though.

I'm not sure if it is useful to combine the intermediary and finalized types in one place, since callers should typically only need one or the other, depending on the caller, as mentioned in the PR description and this thread #9638 (comment). We currently haven't quite added enough information through for most callers to be able to provide this context quite yet though, so it also needs a bit further thought.
LGTM 👍
In addition to the line comments, could you discuss a bit about how you approached testing this change to make sure nothing funky is going to happen? What risks did you see and how do the existing/new tests speak to them?
this.type = typeInfo;
ValueType valueType = factory.getType();

if (valueType.isPrimitive()) {
Previously this only did createSimpleNumericColumnCapabilities if the type was numeric, now it includes strings too. Is that intentional?
Hmm, that is a good point, this should probably check `isNumeric`. However, should the `else` be an `else if (ValueType.COMPLEX.equals(valueType))` to make sure it actually is complex? I'm not sure the previous behavior of treating `STRING` as a `COMPLEX` was quite correct; it doesn't seem like it. Instead it should maybe handle `STRING` (and arrays) separately, though I'm not quite sure which capabilities it should have or if it should be an error case.
How about throwing an error that this kind of AggregatorFactory isn't supported at ingestion time?
I don't think there's a use case for it with any of the builtin aggregators. So we can worry about it when one comes up.
Changed to handle only numeric and complex types, and left a comment about what needs to be done if the new `else` case is encountered.
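Roughly the shape of the handling described here (a sketch; the surrounding method isn't shown in the diff, so the helper names and signatures other than what is quoted in this thread are illustrative):

```java
final ValueType valueType = factory.getType();
if (valueType.isNumeric()) {
  capabilities = ColumnCapabilitiesImpl.createSimpleNumericColumnCapabilities(valueType);
} else if (ValueType.COMPLEX.equals(valueType)) {
  // complex aggregator column: hypothetical helper that builds COMPLEX capabilities
  capabilities = makeComplexColumnCapabilities(factory);
} else {
  // No built-in aggregator currently produces STRING or ARRAY typed columns at ingestion
  // time; if one ever does, decide on its capabilities here instead of silently treating
  // it like a complex column.
  throw new ISE("Unsupported aggregator type[%s] for column[%s]", valueType, factory.getName());
}
```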
if (aggregators != null && aggregators.containsKey(fieldName)) {
  type = aggregators.get(fieldName).getType();
} else {
  type = ValueTypes.defaultAggregationType();
Why not keep it null here? What are the pros and cons of null vs `ValueTypes.defaultAggregationType()`?
Ah, I think it should be `null`; this is a leftover from before I made `PostAggregator.getType` be annotated `@Nullable`.
  theFinalizer = aggregators.get(fieldName)::finalizeComputation;
} else {
  theFinalizer = Function.identity();
  finalizedType = ValueTypes.defaultAggregationType();
Similar comment to FieldAccessPostAggregator: why not keep it null here? What are the pros and cons of null vs `ValueTypes.defaultAggregationType()`?
Despite touching a lot of files, I feel like this PR currently has a relatively light impact; besides the refactoring, the primary differences are that array types produced by expressions and some post aggregators are now acknowledged by

Are you more curious/worried/hesitant/etc about the inclusion of array types, or the changes of including more type information in the

If it is the former, it feels safe to me after my investigation and testing, and I haven't yet found issue with the array types existing. Grouping will effectively be treated the same as a complex type, and so invalid. ValueType handling is most often part of either a switch or if/else chain where the caller handles explicit types, and explodes for things it does not understand. In fact, the only way these array types can appear right now are from the few post aggregators that can produce them, since expression selectors and

If it is the latter, it would be trivial to add an escape hatch to at least the row signature changes, in the form of config

The primary tests added are around confirming that row signatures computed for a query match the updated expectations, since that seems like the most noticeable change. On top of this, I've done some examining of how
I was asking what you thought 🙂
Ah, I guess where I was going with my meandering comment is that I'm not really worried about stuff at this point, but wasn't sure if there is anything I'm not thinking of. The 2 potential areas of risk are for places that are handling

I think the array types are a good inclusion to
Hmm, in your patch would anything currently return one of the new ValueTypes? If so, we could check that we have some tests that use a subquery that now reports one of the new ValueTypes in a signature, and make sure that the outer query still runs properly.
A handful of post-aggregators are currently the only thing that can return double array types. Adding synthetic tests for what happens in various scenarios when these post-aggregators appear as part of a subquery (the subquery row signature/inline datasource contains an array type) uncovered a strange difference in behavior. Prior to this patch, grouping on these array columns would result in them being treated as

I looked over the

It's worth noting that I think this is currently only possible with native JSON subqueries. All of the post-aggregator implementations which can produce arrays are typed as
Thanks for adding the new tests. Taking a look at them, I agree with your assessment that we should be changing the behavior at some point, but it's OK to keep it as-is for now. It's good that we have the tests, so now we'll know if behavior is altered due to future changes.
* better type tracking: add typed postaggs, finalized types for agg factories
* more javadoc
* adjustments
* transition to getTypeName to be used exclusively for complex types
* remove unused fn
* adjust
* more better
* rename getTypeName to getComplexTypeName
* setup expression post agg for type inference existing
* more javadocs
* fixup
* oops
* more test
* more test
* more comments/javadoc
* nulls
* explicitly handle only numeric and complex aggregators for incremental index
* checkstyle
* more tests
* adjust
* more tests to showcase difference in behavior
* timeseries longsum array
* introduce interning of internal files names in SmooshedFileMapper (#10295) * Redis cache extension enhancement (#10240) * support redis cluster * add 'password', 'database' properties * test cases passed * update doc * some improvements * fix CI * add more test cases to improve branch coverage * fix dependency check for test * resolve review comments * Optimize large InDimFilters (#10312) * Optimize large InDimFilters For large InDimFilters, in default mode, the filter does a linear check of the set to see if it contains either an empty or null. If it does, the empties are converted to nulls by passing through the entire list again. Instead of this, in default mode, we attempt to remove an empty string from the values that are passed to the InDimFilter. If an empty string was removed, we add null to the set * code review * Revert "code review" This reverts commit 61fe33ebf762764bb89108ddd966937f3313be71. * code review - less brittle * ExpressionFilter: Use index for expressions of single multi-value columns. (#10320) Previously, this was disallowed, because expressions treated multi-values as nulls. But now, if there's a single multi-value column that can be mapped over, it's okay to use the index. Expression selectors already do this. * Clarify SQL behavior for multi-value dimensions. (#10276) There are some known inconsistencies between SQL and native that users should be aware of. * Remove NUMERIC_HASHING_THRESHOLD (#10313) * Make NUMERIC_HASHING_THRESHOLD configurable Change the default numeric hashing threshold to 1 and make it configurable. Benchmarks attached to this PR show that binary searches are not more faster than doing a set contains check. The attached flamegraph shows the amount of time a query spent in the binary search. Given the benchmarks, we can expect to see roughly a 2x speed up in this part of the query which works out to ~ a 10% faster query in this instance. * Remove NUMERIC_HASHING_THRESHOLD * Remove stale docs * refactor internal type system (#9638) * better type tracking: add typed postaggs, finalized types for agg factories * more javadoc * adjustments * transition to getTypeName to be used exclusively for complex types * remove unused fn * adjust * more better * rename getTypeName to getComplexTypeName * setup expression post agg for type inference existing * more javadocs * fixup * oops * more test * more test * more comments/javadoc * nulls * explicitly handle only numeric and complex aggregators for incremental index * checkstyle * more tests * adjust * more tests to showcase difference in behavior * timeseries longsum array * Handle internal kinesis sequence numbers when reporting lag (#10315) * Handle internal kinesis sequence numbers when reporting lag * add unit test * Adding supported compression formats for native batch ingestion (#10306) * Adding supported compression formats for native batch ingestion * Update docs/ingestion/native-batch.md Co-authored-by: sthetland <[email protected]> * fix spellcheck Co-authored-by: Suneet Saldanha <[email protected]> Co-authored-by: sthetland <[email protected]> * Add support for all partitioing schemes for auto compaction (#10307) * Add support for all partitioing schemes for auto compaction * annotate last compaction state for multi phase parallel indexing * fix build and tests * test * better home * Fix handling of 'join' on top of 'union' datasources. (#10318) * Fix handling of 'join' on top of 'union' datasources. 
The problem is that unions are typically rewritten into a series of individual queries on the underlying tables, but this isn't done when the union is wrapped in a join. The main changes are in UnionQueryRunner: 1) Replace an instanceof UnionQueryRunner check with DataSourceAnalysis. 2) Replace a "query.withDataSource" call with a new function, "Queries.withBaseDataSource". Together, these enable UnionQueryRunner to "see through" a join. * Tests. * Adjust heap sizes for integration tests. * Different approach, more tests. * Tweak. * Styling. * Move tools for indexing to TaskToolbox instead of injecting them in constructor (#10308) * Move tools for indexing to TaskToolbox instead of injecting them in constructor * oops, other changes * fix test * unnecessary new file * fix test * fix build * SQL support for union datasources. (#10324) * SQL support for union datasources. Exposed via the "UNION ALL" operator. This means that there are now two different implementations of UNION ALL: one at the top level of a query that works by concatenating subquery results, and one at the table level that works by creating a UnionDataSource. The SQL documentation is updated to discuss these two use cases and how they behave. Future work could unify these by building support for a native datasource that represents the union of multiple subqueries. (Today, UnionDataSource can only represent the union of tables, not subqueries.) * Fixes. * Error message for sanity check. * Additional test fixes. * Add some error messages. * Remove implied profanity from error messages. (#10270) i.e. WTF, WTH. * split up Expr.java (#10333) * Web console: add tile for Azure Event Hubs (via Kafka API) (#10317) * Add Azure Event Hubs * better note * update icon * add link to Docker quickstart in github README (#10299) Per suggestion in comment https://github.com/apache/druid/pull/9262#issuecomment-675732237, I think this should eventually result in the copy mirrored on dockerhub to also be updated, if I understand how things work. Only the github `README.md` has been updated, not the `README.template` used for src and bin packages because presumably if you are reading from either of those you are just going to run locally and so the local quickstart is appropriate. * optimize announceHistoricalSegments (#9935) * optimize announceHistoricalSegment * optimize announceHistoricalSegment * revert offline SegmentTransactionalInsertAction uses a separate lock * optimize segmentExistsBatch: Avoid too many elements in the in condition * add unit test && Modified according to cr Co-authored-by: xiangqiao <[email protected]> * Fix VARIANCE aggregator comparator (#10340) * Fix VARIANCE aggregator comparator The comparator for the variance aggregator used to compare values using the count. This is now fixed to compare values using the variance. If the variance is equal, the count and sum are used as tie breakers. * fix tests + sql compatible mode * code review * more tests * fix last test * Add missing comma between JSON members in data-formats.md (#10343) * StringFirstAggregatorFactory: Fix incorrect "combine" method. (#10351) * StringFirstAggregatorFactory: Fix incorrect "combine" method. There was a test, but it was wrong. * Fix superclass. 
* fix NPE in StringGroupByColumnSelectorStrategy#bufferComparator (#10325) * fix NPE in StringGroupByColumnSelectorStrategy#bufferComparator * Add tests * javadocs * Ignore CVEs from htrace and ambari transitive deps (#10353) * Ignore CVEs from htrace and ambari transitive deps htrace CVEs are suppressed for now as addressing them requires updating the hadoop version. ambari CVEs are suppressed for now since ambari is updated to the latest version and is no longer actively maintained. * Fix compilation issue from ambari upgrade * Add missing test coverage * Fix result-level caching (#10341) * create baseSequence early * unit test * add comment and a new test * Fix stringFirst/stringLast rollup during ingestion (#10332) * Add IndexMergerRollupTest This changelist adds a test to merge indexes with StringFirst/StringLast aggregator. * Fix StringFirstAggregateCombiner/StringLastAggregateCombiner The segment-level type for stringFirst/stringLast is SerializablePairLongString, not String. This changelist fixes it. * Fix EarliestLatestAnySqlAggregator to handle COMPLEX type This changelist allows EarliestLatestAnySqlAggregator to accept COMPLEX type as an operand. For its return type, we set it to VARCHAR, since COMPLEX column is only generated by stringFirst/stringLast during ingestion rollup. * Return value with smaller timestamp in StringFirstAggregatorFactory.combine function * Add integration tests for stringFirst/stringLast during ingestion * Use one EarliestLatestReturnTypeInference instance Co-authored-by: Joy Kent <[email protected]> * Add vectorization for druid-histogram extension (#10304) * First draft * Remove redundant code from FixedBucketsHistogramAggregator classes * Add test cases for new classes * Fix tests in sql compatible mode * Typo fix * Fix comment * Add spelling * Vectorize only for supported types * Rename internal aggregator files * Fix tests * Fix doc for name of dynamic config to pause coordination (#10345) * Unit tests fail due to missing extend InitializedNullHandlingTest (#10382) * CsvInputFormatTest should extend InitializedNullHandlingTest * FirehoseFactoryToInputSourceAdaptorTest should extends InitializedNullHandlingTest * More structured way to handle parse exceptions (#10336) * More structured way to handle parse exceptions * checkstyle; add more tests * forbidden api; test * address comment; new test * address review comments * javadoc for parseException; remove redundant parseException in streaming ingestion * fix tests * unnecessary catch * unused imports * appenderator test * unused import * Fix typo (#10385) * Web console: improve query manager (convert to React hook) (#10360) * Better query running * update licenses * update tests * updated tests v2 * fade in cancel * add exemplary tests * update mkcomp * fix inconsistent state update * remove lastParsedQuery * work if not a valid literal * remove unused params * fix licenses * better state update * get error message * isEmpty tidy * add tests around error message highlighting * pull live query selector into a component * add LiveQueryModeSelector tests * update snapshots * TransformSpecTest should extends InitializedNullHandlingTest (#10392) * Support SearchQueryDimFilter in sql via new methods (#10350) * Support SearchQueryDimFilter in sql via new methods * Contains is a reserved word * revert unnecessary change * Fix toDruidExpression method * rename methods * java docs * Add native functions * revert change in dockerfile * remove changes from dockerfile * More tests * travis fix * Handle null 
values better * benchmark for indexed table experiments (#10327) * benchmark for indexed table experiments * fix style * teardown outside of measurement * add computed Expr output types (#10370) * push down ValueType to ExprType conversion, tidy up * determine expr output type for given input types * revert unintended name change * add nullable * tidy up * fixup * more better * fix signatures * naming things is hard * fix inspection * javadoc * make default implementation of Expr.getOutputType that returns null * rename method * more test * add output for contains expr macro, split operation and function auto conversion * allow vectorized query engines to utilize vectorized virtual columns (#10388) * allow vectorized query engines to utilize vectorized virtual column implementations * javadoc, refactor, checkstyle * intellij inspection and more javadoc * better * review stuffs * fix incorrect refactor, thanks tests * minor adjustments * Vectorized ANY aggregators (#10338) * WIP vectorized ANY aggregators * tests * fix aggs * cleanup * code review + tests * docs * use NilVectorSelector when needed * fix spellcheck * dont instantiate vectors * cleanup * Skip coverage check for tag builds (#10397) The code coverage diff calculation assumes the TRAVIS_BRANCH environment variable is the name of a branch; however, for tag builds it is the name of the tag so the diff calculation fails. Since builds triggered by tags do not have a code diff, the coverage check should be skipped to avoid the error and to save some CI resources. * Web console: Improve number alignment in tables (#10389) * Improve tables * removed unused state interfaces * better copy * one more functional component * updated e2e tests * extract braced text correctly * Integration tests and docs for auto compaction with different partitioning (#10354) * Working * add test * doc * fix test * split other integration test * exclude other-index from other tests * doc anchor fix * adjust task slots and number of merge tasks * spell check * reduce maxNumConcurrentSubTasks to 1 * maxNumConcurrentSubtasks for range partitinoing * reduce memory for historical * change group name * Support combining inputsource for parallel ingestion (#10387) * Add combining inputsource * Fix documentation Co-authored-by: Atul Mohan <[email protected]> * Disable sending server version in response headers (#9832) * Toggle sending of server version * Remove config Co-authored-by: Atul Mohan <[email protected]> * recreate the balancer executor only when needed (#10280) * recreate the balancer executor only when needed * fix UT error * shutdown the balancer executor in stopBeingLeader and stop * remove commented code * remove comments * Vectorized variance aggregators (#10390) * wip vectorize * close but not quite * faster * unit tests * fix complex types for variance * Adding more dimensions to the audit log entry (#10373) * Adding more dimensions to the audit log entry * Making adding payload in audit metric optional * Changing the name of the parameter to includePayloadAsDimensionInMetric. 
Adding a unit test * Fixing the intellij code introspection issues * Adding the missing sqlQueryContext api (#10368) * Adding the missing sqlQueryContext api * Adding a serialization test for DefaultRequestLogEvent * Fixing the unit test failure * Remove JODA Time Dependency from Avro Extensions (#10010) * Avoid large limits causing int overflow in buffer size checks (#10356) * Avoid large limits causing int overflow in buffer size checks * fix lgtm overflow warning Co-authored-by: Dylan <[email protected]> * Upgrade ORC to 1.5.10 version (#10291) * Auto-compaction snapshot status API (#10371) * Auto-compaction snapshot API * Auto-compaction snapshot API * Auto-compaction snapshot API * Auto-compaction snapshot API * Auto-compaction snapshot API * Auto-compaction snapshot API * Auto-compaction snapshot API * fix when not all compacted segments are iterated * add unit tests * add unit tests * add unit tests * add unit tests * add unit tests * add unit tests * add some tests to make code cov happy * address comments * address comments * address comments * address comments * make code coverage happy * address comments * address comments * address comments * address comments * Document change in results of groupBy queries with subtotalsSpec (#10405) * subtotalsSpec results with null values Document the format change in results of a groupBy query with a subtotalsSpec. This update applies to 0.18 and later. * Review catches * Web console: fix lookup edit dialog, allow column renaming (#10406) * column rename * update licenses file * remove empty file * update license file * move comment * Issue fix for CSV loading with header and skip header not parsing well. (#10398) * Web console: clean up styling imports (#10410) * fix styling for importing * fix quotes * Web console: add sort to tiers list (#10416) * add sort to tiers list * update snapshot * Include Sequence-building time in CPU time metric. (#10377) * Include Sequence-building time in CPU time metric. Meaningful work can be done while building Sequences, and we should count this work. On the Broker, this includes subquery processing work done by the mergeResults call of the GroupByQueryQueryToolChest. * Add test. * Web console: compaction dialog update (#10417) * compaction dialog update * fix test snapshot * Update web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx Co-authored-by: Chi Cao Minh <[email protected]> * Update web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx Co-authored-by: Chi Cao Minh <[email protected]> * feedback changes Co-authored-by: Chi Cao Minh <[email protected]> * vectorized expressions and expression virtual columns (#10401) * vectorized expression virtual columns * cleanup * fixes * preserve float if explicitly specified * oops * null handling fixes, more tests * what is an expression planner? 
* better names * remove unused method, add pi * move vector processor builders into static methods * reduce boilerplate * oops * more naming adjustments * changes * nullable * missing hex * more * Add last_compaction_state to sys.segments table (#10413) * Add is_compacted to sys.segments table * change is_compacted to last_compaction_state * fix tests * fix tests * address comments * add light weight version of /druid/coordinator/v1/lookups/nodeStatus (#10422) * add light weight version /druid/coordinator/v1/lookups/nodeStatus * review stuffs * better query view initial state (#10431) * Automatically determine numShards for parallel ingestion hash partitioning (#10419) * Automatically determine numShards for parallel ingestion hash partitioning * Fix inspection, tests, coverage * Docs and some PR comments * Adjust locking * Use HllSketch instead of HyperLogLogCollector * Fix tests * Address some PR comments * Fix granularity bug * Small doc fix * Store hash partition function in dataSegment and allow segment pruning only when hash partition function is provided (#10288) * Store hash partition function in dataSegment and allow segment pruning only when hash partition function is provided * query context * fix tests; add more test * javadoc * docs and more tests * remove default and hadoop tests * consistent name and fix javadoc * spelling and field name * default function for partitionsSpec * other comments * address comments * fix tests and spelling * test * doc * Web console autocompaction E2E test (#10425) Add an E2E test for the common case web console workflow of setting up autocompaction that changes the partitions from dynamic to hashed. Also fix an issue with the async test setup to properly wait for the web console to be ready. * vectorize remaining math expressions (#10429) * vectorize remaining math expressions * fixes * remove cannotVectorize() where no longer true * disable vectorized groupby for numeric columns with nulls * fixes * more timeout handling in JsonParserIterator (#10426) * add docs for kinesis lag metrics (#10435) * fix typo in docker/druid.sh (#10433) DRUID_NEWSIZE should not set MaxNewSize. * Add intent for web console IntervalInput (#10447) When using the web console to load data by reindexing from Druid, the `Datasource` and `Interval` inputs are required during the `Connect` step. Unlike the `Datasource` input, the `Interval` input did not have a blue outline to indicate that it was required as the `IntervalInput` component did not support an `intent` property. * Compaction config UI optional numShards (#10446) * Compaction config UI optional numShards Specifying `numShards` for hashed partitions is no longer required after https://github.com/apache/druid/pull/10419. Update the UI to make `numShards` an optional field for hash partitions. * Update snapshot * add vectorizeVirtualColumns query context parameter (#10432) * add vectorizeVirtualColumns query context parameter * oops * spelling * default to false, more docs * fix test * fix spelling * Remove Expr.visit. (#10437) * Remove Expr.visit. It isn't used and doesn't have tests. * Remove Visitor too. 
* Web console: Display compaction status (#10438) * init compaction status * % compacted * final UI tweaks * extracted utils, added tests * add tests to general foramt functions * Adding task slot count metrics to Druid Overlord (#10379) * Adding more worker metrics to Druid Overlord * Changing the nomenclature from worker to peon as that represents the metrics that we want to monitor better * Few more instance of worker usage replaced with peon * Modifying the peon idle count logic to only use eligible workers available capacity * Changing the naming to task slot count instead of peon * Adding some unit test coverage for the new test runner apis * Addressing Review Comments * Modifying the TaskSlotCountStatsProvider apis so that overlords which are not leader do not emit these metrics * Fixing the spelling issue in the docs * Setting the annotation Nullable on the TaskSlotCountStatsProvider methods * RowBasedIndexedTable: Add specialized index types for long keys. (#10430) * RowBasedIndexedTable: Add specialized index types for long keys. Two new index types are added: 1) Use an int-array-based index in cases where the difference between the min and max values isn't too large, and keys are unique. 2) Use a Long2ObjectOpenHashMap (instead of the prior Java HashMap) in all other cases. In addition: 1) RowBasedIndexBuilder, a new class, is responsible for picking which index implementation to use. 2) The IndexedTable.Index interface is extended to support using unboxed primitives in the unique-long-keys case, and callers are updated to use the new functionality. Other key types continue to use indexes backed by Java HashMaps. * Fixup logic. * Add tests. * vectorize constant expressions with optimized selectors (#10440) * Web console: switch to switches instead of checkboxes (#10454) * switch to switches * add img alt * add relative * change icons * update snapshot * Fix the offset setting in GoogleStorage#get (#10449) * Fix the offset in get of GCP object * upgrade compute dependency * fix version * review comments * missed * Fix the task id creation in CompactionTask (#10445) * Fix the task id creation in CompactionTask * review comments * Ignore test for range partitioning and segment lock * Web console reindexing E2E test (#10453) Add an E2E test for the web console workflow of reindexing a Druid datasource to change the secondary partitioning type. The new test changes dynamic to single dim partitions since the autocompaction test already does dynamic to hashed partitions. Also, run the web console E2E tests in parallel to reduce CI time and change naming convention for test datasources to make it easier to map them to the corresponding test run. Main changes: 1) web-consolee2e-tests/reindexing.spec.ts - new E2E test 2) web-console/e2e-tests/component/load-data/data-connector/reindex.ts - new data loader connector for druid input source 3) web-console/e2e-tests/component/load-data/config/partition.ts - move partition spec definitions from compaction.ts - add new single dim partition spec definition * Fix UI datasources view edit action compaction (#10459) Restore the web console's ability to view a datasource's compaction configuration via the "action" menu. Refactoring done in https://github.com/apache/druid/pull/10438 introduced a regression that always caused the default compaction configuration to be shown via the "action" menu instead. Regression test is added in e2e-tests/auto-compaction.spec.ts. 
* Allow using jsonpath predicates with AvroFlattener (#10330) * Improve UI E2E test usability (#10466) - Update playwright to latest version - Provide environment variable to disable/enable headless mode - Allow running E2E tests against any druid cluster running on standard ports (tutorial-batch.spec.ts now uses an absolute instead of relative path for the input data) - Provide environment variable to change target web console port - Druid setup does not need to download zookeeper * Web console: fix lookup edit dialog version setting (#10461) * fix lookup edit dialog * update snapshots * clean up test * fix array types from escaping into wider query engine (#10460) * fix array types from escaping into wider query engine * oops * adjust * fix lgtm * Update version to 0.21.0-SNAPSHOT (#10450) * [maven-release-plugin] prepare release druid-0.21.0 * [maven-release-plugin] prepare for next development iteration * Update web-console versions * Test UI to trigger auto compaction (#10469) In the web console E2E tests, Use the new UI to trigger auto compaction instead of calling the REST API directly so that the UI is covered by tests. * adjustments to Kafka integration tests to allow running against Azure Event Hubs streams (#10463) * adjustments to kafka integration tests to allow running against azure event hubs in kafka mode * oops * make better * more better * vectorized group by support for nullable numeric columns (#10441) * vectorized group by support for numeric null columns * revert unintended change * adjust * review stuffs * Close aggregators in HashVectorGrouper.close() (#10452) * Close aggregators in HashVectorGrouper.close() * reuse grouper * Add missing dependency * Web console: Don't include realtime segments in size calculations. (#10482) It's always zero, and so it messes up averages, mins, and counts. * Fix compaction task slot computation in auto compaction (#10479) * Fix compaction task slot computation in auto compaction * add tests for task counting * Improve test (#10480) * Web console: fix compaction status when no compaction config, and small cleanup (#10483) * move timed button to icons * cleanup redundant logic * fix compaction status text * remove extra style * Fix Avro support in Web Console (#10232) * Fix Avro OCF detection prefix and run formation detection on raw input * Support Avro Fixed and Enum types correctly * Check Avro version byte in format detection * Add test for AvroOCFReader.sample Ensures that the Sampler doesn't receive raw input that it can't serialize into JSON. * Document Avro type handling * Add TS unit tests for guessInputFormat * Suppress CVE-2018-11765 for hadoop dependencies (#10485) * Update README.md (#10357) Compile scss files before npm start. * …
Description
This PR begins to refactor the Druid internal column type system, in an effort to one day live in a world where we can have complete information about the types of all columns taking part in a query - at all layers, i.e. a complete RowSignature and ColumnCapabilities.

tl;dr: this PR consolidates the ValueType enums, adds type information to PostAggregator, and adds finalized type information to AggregatorFactory.

The two separate ValueType enumerations have been consolidated and now live in druid-core. The methods in ValueType which required druid-processing have been migrated into static methods in a new ValueTypes class, whose makeNumericWrappingDimensionSelector and makeNewSettableColumnValueSelector helpers take a ValueType and produce the appropriate selector. There might be a better home for this, perhaps RowSignature or .. somewhere?
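For illustration only, a rough sketch of the shape such a helper can take; the exact signature and the selector classes used here are assumptions about Druid internals rather than the code in this PR:

```java
// Sketch of a ValueTypes-style static helper (assumed signature, not the exact PR code).
public static SettableColumnValueSelector<?> makeNewSettableColumnValueSelector(final ValueType valueType)
{
  switch (valueType) {
    case LONG:
      return new SettableLongColumnValueSelector();
    case FLOAT:
      return new SettableFloatColumnValueSelector();
    case DOUBLE:
      return new SettableDoubleColumnValueSelector();
    default:
      // STRING, COMPLEX, and the new array types fall back to an object-based selector.
      return new SettableObjectColumnValueSelector<>();
  }
}
```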
PostAggregator now has a new method for reporting its output type (sketched below), and will now correctly populate a RowSignature instead of setting the type to null.
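A rough sketch of what the new method can look like; the exact signature and javadoc here are assumptions rather than the literal code in this PR:

```java
// Assumed sketch of the new PostAggregator type method.
public interface PostAggregator
{
  // ... existing methods: getName(), getDependentFields(), compute(), etc. ...

  /**
   * Output type of this post-aggregator, so that query tool chests can build a
   * RowSignature with a real ValueType instead of null.
   */
  ValueType getType();
}
```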
To support type information on PostAggregator, AggregatorFactory also has 2 new methods (sketched below), so that finalized type information is available to correctly compute the output type of FinalizingFieldAccessPostAggregator. This also means that finalized type information for aggregators could be available in a finalized view of a RowSignature, but I have not made this modification yet.

AggregatorFactory.getTypeName has been renamed to getComplexTypeName and transitioned to be used exclusively for complex type serde lookup.

Because a lot of postaggs produce double typed arrays in their results, and in anticipation of what I think might be a good idea to do next, which is to merge ValueType and ExprType, I went ahead and added array types to the new and improved ValueType enumeration (the complete set is sketched below). This is perhaps controversial and worth discussion, but it also doesn't seem to introduce any ill effects as far as I can tell.
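A rough sketch of what the two new AggregatorFactory methods and the renamed getComplexTypeName can look like; the signatures here are assumptions rather than the literal code in this PR:

```java
// Assumed sketch of the AggregatorFactory additions.
public abstract class AggregatorFactory
{
  // ... existing methods ...

  /** Type of the values produced by this aggregator before finalization. */
  public abstract ValueType getType();

  /** Type of the values produced once finalizeComputation() has been applied. */
  public abstract ValueType getFinalizedType();

  /** Formerly getTypeName(); now used only to look up the ComplexMetricSerde for COMPLEX types. */
  @Nullable
  public String getComplexTypeName()
  {
    return null;
  }
}
```

And, assuming the array additions described above, the consolidated ValueType enumeration presumably looks roughly like this:

```java
// Approximate sketch of the consolidated ValueType enum.
public enum ValueType
{
  DOUBLE,
  FLOAT,
  LONG,
  STRING,
  DOUBLE_ARRAY,
  LONG_ARRAY,
  STRING_ARRAY,
  COMPLEX
}
```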
Since this PR adds types for almost every other type of column, we should now be able to have nearly complete type information for all inputs when we are constructing an Expr for ExpressionVirtualColumn, ExpressionPostAggregator, ExpressionFilter, and anything else using Expr. This means that as a follow-up we should also be able to allow an Expr to compute its output ValueType based on its inputs, making explicit user specification no longer necessary. I suspect we could also use it for type-specific optimizations to help make expressions faster, and more.
Key changed/added classes in this PR

- ValueType + ValueType = ValueType
- RowSignature
- PostAggregator and AggregatorFactory
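As a usage illustration, here is a hypothetical helper (not code from this PR) showing how a query tool chest could now populate a result RowSignature with real types for aggregators and post-aggregators instead of null, using the new methods described above:

```java
import java.util.List;

import org.apache.druid.query.aggregation.AggregatorFactory;
import org.apache.druid.query.aggregation.PostAggregator;
import org.apache.druid.segment.column.RowSignature;

public class ResultSignatures
{
  // Hypothetical helper: build a result signature using the new type information.
  public static RowSignature create(
      List<AggregatorFactory> aggregators,
      List<PostAggregator> postAggregators,
      boolean finalizeAggregations
  )
  {
    final RowSignature.Builder builder = RowSignature.builder().addTimeColumn();
    for (AggregatorFactory factory : aggregators) {
      // use the finalized type when results will be finalized, otherwise the intermediate type
      builder.add(factory.getName(), finalizeAggregations ? factory.getFinalizedType() : factory.getType());
    }
    for (PostAggregator postAggregator : postAggregators) {
      builder.add(postAggregator.getName(), postAggregator.getType());
    }
    return builder.build();
  }
}
```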