I’ve got two documents that I want to join in a N1QL query. One is a JSON document and the other is a binary key/value document.
JSON type:
key: 235234_meta
value:
{
"id": 235234,
"title": "Something..."
}
Key/Value type:
key: pageView_235234_metric
value: 49
I want a query that returns something like this, sorted by the “metric” value:
[
{
"id": 235234,
"title": "Something...",
"pageViews": 49
},
{
"id": 535234,
"title": "Else...",
"pageViews": 42
}
]
I’ve tried this approach but my subquery doesn’t seem to be working:
SELECT *
FROM mybucket p
WHERE EXISTS(
SELECT *
FROM mybucket
USE KEYS "pageView_" || TOSTRING(p.id) || "_metric"
);
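Conceptually, this is roughly what I’m after: a sketch with a correlated subquery that pulls the metric into the projection (assuming a metric document always exists for each id):
SELECT p.id, p.title,
       (SELECT RAW v
        FROM mybucket v
        USE KEYS "pageView_" || TOSTRING(p.id) || "_metric")[0] AS pageViews
       /* [0] takes the single value out of the subquery's result array */
FROM mybucket p
WHERE p.id IS NOT NULL;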
Use this. If the performance does not meet your needs, we can rewrite the query to use an index for the ORDER BY.
SELECT m.id, m.title, p AS pageViews
FROM mybucket m JOIN mybucket p ON KEYS "pageView_" || TOSTRING(m.id) || "_metric"
ORDER BY p;
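A couple of optional variants, as sketches: if some meta documents might not have a matching metric document, a LEFT JOIN keeps them with a default of 0, and DESC sorts the highest counts first as in your example output.
SELECT m.id, m.title, IFMISSING(p, 0) AS pageViews
FROM mybucket m LEFT JOIN mybucket p
  ON KEYS "pageView_" || TOSTRING(m.id) || "_metric"
ORDER BY pageViews DESC;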
Not getting any results with that. I think there’s something wrong with the concatenation in the ON KEYS
clause, because even when I run a simple query with a hard-coded number I get no results:
cbq> select * from mybucket use keys "pageView_" || TOSTRING(93598616) || "_metric";
{
"requestID": "ed82f006-fe89-4b0a-a4ce-b3a65a608eda",
"signature": {
"*": "*"
},
"results": [
],
"status": "success",
"metrics": {
"elapsedTime": "1.657958ms",
"executionTime": "1.609221ms",
"resultCount": 0,
"resultSize": 0
}
}
The same query without the concatenation works fine though:
cbq> select * from mybucket use keys "pageView_93598616_metric";
{
"requestID": "05d77a76-df06-4404-b503-417c942fbe1d",
"signature": {
"*": "*"
},
"results": [
{
"mybucket": 6
}
],
"status": "success",
"metrics": {
"elapsedTime": "1.065421ms",
"executionTime": "1.025507ms",
"resultCount": 1,
"resultSize": 33
}
}
Is the syntax with the concatenation wrong?
This also works, so perhaps it’s a problem with converting the numeric “id” value to a string?
select * from mybucket use keys "pageView_" || "93598616" || "_metric";
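A quick way to isolate whether TOSTRING() is the culprit might be to look at what it actually produces (just a diagnostic sketch):
select TOSTRING(93598616) AS converted,
       "pageView_" || TOSTRING(93598616) || "_metric" AS built_key;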
Can you post the results of this?
SELECT TOSTRING(m.id) AS id, "pageView_" || TOSTRING(m.id) || "_metric" AS pageView
FROM mybucket AS m
WHERE m.id IS NOT NULL;
cbq> SELECT TOSTRING(m.id) AS id, "pageView_" || TOSTRING(m.id) || "_metric" AS pageView
> FROM mybucket
> WHERE m.id IS NOT NULL;
{
"requestID": "21021fc3-4745-402f-bff6-32bcba4964bb",
"signature": {
"id": "string",
"pageView": "string"
},
"results": [
],
"status": "success",
"metrics": {
"elapsedTime": "17.457039ms",
"executionTime": "17.406799ms",
"resultCount": 0,
"resultSize": 0
}
}
Sorry, there was a typo in my query. Please run my edited query.
{
"requestID": "0a820a4f-42e7-4b5e-acd7-e369fe979e6e",
"signature": {
"id": "string",
"pageView": "string"
},
"results": [
{
"id": "9.3633154e+07",
"pageView": "pageView_9.3633154e+07_metric"
},
{
"id": "9.3598616e+07",
"pageView": "pageView_9.3598616e+07_metric"
},
{
"id": "9.3637864e+07",
"pageView": "pageView_9.3637864e+07_metric"
}
],
"status": "success",
"metrics": {
"elapsedTime": "15.934513ms",
"executionTime": "15.888204ms",
"resultCount": 3,
"resultSize": 306
}
}
Perhaps it’s something about how the id seems to be stored in exponential format?
Am I correct in understanding the only way to solve this is to upgrade to 4.1?
It is not stored in exponential format; the issue is that the query engine processes numbers as floating point, so TOSTRING() renders 93598616 as "9.3598616e+07" (as your results above show) and the concatenated key never matches the stored key. You need to upgrade to 4.5.1.
4.1.1 might work if the numbers stay within the range a double can represent exactly (up to 2^53), but 4.5.1 will handle all integers within range.
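If upgrading is not immediately possible, one workaround (a sketch, assuming every meta document key follows the <id>_meta pattern) is to build the lookup key from the document key instead of the numeric field, so no number-to-string conversion is involved:
SELECT m.id, m.title, p AS pageViews
FROM mybucket m JOIN mybucket p
  ON KEYS REPLACE(META(m).id, "_meta", "_metric")
ORDER BY p;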