Commit 2ec6384

openai-embeddings: enable logs, metrics and traces with EDOT Node.js (#375)
Signed-off-by: Adrian Cole <[email protected]>
1 parent c8f8723 commit 2ec6384

7 files changed (+64 lines, -27 lines)

example-apps/chatbot-rag-app/docker-compose.yml

Lines changed: 5 additions & 12 deletions

@@ -5,17 +5,14 @@ services:
     build:
       context: .
     container_name: ingest-data
-    restart: 'no'
+    restart: 'no' # no need to re-ingest on successive runs
     environment:
-      # host.docker.internal means connect to the host machine, e.g. your laptop
-      ELASTICSEARCH_URL: "http://host.docker.internal:9200"
-      OTEL_EXPORTER_OTLP_ENDPOINT: "http://host.docker.internal:8200"
       FLASK_APP: api/app.py
     env_file:
       - .env
     command: flask create-index
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
+    extra_hosts: # send localhost traffic to the docker host, e.g. your laptop
+      - "localhost:host-gateway"
 
   api-frontend:
     depends_on:
@@ -24,13 +21,9 @@ services:
     container_name: api-frontend
     build:
       context: .
-    environment:
-      # host.docker.internal means connect to the host machine, e.g. your laptop
-      ELASTICSEARCH_URL: "http://host.docker.internal:9200"
-      OTEL_EXPORTER_OTLP_ENDPOINT: "http://host.docker.internal:8200"
     env_file:
       - .env
     ports:
       - "4000:4000"
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
+    extra_hosts: # send localhost traffic to the docker host, e.g. your laptop
+      - "localhost:host-gateway"

example-apps/openai-embeddings/Dockerfile

Lines changed: 4 additions & 0 deletions

@@ -8,5 +8,9 @@ RUN --mount=type=cache,target=/root/.npm \
 USER node
 EXPOSE 3000
 
+# Default to disabling instrumentation; can be overridden to false in
+# docker invocations to re-enable.
+ENV OTEL_SDK_DISABLED=true
+
 ENTRYPOINT ["npm", "run"]
 CMD ["app"]

example-apps/openai-embeddings/README.md

Lines changed: 16 additions & 0 deletions

@@ -109,6 +109,22 @@ npm run app
 Here are some tips for modifying the code for your use case. For example, you
 might want to use your own sample data.
 
+### OpenTelemetry
+
+If you set `OTEL_SDK_DISABLED=false` in your `.env` file, the app will send
+logs, metrics and traces to an OpenTelemetry-compatible endpoint.
+
+[env.example](env.example) defaults to using the Elastic APM server started by
+[docker-compose-elastic.yml](docker-compose-elastic.yml). If you start your
+Elastic stack this way, you can access Kibana at the URL below, authenticating
+with the username "elastic" and password "elastic":
+
+http://localhost:5601/app/apm/traces?rangeFrom=now-15m&rangeTo=now
+
+Under the hood, openai-embeddings is automatically instrumented by the Elastic
+Distribution of OpenTelemetry (EDOT) Node.js. You can read more about
+EDOT Node.js [here](https://github.com/elastic/elastic-otel-node).
+
 ### Using a different source file or document mapping
 
 - Ensure your file contains the documents in JSON format
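
In practice, enabling this means flipping one line in the `.env` you created from env.example (the full set of OTEL_* defaults appears in the env.example diff later in this commit):

    OTEL_SDK_DISABLED=false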

example-apps/openai-embeddings/docker-compose.yml

Lines changed: 5 additions & 11 deletions

@@ -5,15 +5,12 @@ services:
     build:
       context: .
     container_name: generate
-    restart: 'no'
-    environment:
-      # host.docker.internal means connect to the host machine, e.g. your laptop
-      ELASTICSEARCH_URL: "http://host.docker.internal:9200"
+    restart: 'no' # no need to re-ingest on successive runs
     env_file:
       - .env
     command: generate
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
+    extra_hosts: # send localhost traffic to the docker host, e.g. your laptop
+      - "localhost:host-gateway"
 
   app:
     depends_on:
@@ -22,12 +19,9 @@ services:
     container_name: api-frontend
     build:
       context: .
-    environment:
-      # host.docker.internal means connect to the host machine, e.g. your laptop
-      ELASTICSEARCH_URL: "http://host.docker.internal:9200"
     env_file:
       - .env
     ports:
       - "3000:3000"
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
+    extra_hosts: # send localhost traffic to the docker host, e.g. your laptop
+      - "localhost:host-gateway"

example-apps/openai-embeddings/env.example

Lines changed: 19 additions & 0 deletions

@@ -14,3 +14,22 @@ OPENAI_API_KEY=
 # OPENAI_BASE_URL=http://localhost:11434/v1
 # OPENAI_API_KEY=unused
 # EMBEDDINGS_MODEL=all-minilm:33m
+
+# Set to false if you want to record logs, traces and metrics.
+OTEL_SDK_DISABLED=true
+
+# Assign the service name that shows up in Kibana
+OTEL_SERVICE_NAME=openai-embeddings
+
+# Default to sending traces to the Elastic APM server
+OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
+OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
+
+# Export metrics every 3 seconds instead of every minute
+OTEL_METRIC_EXPORT_INTERVAL=3000
+OTEL_METRIC_EXPORT_TIMEOUT=3000
+# Export traces every 3 seconds instead of every 5 seconds
+OTEL_BSP_SCHEDULE_DELAY=3000
+# Change to affect which resources are detected. Note: these
+# choices are specific to the runtime, in this case Node.js.
+OTEL_NODE_RESOURCE_DETECTORS=container,env,host,os,serviceinstance,process,alibaba,aws,azure
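
These are all standard OpenTelemetry environment variables, so they can also be overridden per invocation rather than edited in `.env`. For example, a one-off ingestion run with telemetry enabled (a sketch, using the generate script this commit wires up):

    OTEL_SDK_DISABLED=false npm run generate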

example-apps/openai-embeddings/generate_embeddings.js

Lines changed: 11 additions & 2 deletions

@@ -1,3 +1,4 @@
+const { trace } = require("@opentelemetry/api");
 const fs = require("fs");
 const {
   getElasticsearchClient,
@@ -11,6 +12,8 @@ const {
 const elasticsearchClient = getElasticsearchClient();
 const openaiClient = getOpenAIClient();
 
+const tracer = trace.getTracer("openai-embeddings");
+
 async function maybeCreateIndex() {
   // Check if index exists, if not create it
   indexExists = await elasticsearchClient.indices.exists({
@@ -119,8 +122,14 @@ async function processFile() {
 }
 
 async function run() {
-  await maybeCreateIndex();
-  await processFile();
+  return tracer.startActiveSpan("generate", async (span) => {
+    try {
+      await maybeCreateIndex();
+      await processFile();
+    } finally {
+      span.end();
+    }
+  });
 }
 
 run().catch(console.error);
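
The try/finally guarantees the span ends even when ingestion throws, while the rejection still reaches run().catch(console.error) through the returned promise. Not part of this commit, but a common variant also records the failure on the span before re-throwing; a sketch using the same @opentelemetry/api package:

    const { trace, SpanStatusCode } = require("@opentelemetry/api");

    const tracer = trace.getTracer("openai-embeddings");

    async function run() {
      return tracer.startActiveSpan("generate", async (span) => {
        try {
          await maybeCreateIndex();
          await processFile();
        } catch (err) {
          // Mark the span as failed so the trace shows the error in Kibana
          span.recordException(err);
          span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
          throw err; // still reaches run().catch(console.error)
        } finally {
          span.end(); // always end the span, success or failure
        }
      });
    }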

example-apps/openai-embeddings/package.json

Lines changed: 4 additions & 2 deletions

@@ -7,11 +7,13 @@
     "node": ">=20"
   },
   "scripts": {
-    "app": "node --env-file .env search_app.js",
-    "generate": "node --env-file .env generate_embeddings.js"
+    "app": "node --env-file .env -r @elastic/opentelemetry-node search_app.js",
+    "generate": "node --env-file .env -r @elastic/opentelemetry-node generate_embeddings.js"
   },
   "dependencies": {
+    "@elastic/opentelemetry-node": "*",
     "@elastic/elasticsearch": "^8.17.0",
+    "@opentelemetry/api": "^1.9.0",
     "express": "^4.21.2",
     "hbs": "^4.2.0",
     "openai": "^4.78.1"
