
Commit 35a8ba1: "update"

1 parent 603b864

29 files changed: +715 additions, -44 deletions

Lines changed: 11 additions & 7 deletions
@@ -1,13 +1,17 @@
 ---
 tags:
-  - Awesome
+  - Awesome
 ---
 
 # AI Agent Awesome
 
-- https://github.com/a2aproject/A2A
-- https://github.com/microsoft/autogen
-- https://github.com/DavidZWZ/Awesome-Deep-Research
-- https://github.com/siteboon/claudecodeui
-- https://github.com/eyaltoledano/claude-task-master
-- https://github.com/sugyan/claude-code-webui
+- [a2aproject/A2A](https://github.com/a2aproject/A2A)
+- [microsoft/autogen](https://github.com/microsoft/autogen)
+- [DavidZWZ/Awesome-Deep-Research](https://github.com/DavidZWZ/Awesome-Deep-Research)
+- [siteboon/claudecodeui](https://github.com/siteboon/claudecodeui)
+  - GPLv3, JS, React
+  - Cannot accept/deny tool calls
+- [eyaltoledano/claude-task-master](https://github.com/eyaltoledano/claude-task-master)
+  - MIT, JS, TS
+- [sugyan/claude-code-webui](https://github.com/sugyan/claude-code-webui)
+  - MIT, TS, React

notes/ai/dev/ai-dev-awesome.md

Lines changed: 1 addition & 0 deletions
@@ -112,6 +112,7 @@ tags:
 - npm:ai
   - Build AI-powered applications with React, Svelte, Vue, and Solid
   - https://ai-sdk.dev/providers/openai-compatible-providers
+  - https://github.com/vercel/ai/tree/main/packages/provider/src/language-model
 - [moeru-ai/xsai](https://github.com/moeru-ai/xsai)
 - [openai/openai-agents-js](https://github.com/openai/openai-agents-js)
   - MIT, TS

notes/ai/dev/cuda.md

Lines changed: 7 additions & 0 deletions
@@ -39,3 +39,10 @@ lsmod | grep nvidia
 # reload
 nvidia-smi
 ```
+
+
+```
+rmmod: ERROR: Module nvidia_uvm is in use
+```
+
+- Remember to close nvtop first

notes/ai/dev/ollama.md

Lines changed: 1 addition & 0 deletions
@@ -14,6 +14,7 @@ title: ollama
 - Clients
   - [sgomez/ollama-ai-provider](https://github.com/sgomez/ollama-ai-provider)
     - provider for vercel ai
+- OpenAI compatibility https://docs.ollama.com/api/openai-compatibility
 - https://ollama.ai/library
 - Default address http://localhost:11434
   - OLLAMA_HOST

notes/ai/model/qwen.md

Lines changed: 6 additions & 0 deletions
@@ -36,6 +36,12 @@ tags:
   - Progressive context scaling
   - Dataset: [DocQA-RL-1.6K](https://huggingface.co/datasets/Tongyi-Zhiwen/DocQA-RL-1.6K)
 
+## Qwen3 VL
+
+- https://github.com/QwenLM/Qwen3-VL
+  - Switched back to absolute coordinates
+    - https://github.com/QwenLM/Qwen3-VL/issues/1623
+
 ## Qwen 3 Embedding
 
 - [QwenLM/Qwen3-Embedding](https://github.com/QwenLM/Qwen3-Embedding)

notes/ai/service/codex.md

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
+---
+title: Codex
+tags:
+  - Agent
+  - CLI
+---
+
+# Codex
+
+- [openai/codex](https://github.com/openai/codex)
+
+```bash
+npm i -g @openai/codex
+brew install codex
+
+codex --version
+
+# OPENAI_API_KEY is written to auth.json on the first run; edit auth.json to change it afterwards
+OPENAI_BASE_URL=https://api.openai.com/v1 OPENAI_API_KEY=sk-proj-1234567890 codex --model deepseek-v3.2-exp
+
+# Use a custom provider; the env var name is defined by env_key
+MY_PROVIDER_KEY=xyz codex
+# Use the built-in OPENAI provider; the env var sets base_url
+OPENAI_BASE_URL=xyz codex
+
+# ~/.codex/log/codex-tui.log
+RUST_LOG=debug codex
+# trace level also includes requests
+RUST_LOG=trace codex
+```
+
+## Config {#config}
+
+Codex supports several mechanisms for setting configuration values:
+
+- Config-specific command-line flags, e.g. `--model o3` (highest priority).
+- The generic `-c`/`--config` flag, which takes `key=value` pairs, e.g. `--config model="o3"`
+- CODEX_HOME=~/.codex
+  - $CODEX_HOME/config.toml
+- References
+  - https://github.com/openai/codex/issues/2760
+
+```toml
+model=gpt-5-codex # model
+model_provider=openai
+# approval policy
+# untrusted, on-failure, on-request, never
+approval_policy=untrusted
+# sandbox policy
+# read-only, workspace-write, danger-full-access
+sandbox_mode=read-only
+
+# active profile
+profiles=o3
+
+# MCP servers
+mcp_servers=mcp-server
+
+# environment variables passed to subprocesses
+shell_environment_policy=inherit=all
+
+# built-in openai provider
+[model_providers.openai]
+name = "OpenAI"
+# OPENAI_BASE_URL
+base_url = "https://api.openai.com/v1"
+env_key = "OPENAI_API_KEY"
+# whether to use v1/chat/completions or v1/responses
+# chat, responses
+wire_api = "chat"
+
+# request settings
+query_params = {}
+http_headers = { }
+
+env_http_headers = { "OpenAI-Organization" = "OPENAI_ORGANIZATION", "OpenAI-Project" = "OPENAI_PROJECT" }
+
+request_max_retries = 4
+stream_max_retries = 5
+# how long to wait for streaming responses
+stream_idle_timeout_ms = 300000
+
+[model_providers.ollama]
+name = "Ollama"
+base_url = "http://localhost:11434/v1"
+
+[model_providers.azure]
+name = "Azure"
+base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
+env_key = "AZURE_OPENAI_API_KEY"
+query_params = { api-version = "2025-04-01-preview" }
+wire_api = "responses"
+```
+
+| Variable | Description |
+| --- | --- |
+| `OPENAI_API_KEY` | Your OpenAI API key. |
+| `CODEX_HOME` | Directory for logs, config, and other data. Defaults to `~/.codex`. |
+| `CODEX_API_KEY` | API key, only supported by `codex exec`. |
+| `RUST_LOG` | Configures logging behavior (e.g. `codex_core=info,codex_tui=info`). |
+| `<PROVIDER>_API_KEY` | Provider-specific API key (e.g. `MISTRAL_API_KEY`). |
+| `<PROVIDER>_BASE_URL` | Base URL for a custom provider. |
+
+## auth.json
+
+- OPENAI_API_KEY
+- tokens
+  - id_token
+    - email
+    - chatgpt_plan_type
+      - Free, Plus, Pro, Team, Business, Enterprise, Edu
+    - raw_jwt
+  - access_token
+  - refresh_token
+  - account_id
+- last_refresh
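
For reference, the field list above can be read as roughly the following TypeScript shape. The nesting, optionality, and the parsed `id_token` layout are assumptions inferred from the bullet list, not a schema published by Codex.

```ts
// Approximate shape of ~/.codex/auth.json, inferred from the bullet list above.
// Optionality and nesting here are assumptions, not the official schema.
interface IdTokenInfo {
  email?: string;
  chatgpt_plan_type?: "Free" | "Plus" | "Pro" | "Team" | "Business" | "Enterprise" | "Edu";
  raw_jwt?: string; // the original JWT string
}

interface AuthJson {
  OPENAI_API_KEY?: string; // present when authenticated with an API key
  tokens?: {
    id_token: IdTokenInfo;
    access_token: string;
    refresh_token: string;
    account_id: string;
  };
  last_refresh?: string; // when the tokens were last refreshed
}
```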

notes/culture/nobel-prize.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+---
+title: Nobel Prize
+---
+
+# Nobel Prize
+
+- since 1901
+- https://www.nobelprize.org/prizes/lists/all-nobel-prizes/
+- [List of Nobel laureates](https://zh.wikipedia.org/zh-hans/诺贝尔奖得主列表)
+- https://github.com/16131zzzzzzzz/EveryoneNobel

notes/dev/circuit-breaker.md

Lines changed: 27 additions & 0 deletions
@@ -11,6 +11,7 @@ title: 熔断
 - Closed -> Open -> Half-Open -> Closed
 - Open - blocks requests
   - HTTP 5XX
+- ACA - Automated Canary Analysis
 - Java: Resilience4j, Netflix Hystrix (now in maintenance mode)
 - .NET: Polly
 - Golang
@@ -28,3 +29,29 @@ title: 熔断
   - Apache-2.0, JS
 - https://github.com/netflix/hystrix/wiki/how-it-works
 - https://learn.microsoft.com/en-us/azure/architecture/patterns/bulkhead
+- https://github.com/spinnaker/kayenta
+  - Deployment, Measurement, Judging, Scoring & Decision
+  - Mann-Whitney U test
+- https://github.com/argoproj/argo-rollouts
+
+```mermaid
+stateDiagram-v2
+    [*] --> CLOSED: Initial State
+    CLOSED --> OPEN: Failure Threshold Reached
+    OPEN --> HALF_OPEN: Reset Timeout Elapsed
+    HALF_OPEN --> CLOSED: Trial Request Succeeds
+    HALF_OPEN --> OPEN: Trial Request Fails
+```
+
+| Parameter | Name | Description |
+| --- | --- | --- |
+| failureThreshold | Failure threshold | Condition that trips the breaker, usually a percentage (e.g. failure rate above 50%). |
+| slidingWindowSize | Sliding window size | Number of samples used to compute the failure rate; a window of 100 means the breaker looks at the most recent 100 requests. |
+| minimumRequests | Minimum request count | The window must hold at least this many requests before the failure rate is evaluated, so one or two stray failures at low traffic do not trip the breaker. |
+| resetTimeout | Reset timeout | Time (ms) to stay in OPEN before switching to HALF-OPEN. |
+| fallback | Fallback function | Function executed when a request is short-circuited; may return a cached, default, or simplified result. |
+| halfOpenTimeout | Half-open request timeout | Timeout for the trial request in HALF-OPEN; if the trial request itself hangs, it counts as a failure. |
+
+- Sliding buckets
+  - sliding window
+  - each bucket keeps its own, independent counts
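
To make the parameters above concrete, here is a minimal TypeScript sketch of the state machine described by the diagram and the table. It is illustrative only (not the API of Resilience4j, Polly, or opossum), and it omits halfOpenTimeout for brevity.

```ts
type State = "CLOSED" | "OPEN" | "HALF_OPEN";

interface Options {
  failureThreshold: number;  // e.g. 0.5 = open when more than 50% of sampled calls fail
  slidingWindowSize: number; // number of recent calls used to compute the failure rate
  minimumRequests: number;   // do not evaluate the rate until this many samples exist
  resetTimeout: number;      // ms to stay OPEN before allowing a HALF_OPEN trial
  fallback?: () => unknown;  // value returned while the circuit is open
}

class CircuitBreaker {
  private state: State = "CLOSED";
  private window: boolean[] = []; // true = failure
  private openedAt = 0;

  constructor(private opts: Options) {}

  async call<T>(fn: () => Promise<T>): Promise<T | unknown> {
    if (this.state === "OPEN") {
      if (Date.now() - this.openedAt >= this.opts.resetTimeout) {
        this.state = "HALF_OPEN"; // allow a single trial request
      } else if (this.opts.fallback) {
        return this.opts.fallback(); // short-circuit with the fallback
      } else {
        throw new Error("circuit open");
      }
    }
    try {
      const result = await fn();
      this.record(false);
      return result;
    } catch (err) {
      this.record(true);
      throw err;
    }
  }

  private record(failed: boolean) {
    if (this.state === "HALF_OPEN") {
      // the trial request decides: success closes the circuit, failure re-opens it
      this.state = failed ? "OPEN" : "CLOSED";
      if (failed) this.openedAt = Date.now();
      this.window = [];
      return;
    }
    this.window.push(failed);
    if (this.window.length > this.opts.slidingWindowSize) this.window.shift();
    if (this.window.length < this.opts.minimumRequests) return;
    const failureRate = this.window.filter(Boolean).length / this.window.length;
    if (failureRate > this.opts.failureThreshold) {
      this.state = "OPEN";
      this.openedAt = Date.now();
    }
  }
}

// Usage: wrap the protected call, e.g.
// const breaker = new CircuitBreaker({ failureThreshold: 0.5, slidingWindowSize: 100, minimumRequests: 10, resetTimeout: 30_000, fallback: () => "cached" });
// await breaker.call(() => fetch("https://example.com").then((r) => r.json()));
```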

notes/dev/design/api/design-api-llm.md

Lines changed: 16 additions & 0 deletions
@@ -110,6 +110,22 @@ tags:
 - References
   - https://docs.together.ai/docs/batch-inference
 
+## Routing
+
+- Price
+- Performance
+  - TTFT
+  - TPOT
+  - E2E Latency
+- Reliability
+  - Error Rate
+  - Uptime/Availability
+
+```
+Find Upstream U that minimizes Score(U)
+Score(U) = w_cost * P(U) + w_latency * L(U) + w_reliability * R(U)
+```
+
 ## Mock
 
 - https://exampleopenaiendpoint-production.up.railway.app/
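
As a rough illustration of the Score(U) formula above, the sketch below normalizes price, latency, and error rate across candidate upstreams and picks the lowest weighted score. The metric names, weights, and sample providers are made up for the example; they are not part of any particular router.

```ts
interface Upstream {
  name: string;
  pricePer1kTokens: number; // P(U): cost
  latencyMs: number;        // L(U): e.g. measured E2E latency or TTFT
  errorRate: number;        // R(U): 0..1
}

// Example weights; lower Score(U) wins, so every metric is "smaller is better".
const weights = { cost: 0.5, latency: 0.3, reliability: 0.2 };

// Normalize each metric to 0..1 across the candidates so the weights are comparable.
function normalize(values: number[]): number[] {
  const max = Math.max(...values) || 1;
  return values.map((v) => v / max);
}

function pickUpstream(upstreams: Upstream[]): Upstream {
  const p = normalize(upstreams.map((u) => u.pricePer1kTokens));
  const l = normalize(upstreams.map((u) => u.latencyMs));
  const r = normalize(upstreams.map((u) => u.errorRate));
  let best = upstreams[0];
  let bestScore = Infinity;
  upstreams.forEach((u, i) => {
    const score = weights.cost * p[i] + weights.latency * l[i] + weights.reliability * r[i];
    if (score < bestScore) {
      bestScore = score;
      best = u;
    }
  });
  return best;
}

// Usage with hypothetical providers:
console.log(
  pickUpstream([
    { name: "provider-a", pricePer1kTokens: 0.002, latencyMs: 900, errorRate: 0.01 },
    { name: "provider-b", pricePer1kTokens: 0.001, latencyMs: 1500, errorRate: 0.03 },
  ]).name,
);
```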

notes/dev/dict.md

Lines changed: 7 additions & 1 deletion
@@ -197,7 +197,6 @@ tags:
 ## Slang {#slang}
 
 - [bruh](https://www.urbandictionary.com/define.php?term=Bruh)
-
   - best answer to literally anything
   - used to mock someone's question as silly
 
@@ -2292,6 +2291,13 @@ try to get the other instance of the resource. In the unfortunate case it might
 - Competitor pricing (Competition): reference the market prices of comparable products.
 - Perceived value (Perceived Value): how much consumers think the product is worth.
 
+## chat vs conversation
+
+- chat (聊天)
+  - emphasizes immediacy and interactivity
+- conversation (对话)
+  - emphasizes context and completeness
+
 ## Common pronunciation mistakes
 
 > Words are just a tool
