chore: deleted env.js
Pinned
Activity
julienvincent pushed to julienvincent/dotfiles
commit sha: b92118270bbda34ec4e8df6e5d1eea50f4d320ac
push time: 2 weeks ago

julienvincent published release 0.0.4 in dsyncapp/dsync

julienvincent pushed to dsyncapp/dsync
commit sha: 7c6d0d4a1510391e93c259ab1417d4163e630865
push time: 3 weeks ago

julienvincent pushed to dsyncapp/dsync
commit sha: 0a264ceb18c7cebac1b024ddd42183904d6698cc
push time: 3 weeks ago

julienvincent opened a pull request in ohmyzsh/ohmyzsh
Added vars for setting vi-mode cursor styles
Standards checklist:
- The PR title is descriptive.
- The PR doesn't replicate another PR which is already open (not that I could see).
- I have read the contribution guide and followed all the instructions.
- The code follows the code style guide detailed in the wiki.
- The code is mine or it's from somewhere with an MIT-compatible license.
- The code is efficient, to the best of my ability, and does not waste computer resources.
- The code is stable and I have tested it myself, to the best of my abilities.
Changes:
Added the following vi-mode variables, which can be used to control the cursor style depending on the currently active vi mode:
- VI_MODE_CURSOR_NORMAL
- VI_MODE_CURSOR_VISUAL
- VI_MODE_CURSOR_INSERT
- VI_MODE_CURSOR_OPPEND

These default to the values used prior to this change and only apply if the existing variable VI_MODE_SET_CURSOR=true is set.
I have added this functionality after copy-pasting the vi-mode plugin to my custom dir and changing the default cursor styles. I really like a cursor indicator to help me keep track of what mode I am in, but I much prefer the underline when in insert mode.
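For context, a minimal `~/.zshrc` sketch of how these variables could be used. The numeric values below are an assumption for illustration (DECSCUSR-style cursor codes), not necessarily the plugin's actual defaults; check the plugin's README for the accepted values.

```shell
# Hypothetical ~/.zshrc snippet using the variables from this PR.
# The numeric values are illustrative DECSCUSR-style codes
# (2 = steady block, 4 = steady underline, 6 = steady bar).
VI_MODE_SET_CURSOR=true        # required for any of the cursor vars to apply
VI_MODE_CURSOR_NORMAL=2        # steady block in normal mode
VI_MODE_CURSOR_VISUAL=6        # steady bar in visual mode
VI_MODE_CURSOR_INSERT=4        # steady underline in insert mode
VI_MODE_CURSOR_OPPEND=0        # blinking block in operator-pending mode

plugins+=(vi-mode)
```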
julienvincent pushed to julienvincent/ohmyzsh
commit sha: 0ffcfadcffa4bfad98fddbdce55dd78bfd16ae8a
push time: 1 month ago

julienvincent pushed to julienvincent/ohmyzsh
commit sha: 5ccace207a944c0ad6a1585ed548c8220b2fc380
push time: 1 month ago

julienvincent created branch feature/vi-mode-cursors in julienvincent/ohmyzsh

julienvincent forked ohmyzsh/ohmyzsh

julienvincent pushed to julienvincent/dotfiles
commit sha: 62c8a297cf35bb25aa7ffe61e469178955b3fe83
push time: 1 month ago

julienvincent merged a pull request in journeyapps-platform/delete-old-packages
[Fix] Delete package query
Simply changing the type to ID! did not solve the type issue:
UnhandledPromiseRejectionWarning: GraphqlError: Type mismatch on variable $package_id and argument packageVersionId (String! / ID!)
See failed run
We are POST'ing to the endpoint itself, which might get rate-limited.
Tested in runtime-components using 1.0.3-dev1
See run
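To illustrate the type mismatch above, here is a sketch of what a corrected request body could look like. The mutation shape, example ID, and endpoint handling are assumptions for illustration, not taken from the action's source; the point is only that the variable must be declared ID! to match the argument's ID! type.

```javascript
// Hypothetical sketch of a deletePackageVersion GraphQL request body.
// Declaring the variable as String! while the argument expects ID! produces:
// "Type mismatch on variable $package_id and argument packageVersionId (String! / ID!)"
const query = `
  mutation deletePackageVersion($package_id: ID!) {
    deletePackageVersion(input: { packageVersionId: $package_id }) {
      success
    }
  }
`;

const body = JSON.stringify({
  query,
  variables: { package_id: 'example-version-id' }, // illustrative ID
});

// POST'ing this body to the GraphQL endpoint is the call that may get rate-limited.
console.log(body.includes('ID!')); // true
```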
julienvincent commented on an issue in tulios/kafkajs
#683 Improve concurrency
Notable changes:
- Removed the current concurrency + barrier logic for batch processing
- Created RunnerPool, which spawns `partitionsConsumedConcurrently` independent runners
- Created FetchManager, which fetches from different brokers independently and assigns partitions to the runners during the consumer group sync
- Each connection now opens two sockets: one for fetch requests, the other for everything else (TCP sockets preserve the order of messages sent to a broker, so other concurrent requests, e.g. heartbeats and offset fetch/commit, from other runners would wait behind the fetch polls and drastically slow down consumption in the concurrent scenario)
- Removed memory leaks from hanging consumers in tests
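The RunnerPool idea described above can be sketched roughly as follows. The class name matches the PR, but the internals here are illustrative assumptions, not the actual kafkajs implementation: each partition is pinned to one of N independent runners, so a slow partition only stalls its own runner.

```javascript
// Illustrative sketch, not kafkajs internals: N runners each drain their
// own queue of batches, so slow handlers don't block other runners.
class RunnerPool {
  constructor(concurrency, handler) {
    this.queues = Array.from({ length: concurrency }, () => []);
    this.handler = handler;
  }

  // Pin a partition to a fixed runner so per-partition ordering is preserved
  assign(partition, batch) {
    this.queues[partition % this.queues.length].push(batch);
  }

  // Each runner consumes its queue at its own pace, concurrently
  async run() {
    await Promise.all(
      this.queues.map(async (queue) => {
        while (queue.length > 0) {
          await this.handler(queue.shift());
        }
      })
    );
  }
}

// usage sketch
const processed = [];
const pool = new RunnerPool(2, async (batch) => processed.push(batch));
pool.assign(0, 'batch-p0');
pool.assign(1, 'batch-p1');
const done = pool.run();
```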
Manually compared consumption times with the scripts below:
Producer:
```javascript
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ brokers: ['localhost:9092'] });
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  // const admin = kafka.admin();
  // await admin.connect();
  // await admin.deleteTopics({ topics: ['loadtest'] });
  // await admin.createTopics({ topics: [{ topic: 'loadtest', numPartitions: 3, replicationFactor: 1 }] });
  // await admin.disconnect();

  const producer = kafka.producer();
  await producer.connect();

  const batchSize = 100;
  for (let i = 0; i < 5; i++) {
    const messages = Array(batchSize)
      .fill()
      .map((_, index) => ({
        key: `key-${batchSize * i + index}`,
        value: `value-${batchSize * i + index}`,
      }));
    await producer.send({ topic: 'loadtest', messages });
    await sleep(1000);
  }

  await producer.disconnect();
  console.log('Published!');
})();
```
Consumer:
```javascript
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ brokers: ['localhost:9092'] });
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  const metrics = {};
  const consumer = kafka.consumer({ groupId: 'tulios-consumer', maxWaitTimeInMs: 5000 });
  await consumer.connect();

  process.on('SIGINT', () => {
    Object.entries(metrics).forEach(([partition, { startTime, endTime }]) => {
      console.log(`Partition ${partition} processed in ${endTime - startTime} ms`);
    });
    return consumer.disconnect();
  });

  await consumer.subscribe({ topic: 'loadtest' });
  await consumer.run({
    partitionsConsumedConcurrently: 3,
    eachMessage: async ({ partition, message: { value } }) => {
      if (!metrics[partition]) metrics[partition] = { startTime: Date.now() };
      if (partition === 0) await sleep(100);
      console.log(partition, value.toString());
      metrics[partition].endTime = Date.now();
    },
  });
})();
```
Slower runners now consume their batches at their own pace, and other fetch requests are not blocked. Partitions 2 and 3 were consumed in ~70ms (minus 4s from delays), compared to the current master version, which took ~9s (minus 4s from delays).
We have not seen any out-of-the-ordinary issues with our deployments running this code, and performance + latency has been very good - as expected!
julienvincent pushed to julienvincent/ts-codec
commit sha: 98b8cbb67285ad41bf67eb2c371752d9e38ab6ee
push time: 2 months ago

julienvincent published release 0.0.3 in dsyncapp/dsync

julienvincent pushed to dsyncapp/dsync
commit sha: d9ba227301a12661010357d99360e1e9d866e9bb
push time: 2 months ago

julienvincent pushed to dsyncapp/dsync
commit sha: 18964fbc0704c2f80eedc5eb65332975d07d003e
push time: 2 months ago

julienvincent published release 0.0.3 in dsyncapp/dsync

julienvincent pushed to dsyncapp/dsync
commit sha: e008f587b09bb04d5cc05e630533577b66fd8bb3
push time: 2 months ago

julienvincent pushed to dsyncapp/dsync
commit sha: 1e38d79fce9796cd4c95be7648a1ef291ae9e9ee
push time: 2 months ago
Rewrite cached-authenticator in Rust