| author | Franck Cuny <franck@fcuny.net> | 2024-12-26 19:01:18 -0800 |
|---|---|---|
| committer | Franck Cuny <franck@fcuny.net> | 2024-12-26 19:01:18 -0800 |
| commit | bf56a9edfcca610bc771e0176f72bbce59fcc87a (patch) | |
| tree | 382908e01dee4992a9566a5859928ee4c10334bb | |
| parent | add back the resume and generate it with nix (diff) | |
| download | fcuny.net-bf56a9edfcca610bc771e0176f72bbce59fcc87a.tar.gz | |
large cleanup
49 files changed, 249 insertions, 1737 deletions
````diff
diff --git a/.github/workflows/check-links.yaml b/.github/workflows/check-links.yaml
index ea20952..279c312 100644
--- a/.github/workflows/check-links.yaml
+++ b/.github/workflows/check-links.yaml
@@ -9,30 +9,34 @@ on:
 jobs:
   lychee:
     runs-on: ubuntu-latest
+    permissions:
+      issues: write
     steps:
       - uses: actions/checkout@v4
-      - uses: DeterminateSystems/nix-installer-action@main
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: DeterminateSystems/nix-installer-action@v16
+      - uses: DeterminateSystems/magic-nix-cache-action@v8
       - name: Build the site
         run: nix build --print-build-logs
       - name: Restore lychee cache
-        uses: actions/cache@v3
+        uses: actions/cache/restore@v4
        with:
          path: .lycheecache
-          key: cache-lychee-${{ hashFiles('**/*.md') }}
-          restore-keys: cache-lychee-
+          key: lychee-cache
       - name: Check links
         id: lychee
-        uses: lycheeverse/lychee-action@v1
+        uses: lycheeverse/lychee-action@v2
        with:
-          args: --verbose --no-progress './result/**/*.html'
-          output: ./lycheeresult.md
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          fail: false
+      - name: Save lychee cache
+        uses: actions/cache/save@v4
+        if: always()
+        with:
+          key: lychee-cache
+          path: .lycheecache
       - name: Create issue
-        if: ${{ github.event_name != 'pull_request' && env.lychee_exit_code != 0 }}
-        uses: peter-evans/create-issue-from-file@v4
+        if: steps.lychee.outputs.exit_code != 0
+        uses: peter-evans/create-issue-from-file@v5
        with:
          title: "[lychee] Broken links"
-          content-filepath: ./lycheeresult.md
+          content-filepath: ./lychee/out.md
          labels: bug, automated issue
````

````diff
diff --git a/.github/workflows/page.yml b/.github/workflows/page.yml
index 84fec1e..33864e3 100644
--- a/.github/workflows/page.yml
+++ b/.github/workflows/page.yml
@@ -18,12 +18,14 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
-      - uses: DeterminateSystems/nix-installer-action@main
-      - uses: DeterminateSystems/magic-nix-cache-action@main
-      - name: flake check
+      - uses: DeterminateSystems/nix-installer-action@v16
+      - uses: DeterminateSystems/magic-nix-cache-action@v8
+      - name: nix check
         run: nix flake check
+      - name: nix fmt
+        run: nix fmt
       - name: Build the site
-        run: nix build --print-build-logs
+        run: nix build
       - name: Upload artifact
         uses: actions/upload-pages-artifact@v3
         with:
````

````diff
diff --git a/content/1password-ssh-agent.md b/content/1password-ssh-agent.md
index 5d5d436..1a32267 100644
--- a/content/1password-ssh-agent.md
+++ b/content/1password-ssh-agent.md
@@ -1,8 +1,6 @@
 +++
 title = "1password's ssh agent and nix"
 date = 2023-12-02
-[taxonomies]
-tags = ["nix"]
 +++

 [A while ago](https://blog.1password.com/1password-ssh-agent/), 1password introduced an SSH agent, and I've been using it for a while now. The following describes how I've configured it with `nix`. All my ssh keys are in 1password, and it's the only ssh agent I'm using at this point.
````
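The hunk above only touches the post's front matter, so the agent configuration itself isn't visible here. As a rough sketch of the kind of setup the post describes (the socket path is 1Password's documented macOS default; treat this as an illustration, not the post's exact `nix` config):

```bash
# Point SSH tooling at the 1Password agent socket (documented macOS default).
export SSH_AUTH_SOCK="$HOME/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"

# If the agent is reachable, this lists the public halves of the keys
# stored in 1password.
ssh-add -l
```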
````diff
diff --git a/content/_index.md b/content/_index.md
index 6c76dec..55d782f 100644
--- a/content/_index.md
+++ b/content/_index.md
@@ -4,8 +4,6 @@ title: "home"

 👋 My name is Franck Cuny and this is my little corner on the web.

-I currently work as a [Site Reliability Engineer](https://en.wikipedia.org/wiki/Site_reliability_engineering) (SRE) at [Roblox](https://www.roblox.com). Previously, I worked as a SRE at [Twitter](https://twitter.com/TwitterEng). My focus is on Platform and large infrastructure.
+I'm a Technical Director at [Roblox](https://www.roblox.com), where I'm focusing on Reliability. Previously, I worked at [Twitter](https://twitter.com/TwitterEng) for close to 8 years, where I was a Sr Staff Site Reliability Engineer. My focus is on Platform and Compute infrastructure.

-I'm interested in building sustainable teams, improving the management and operation of large infrastructure, and to work with different teams to implement best practices around reliability and security.
-
-The simplest way to contact me is via <a href="mailto:franck@fcuny.net">email</a>. Some of my code is hosted <a href="https://git.fcuny.net">there</a>.
+The simplest way to contact me is via <a href="mailto:franck@fcuny.net">email</a>.
````

````diff
diff --git a/content/container-security-summit-2019.md b/content/container-security-summit-2019.md
deleted file mode 100644
index 3ec8149..0000000
--- a/content/container-security-summit-2019.md
+++ /dev/null
@@ -1,83 +0,0 @@
-+++
-title = "Container Security Summit 2019"
-date = 2019-02-20
-[taxonomies]
-tags = ["conference", "containers"]
-+++
-
-This was the 4th edition of the summit.
-
-- [Program](https://cloudplatformonline.com/2019-NA-Container-Security-Summit-Agenda.html)
-- [slides](https://cloudplatformonline.com/2019-NA-Container-Security-Summit-Agenda.html)
-- [another summary](https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-four-takeaways-from-container-community-summit-2019)
-
-There were a number of talks and panels. Santhosh and Chris P. were there too, and they might have a different perspective.
-
-- There was some conversation about root-less containers
-  - Running root-less containers is not there yet (it's possible to do it, but it's not a great experience).
-  - The challenge is to have the runc daemon not run as root
-  - If you can escape the container it's game over
-  - But it seems to be a goal for this year
-  - Once you start mucking around with /proc you're going to cry
-  - Root-less builds for containers, however, are here, and that is a good thing.
-  - We talked a little bit about reproducible builds.
-  - Debian and some other distros / groups are putting a lot of effort here
-- Someone shared some recommendations when setting up a k8s cluster
-  - Don't let Pods access the node's IAM role in the metadata endpoint
-  - This can be done via `networkPolicy`
-  - Disable auto-mount for SA tokens
-  - Prevent creation of privileged pods
-  - Prevent kubelets from accessing secrets for pods on other nodes
-- `ebpf` is the buzzword of the year
-  - Stop using `iptables` and only use `ebpf`
-- GKE on prem is clearly not for us (we knew it)
-  - We talked with a Google engineer working on the product
-  - You need to run vsphere, which increases the cost
-  - This is likely a temporary solution
-  - We would still have to deal with hardware
-- During one session we talked about isolating workloads
-  - We will want various clusters for various environments (dev / staging / prod)
-  - This will make our life easier for upgrading them
-  - Someone from Amazon (Bob Wise, previously head of SIG scalability) recommended a namespace per service
-  - They act as quota boundaries
-- Google is working on tooling to manage namespaces across clusters
-  - Unclear about timeline
-- Google is also working on tooling to manage clusters
-  - But unclear (to me) if it's for GKE, on prem, or both
-- Talked about CIS benchmarks for Docker and kubernetes
-  - The interesting part here (IMO) was the template they use to make recommendations. This is something we should look at for our RFC process when it comes to operational work.
-  - I'll try to find that somewhere (hopefully we will get the slides)
-- Auditing is a challenge because there are very few recommendations for hosted kubernetes
-  - There's a benchmark for Docker and k8s
-  - A robust CD pipeline is required
-  - That's where organizations should invest
-  - Stop patching, just rebuild and deploy
-  - You want to get it done fast
-- The average life of a container is less than 2 weeks
-- Conversations about managing security issues
-  - They shared the postmortem for the first high profile CVE for kubernetes
-  - Someone from Red Hat talked about the one for runc
-  - There's a desire to uniformize the way these types of issues are handled
-  - The guy from RH thinks the way they managed the runc one was not great (it leaked too early)
-  - There's a list for vendors to communicate and share these issues
-- Talked about the runc issue
-  - Containers are hard
-  - They mean different things to different people
-  - We make a lot of assumptions and this breaks a lot of stuff
-- Kubernetes secrets are not great (but no details why)
-  - Concerning: no one was running kubernetes on prem, just someone with a POC and his comment was "it sucks"
-- Some projects mentioned
-  - in-toto
-  - BuildKit
-  - umoci
-  - Sysdig
-- Some talk about service mesh (mostly istio)
-  - Getting mTLS right is hard, use a service mesh to get it right
-- The API endpoint of the vm can be accessed from containers
-  - Google is looking at ways to make this go away
-  - Too much of a risk (someone showed how to exploit this on aws)
-- There was a panel with a few auditing companies, I did not register anything from it
-  - Container security is hard and very few people understand it
-  - I don't remember what the context was, but someone mentioned this bug as an example of why containers / isolation are hard
-- There's apparently some conversation about introducing a new Tenant object
-  - I have not been able to find this in tickets / mailing lists so far, would need to reach out to Google for this?
````

````diff
diff --git a/content/container-security-summit-2020.md b/content/container-security-summit-2020.md
deleted file mode 100644
index 2c3f122..0000000
--- a/content/container-security-summit-2020.md
+++ /dev/null
@@ -1,59 +0,0 @@
-+++
-title = "Container Security Summit 2020"
-date = 2020-02-12
-[taxonomies]
-tags = ["conference", "containers"]
-+++
-
-This is the second time I've gone to this event, organized by Google in their Seattle office (the one in Fremont).
-
-As for last year, the content was pretty uneven. The first talk by Kelsey was interesting: one of the main concerns that we have is around the supply chain: where are our dependencies coming from? We pull random libraries from all over the place, and no one reads the code or tries to see if there are vulnerabilities. The same is true with the firmware, bios, etc. that we have in the hardware, by the way.
-
-The second talk completely went over my head, it was really not interesting. I'm going to guess that Sky (the company that was presenting) is a big Google Cloud customer and they were asked to do that presentation.
-
-We had a few more small talks, but nothing really great. One of the presentations was by an Australian bank (Up) showing how they get slack notifications when someone logs into a container. I hate this trend of sending everything to slack.
-
-After lunch there were a few more talks, again, nothing really interesting. There's a bunch of people in this community who have a lot of hype, but are not that great presenters or don't really have anything interesting to present.
-
-The "un-conference" part was more interesting. There were two sessions that interested me: supply chain and PSPs. I went to the PSP one, and again, a couple of people sucked all the air out of the room and it was a dialogue, not a group conversation. The goal was to talk about PSP vs. OPA, but really we talked more about the challenges of PSPs and of moving away from them. The current consensus is to say that we need 3 PSPs: default, restrictive, permissive. Then all implementations (PSPs, OPA, etc.) should support them, and they should offer more or less the same security level. Another thing considered is to let the CD pipeline take care of that. EKS / GKE have a challenge with a possible migration: how to move their customers, and to what.
-
-Overall, I think we are doing the right things in terms of security: we have PSPs, we have some controllers to ensure policies, etc. We are also looking at automatically upgrading containers using workflows (having a robust CI/CD pipeline is key here).
-
-<a id="org4ab3e9d"></a>
-
-# Some notes to follow up / read
-
-- twitcher / host network / follow up on that
-- <https://github.com/cruise-automation/k-rail>
-- better error messages for failures
-- it's not a replacement for PSPs?
-- <https://cloud.google.com/binary-authorization>
-- [falco](https://github.com/falcosecurity/falco)
-
-conversation about isolation:
-
-- <https://katacontainers.io/>
-  - could kata be a use case for collocation of storage?
-- <https://github.com/google/gvisor>
-
-talk about beyondprod (brandon baker)
-
-- <https://cloud.google.com/security/beyondprod/>
-- binary authorization for borg
-- security infra design white paper
-- questions:
-  - latency for requests? kerberos is not optimized, alts is
-  - <https://cloud.google.com/security/encryption-in-transit/application-layer-transport-security>
-
-panels:
-
-- small adoption of OPA
-
-kubernetes audit logging:
-
-- <https://kubernetes.io/docs/tasks/debug-application-cluster/audit/>
-- <https://github.com/google/docker-explorer>
-- <https://github.com/google/turbinia>
-- <https://github.com/google/timesketch>
-- plaso (?)
-- <https://github.com/google/grr>
````

````diff
diff --git a/content/containerd-to-firecracker.md b/content/containerd-to-firecracker.md
index df26cba..2a6ba58 100644
--- a/content/containerd-to-firecracker.md
+++ b/content/containerd-to-firecracker.md
@@ -1,8 +1,6 @@
 +++
 title = "containerd to firecracker"
 date = 2021-05-15
-[taxonomies]
-tags = ["containers"]
 +++

 fly.io had an [interesting
@@ -571,12 +569,12 @@ The end result:
     [    0.079206] PTP clock support registered
     [    0.079741] NetLabel: Initializing
     [    0.080111] NetLabel: domain hash size = 128
-    [    0.080529] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
+    [    0.080529] NetLabel: protocols = UNLABELED CHIPSOv4 CALIPSO
     [    0.081113] NetLabel: unlabeled traffic allowed by default
     [    0.082072] clocksource: Switched to clocksource kvm-clock
     [    0.082715] VFS: Disk quotas dquot_6.6.0
     [    0.083123] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
-    [    0.083855] pnp: PnP ACPI: disabled
+    [    0.083855] pnp: OnP ACPI: disabled
     [    0.084510] NET: Registered protocol family 2
     [    0.084718] tcp_listen_portaddr_hash hash table entries: 256 (order: 0, 4096 bytes, linear)
     [    0.085602] TCP established hash table entries: 4096 (order: 3, 32768 bytes, linear)
````
````diff
diff --git a/content/cpu-power-management.md b/content/cpu-power-management.md
deleted file mode 100644
index 922f081..0000000
--- a/content/cpu-power-management.md
+++ /dev/null
@@ -1,121 +0,0 @@
-+++
-title = "CPU power management"
-date = 2023-01-22
-[taxonomies]
-tags = ["hardware"]
-+++
-
-## Maximum power consumption of a processor
-
-Our Intel CPU has a thermal design power (TDP) of 120W. The AMD CPU has a TDP of 200W.
-
-The Intel CPU has 80 cores while the AMD one has 128 cores. For Intel, this gives us 1.5W per core, while for AMD, 1.56W.
-
-The TDP is the average value the processor can sustain forever, and this is the power level the cooling solution needs to be designed for, for reliability. The TDP is measured under a worst case load, with all cores running at 1.8GHz (the base frequency).
-
-## C-State vs. P-State
-
-We have two ways to control the power consumption:
-
-- disabling a subsystem
-- decreasing the voltage
-
-This is done by using:
-
-- _C-State_ for optimization of power consumption
-- _P-State_ for optimization of the voltage and CPU frequency
-
-_C-State_ means that one or more subsystems are executing nothing; one or more subsystems of the CPU are at idle, powered down.
-
-_P-State_ means the subsystem is actually running, but it does not require full performance, so the voltage and/or frequency it operates at is decreased.
-
-The states are numbered starting from 0. The higher the number, the more power is saved. `C0` means no power saving. `P0` means maximum performance (thus maximum frequency, voltage and power used).
-
-### C-state
-
-A timeline of power saving using C-states is as follows:
-
-1. normal operation is at C0
-2. the clock of an idle core is stopped (C1)
-3. the local caches (L1/L2) of the core are flushed and the core is powered down (C3)
-4. when all the cores are powered down, the shared cache of the package (L3/LLC) is flushed and the whole package/CPU can be powered down
-
-| state | description |
-| ----- | ----------- |
-| C0    | operating state |
-| C1    | a state where the processor is not executing instructions, but can return to an executing state essentially instantaneously |
-| C2    | a state where the processor maintains all software-visible state, but may take longer to wake up |
-| C3    | a state where the processor does not need to keep its cache coherent, but maintains other state |
-
-Running `cpuid` we can find all the supported C-states for a processor (Intel(R) Xeon(R) Gold 6122 CPU @ 1.80GHz):
-
-```
-   MONITOR/MWAIT (5):
-      smallest monitor-line size (bytes)       = 0x40 (64)
-      largest monitor-line size (bytes)        = 0x40 (64)
-      enum of Monitor-MWAIT exts supported     = true
-      supports intrs as break-event for MWAIT  = true
-      number of C0 sub C-states using MWAIT    = 0x0 (0)
-      number of C1 sub C-states using MWAIT    = 0x2 (2)
-      number of C2 sub C-states using MWAIT    = 0x0 (0)
-      number of C3 sub C-states using MWAIT    = 0x2 (2)
-      number of C4 sub C-states using MWAIT    = 0x0 (0)
-      number of C5 sub C-states using MWAIT    = 0x0 (0)
-      number of C6 sub C-states using MWAIT    = 0x0 (0)
-      number of C7 sub C-states using MWAIT    = 0x0 (0)
-```
-
-If I interpret this correctly:
-
-- there's one `C0`
-- there are two sub C-states for `C1`
-- there are two sub C-states for `C3`
-
-### P-state
-
-Being in a P-state means the CPU core is also in `C0`, since it has to be powered to execute some code.
-
-P-states allow changing the voltage and frequency of the CPU core to decrease the power consumption.
-
-A P-state refers to different frequency-voltage pairs. The highest operating point is the maximum state, which is `P0`.
-
-| state | description |
-| ----- | ----------- |
-| P0    | maximum power and frequency |
-| P1    | less than P0, voltage and frequency scaled |
-| P2    | less than P1, voltage and frequency scaled |
````
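The C-states and P-states described above can be inspected on a running Linux machine through the standard cpuidle/cpufreq sysfs interface. A minimal sketch (state names and counts depend on the idle driver):

```bash
# List the C-states the cpuidle driver exposes for CPU0, with the time
# (in microseconds) spent in each one.
for d in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
  printf '%-8s %s us\n' "$(cat "$d/name")" "$(cat "$d/time")"
done

# Current operating frequency (kHz), reflecting the P-state picked by
# the cpufreq governor.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
```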
````diff
-## ACPI power state
-
-The ACPI Specification defines the following four global "Gx" states and six sleep "Sx" states:
-
-| Gx   | name           | Sx   | description |
-| ---- | -------------- | ---- | ----------- |
-| `G0` | working        | `S0` | The computer is running and executing instructions |
-| `G1` | sleeping       | `S1` | Processor caches are flushed and the CPU stops executing instructions |
-| `G1` | sleeping       | `S2` | CPU powered off, dirty caches flushed to RAM |
-| `G1` | sleeping       | `S3` | Suspend to RAM |
-| `G1` | sleeping       | `S4` | Suspend to disk, all content of the main memory is flushed to non-volatile memory |
-| `G2` | soft off       | `S5` | The PSU still supplies power, a full reboot is required |
-| `G3` | mechanical off | `S6` | The system is safe for disassembly |
-
-When we are in any C-state, we are in `G0`.
-
-## Speed Select Technology
-
-[Speed Select Technology](https://en.wikichip.org/wiki/intel/speed_select_technology) is a set of power management controls that allows a system administrator to customize per-core performance. By configuring the performance of specific cores and affinitizing workloads to those cores, higher software performance can be achieved. SST supports multiple types of customization:
-
-- Frequency Prioritization (SST-CP) - allows specific cores to clock higher by reducing the frequency of cores running lower-priority software.
-- Speed Select Base Freq (SST-BF) - allows specific cores to run at a higher base frequency (P1) by reducing the base frequencies (P1) of other cores.
-
-## Turbo Boost
-
-TDP is the maximum power consumption the CPU can sustain. When the power consumption is low (e.g. many cores are in P1+ states), the CPU frequency can be increased beyond the base frequency to take advantage of the headroom, since this condition does not increase the power consumption beyond TDP.
-
-Modern CPUs are heavily reliant on "turbo" (Intel) or "boost" (AMD) ([TBT](https://en.wikichip.org/wiki/intel/turbo_boost_technology) and [TBTM](https://en.wikichip.org/wiki/intel/turbo_boost_max_technology)).
-
-In our case, the Intel 6122 is rated at 1.8GHz, a.k.a. "stamp speed". If we want to run the CPU at a consistent frequency, we'd have to choose 1.8GHz or below, and we'd lose significant performance if we were to disable turbo/boost.
-
-### Turbo boost max
-
-During the manufacturing process, Intel is able to test each die and determine which cores possess the best overclocking capabilities. That information is then stored in the CPU in order from best to worst.
````
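Whether turbo is active can also be checked from userspace; a minimal sketch, assuming the `intel_pstate` driver is in use (the knob does not exist with other drivers):

```bash
# 0 means turbo is enabled, 1 means it is disabled (intel_pstate only).
cat /sys/devices/system/cpu/intel_pstate/no_turbo

# Pin the cores at or below base frequency by disabling turbo.
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```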
````diff
diff --git a/content/fogcutter.md b/content/fogcutter.md
deleted file mode 100644
index 9ae6b98..0000000
--- a/content/fogcutter.md
+++ /dev/null
@@ -1,63 +0,0 @@
-+++
-title = "SOMA Fog Cutter"
-date = 2024-09-22
-template = "bike.html"
-[taxonomies]
-tags = ["bike"]
-+++
-
-A [SOMA](https://www.somafab.com/archives/product/fog-cutter-frame-set) [Fog Cutter](https://www.somafab.com/archives/product/fog-cutter-frame-set) road bike, built by [Blue Heron Bikes](https://www.blueheronbikesberkeley.com/bike-accessories) in Berkeley. The size of the frame is 58cm and the color is blue. It comes with a carbon fork.
-
-<div id="carousel-container"></div>
-
-<script>
-  // Specify the images for this carousel
-  const pageImages = [
-    '/images/fogcutter/IMG_0988.jpeg',
-    '/images/fogcutter/IMG_0989.jpeg',
-    '/images/fogcutter/IMG_0990.jpeg',
-    '/images/fogcutter/IMG_0991.jpeg',
-    '/images/fogcutter/IMG_0992.jpeg',
-    '/images/fogcutter/IMG_0993.jpeg',
-    '/images/fogcutter/IMG_0994.jpeg',
-    '/images/fogcutter/IMG_0995.jpeg',
-    '/images/fogcutter/IMG_0996.jpeg',
-    '/images/fogcutter/IMG_0997.jpeg',
-    '/images/fogcutter/IMG_0998.jpeg',
-    '/images/fogcutter/IMG_0999.jpeg',
-    '/images/fogcutter/IMG_1001.jpeg',
-    '/images/fogcutter/IMG_1002.jpeg',
-  ];
-
-  // Check if the initializeCarousel function is available
-  if (typeof initializeCarousel === 'function') {
-    // Initialize the carousel
-    initializeCarousel('carousel-container', pageImages);
-  } else {
-    console.error('Carousel initialization function not found. Make sure carousel.js is properly loaded.');
-  }
-</script>
-
-## Part list
-
-| part | model |
-| ---- | ----- |
-| Frame | [SOMA Fog Cutter](https://www.somafab.com/archives/product/fog-cutter-frame-set) 58cm in blue |
-| Fork | [Soma Fork Fog Cutter Carbon Cerulean Blue (Thru-Axle)](https://www.somafabshop.com/shop/231007-soma-fork-fog-cutter-carbon-cerulean-blue-thru-axle-5617?search=cerulean&product=product.template%285617%2C%29#attr=) |
-| Headset | White Industries |
-| Front and rear wheel | [DT Swiss XR 331 29 20 mm DB VI](https://www.dtswiss.com/en/support/product-support?matnr=RTXR3329N28S011223) |
-| Tire | Teravail Rampart 38 |
-| Front hub | [SP dynamo PL7](https://www.sp-dynamo.com/series7-pl7) |
-| Rear hub | [Shimano Tiagra rs740](https://bike.shimano.com/en-US/product/component/tiagra-4700/FH-RS470.html) |
-| Rear derailleur | [Shimano Ultegra RX 11 speed](https://bike.shimano.com/en-US/product/component/ultegra-rx/RD-RX800-GS.html) |
-| Front derailleur | [Shimano Metrea 2x11 speed](https://bike.shimano.com/en-US/product/component/metrea-u5000/FD-U5000-F.html) |
-| Handlebar | [Zipp Service Course 70 Ergo Drop Handlebar 42cm](https://www.sram.com/en/zipp/models/hb-dbsc-7e-b2) |
-| Brifter | [Shimano Dura Ace 9120](https://bike.shimano.com/en-US/product/component/duraace-r9100/ST-R9120-R.html) |
-| Saddle | [Brooks C15 black](https://www.brooksengland.com/en_us/c15.html) |
-| Seat post | [SIM Works Beatnik post (black)](https://www.sim.works/products/beatnik-post-1) |
-| Front light | [Busch & Müller Lumotec IQ-X Headlamp](https://www.bumm.de/en/products/dynamo-scheinwerfer/produkt/164rtsndi-01-schwarz-164rtsndi-silber%20.html) |
-| Brake calipers | [Shimano rs785](https://bike.shimano.com/en-EU/product/component/ultegra-6870-di2/BR-RS785.html) |
-| Crank | [White Industries Square Taper road cranks](https://www.whiteind.com/product/square-taper-road-cranks/) |
-| Chain ring | [White Industries 52/32](https://www.whiteind.com/product/vbc-chainring-sets/) |
-| Pedal | Shimano PD-R550 SPD-SL (black) - can change for SPD if preferred |
-| Bar tape | [Lizard Skins (brown)](https://www.lizardskins.com/cycling) |
````

````diff
diff --git a/content/git-link-and-sourcegraph.md b/content/git-link-and-sourcegraph.md
index c86b465..5a99535 100644
--- a/content/git-link-and-sourcegraph.md
+++ b/content/git-link-and-sourcegraph.md
@@ -1,8 +1,6 @@
 +++
 title = "emacs' git-link and sourcegraph"
 date = 2021-08-24
-[taxonomies]
-tags = ["emacs"]
 +++

 I use [sourcegraph](https://sourcegraph.com/) for searching code, and I sometimes need to share a link to the source code I'm looking at in a buffer. For this, the package [`git-link`](https://github.com/sshaw/git-link) is great.
````

````diff
diff --git a/content/google-doc-failure.md b/content/google-doc-failure.md
index b4a65b9..ceddb65 100644
--- a/content/google-doc-failure.md
+++ b/content/google-doc-failure.md
@@ -1,8 +1,6 @@
 +++
 title = "Google Doc Failures"
 date = 2021-04-11
-[taxonomies]
-tags = ["practices"]
 +++

 In most use cases, Google Doc is an effective tool to create "write once, read never" documents.
@@ -27,7 +25,7 @@ The second reason, and it's the most important one, I know that if I need to rea

 In 'the old days', you'd start a new document in Word or LibreOffice, and as you hit "save" for the first time, you've two decisions to make: how am I going to name that file, and where am I going to save it on disk.

-With GDoc these questions don't have to be answered, you don't have to name the file, and it does not matter where it lives. I've likely hundreds of docs named 'untitled' in my "drive". I also don't have to think about where they will live, because they are saved automatically for me. I'm sure there's hundreds of studies that show that these two simple steps are actually complex for many users and creates useless friction (in which folder do I store it; should I organize the docuemnts by team, years, projects; do I name it with the date and the current project; etc.).
+With GDoc these questions don't have to be answered, you don't have to name the file, and it does not matter where it lives. I've likely hundreds of docs named 'untitled' in my "drive". I also don't have to think about where they will live, because they are saved automatically for me. I'm sure there's hundreds of studies that show that these two simple steps are actually complex for many users and creates useless friction (in which folder do I store it; should I organize the documents by team, years, projects; do I name it with the date and the current project; etc.).

 GDoc being a Google product, it seems pretty obvious that they would come up with a better solution: let's not organize in a strict hierarchy these files, and let's instead search for them.
````

````diff
diff --git a/content/leaving-twitter.md b/content/leaving-twitter.md
index f7d98f5..38267cd 100644
--- a/content/leaving-twitter.md
+++ b/content/leaving-twitter.md
@@ -1,8 +1,6 @@
 +++
 title = "Leaving Twitter"
 date = 2022-01-15
-[taxonomies]
-tags = ["work"]
 +++

 January 7th 2022 was my last day at Twitter, after more than 7 years at the company.
````

````diff
diff --git a/content/making-sense-intel-amd-cpus.md b/content/making-sense-intel-amd-cpus.md
deleted file mode 100644
index 9d1ce84..0000000
--- a/content/making-sense-intel-amd-cpus.md
+++ /dev/null
@@ -1,236 +0,0 @@
-+++
-title = "Making sense of Intel and AMD CPUs naming"
-date = 2021-12-29
-[taxonomies]
-tags = ["hardware"]
-+++
-
-## Intel
-
-### Core
-
-The line up for the Core family is i3, i5, i7 and i9. As of January 2023, the current generation is [Raptor Lake](https://en.wikipedia.org/wiki/Raptor_Lake) (13th generation).
-
-The brand modifiers are:
-
-- **i3**: laptops/low-end desktop
-- **i5**: mainstream users
-- **i7**: high-end users
-- **i9**: enthusiast users
-
-How to read a SKU? Let's use the [i7-12700K](https://ark.intel.com/content/www/us/en/ark/products/134594/intel-core-i712700k-processor-25m-cache-up-to-5-00-ghz.html) processor:
-
-- **i7**: high end users
-- **12**: 12th generation
-- **700**: SKU digits, usually assigned in the order the processors are developed
-- **K**: unlocked
-
-List of suffixes:
-
-| suffix | meaning |
-| ------ | ------- |
-| G..    | integrated graphics |
-| E      | embedded |
-| F      | requires discrete graphic card |
-| H      | high performance for mobile |
-| HK     | high performance for mobile / unlocked |
-| K      | unlocked |
-| S      | special edition |
-| T      | power optimized lifestyle |
-| U      | mobile power efficient |
-| Y      | mobile low power |
-| X/XE   | unlocked, high end |
-
-> **Unlocked,** what does that mean? A processor with the **K** suffix
-> is made with an unlocked clock multiplier. When used with some
-> specific chipsets, it's possible to overclock the processor.
````
````diff
-#### Raptor Lake (13th generation)
-
-Raptor Lake is a hybrid architecture, featuring both P-cores (performance cores) and E-cores (efficient cores), similar to Alder Lake. P-cores are based on the [Raptor Cove](https://en.wikipedia.org/wiki/Golden_Cove#Raptor_Cove) architecture, while the E-cores are based on the [Gracemont](https://en.wikipedia.org/wiki/Gracemont_(microarchitecture)) architecture (same as for Alder Lake).
-
-Available processors:
-
-| model      | p-cores | e-cores | GHz (base) | GHz (boosted) | TDP      |
-| ---------- | ------- | ------- | ---------- | ------------- | -------- |
-| i9-13900KS | 8 (16)  | 16      | 3.2/2.4    | 6/4.3         | 150/253W |
-| i9-13900K  | 8 (16)  | 16      | 3.0/2.0    | 5.8/4.3       | 125/253W |
-| i9-13900KF | 8 (16)  | 16      | 3.0/2.0    | 5.8/4.3       | 125/253W |
-| i9-13900   | 8 (16)  | 16      | 2.0/1.5    | 5.2/4.2       | 65/219W  |
-| i9-13900F  | 8 (16)  | 16      | 2.0/1.5    | 5.2/4.2       | 65/219W  |
-| i9-13900T  | 8 (16)  | 16      | 1.1/0.8    | 5.1/3.9       | 35/219W  |
-| i7-13700K  | 8 (16)  | 8       | 3.4/2.5    | 5.4/4.2       | 125/253W |
-| i7-13700KF | 8 (16)  | 8       | 3.4/2.5    | 5.4/4.2       | 125/253W |
-| i7-13700   | 8 (16)  | 8       | 2.1/1.5    | 5.1/4.1       | 65/219W  |
-| i7-13700F  | 8 (16)  | 8       | 2.1/1.5    | 5.1/4.1       | 65/219W  |
-| i7-13700T  | 8 (16)  | 8       | 1.4/1.0    | 4.8/3.6       | 35/106W  |
-| i5-13600K  | 6 (12)  | 8       | 3.5/2.6    | 5.1/3.9       | 125/181W |
-| i5-13600KF | 6 (12)  | 8       | 3.5/2.6    | 5.1/3.9       | 125/181W |
-
-For the Raptor Lake generation, as for the Alder Lake generation, the supported socket is the [LGA 1700](https://en.wikipedia.org/wiki/LGA_1700).
-
-List of Raptor Lake chipsets:
-
-| feature                     | b760[^7] | h770[^8] | z790[^9] |
-| --------------------------- | -------- | -------- | -------- |
-| P and E cores over clocking | no       | no       | yes      |
-| memory over clocking        | yes      | yes      | yes      |
-| DMI 4 lanes                 | 4        | 8        | 8        |
-| chipset PCIe 5.0 lanes      |          |          |          |
-| chipset PCIe 4.0 lanes      |          |          |          |
-| chipset PCIe 3.0 lanes      |          |          |          |
-| SATA 3.0 ports              | up to 4  | up to 8  | up to 8  |
-
-#### Alder Lake (12th generation)
-
-Alder Lake is a hybrid architecture, featuring both P-cores (performance cores) and E-cores (efficient cores). P-cores are based on the [Golden Cove](https://en.wikipedia.org/wiki/Golden_Cove) architecture, while the E-cores are based on the [Gracemont](https://en.wikipedia.org/wiki/Gracemont_(microarchitecture)) architecture.
-
-This is a [good article](https://www.anandtech.com/show/16881/a-deep-dive-into-intels-alder-lake-microarchitectures/2) to read about this model. Inside the processor there's a microcontroller that monitors what each thread is doing. This can be used by the OS scheduler to hint on which core a thread should be scheduled (between performance or efficiency).
-
-As of December 2021 this is not yet properly supported by the Linux kernel.
````
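More recent kernels at least expose the split: the two core types of a hybrid part are enumerated separately in sysfs. A small sketch, assuming an Alder Lake or newer system (these nodes only exist on hybrid CPUs):

```bash
# CPU numbers backed by P-cores on a hybrid Intel part.
cat /sys/devices/cpu_core/cpus

# CPU numbers backed by E-cores.
cat /sys/devices/cpu_atom/cpus
```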
````diff
-
-Available processors:
-
-| model      | p-cores | e-cores | GHz (base) | GHz (boosted) | TDP  |
-| ---------- | ------- | ------- | ---------- | ------------- | ---- |
-| i9-12900K  | 8 (16)  | 8       | 3.2/2.4    | 5.1/3.9       | 241W |
-| i9-12900KF | 8 (16)  | 8       | 3.2/2.4    | 5.1/3.9       | 241W |
-| i7-12700K  | 8 (16)  | 4       | 3.6/2.7    | 4.9/3.8       | 190W |
-| i7-12700KF | 8 (16)  | 4       | 3.6/2.7    | 4.9/3.8       | 190W |
-| i5-12600K  | 6 (12)  | 4       | 3.7/2.8    | 4.9/3.6       | 150W |
-| i5-12600KF | 6 (12)  | 4       | 3.7/2.8    | 4.9/3.6       | 150W |
-
-- support DDR4 and DDR5 (up to DDR5-4800)
-- support PCIe 4.0 and 5.0 (16 PCIe 5.0 and 4 PCIe 4.0)
-
-For the Alder Lake generation, the supported socket is the [LGA 1700](https://en.wikipedia.org/wiki/LGA_1700).
-
-For now, the only supported chipsets for Alder Lake are:
-
-| feature                     | z690[^1] | h670[^2] | b660[^3] | h610[^4] | q670[^6] | w680[^5] |
-| --------------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
-| P and E cores over clocking | yes      | no       | no       | no       | no       | yes      |
-| memory over clocking        | yes      | yes      | yes      | no       | -        | yes      |
-| DMI 4 lanes                 | 8        | 8        | 4        | 4        | 8        | 8        |
-| chipset PCIe 4.0 lanes      | up to 12 | up to 12 | up to 6  | none     |          |          |
-| chipset PCIe 3.0 lanes      | up to 16 | up to 12 | up to 8  | 8        |          |          |
-| SATA 3.0 ports              | up to 8  | up to 8  | 4        | 4        | up to 8  | up to 8  |
-
-### Xeon
-
-Xeon is the brand of Intel processors designed for non-consumer servers and workstations. The most recent generations are:
-
-| name            | availability |
-| --------------- | ------------ |
-| Skylake         | 2015         |
-| Cascade Lake    | 2019         |
-| Cooper Lake     | 2020         |
-| Sapphire Rapids | 2023         |
-
-The following brand identifiers are used:
-
-- platinum
-- gold
-- silver
-- bronze
-
-## AMD
-
-### Ryzen
-
-There are multiple generations for this brand of processors. They are based on the [zen micro architecture](https://en.wikipedia.org/wiki/Zen_(microarchitecture)).
-
-The current (as of January 2023) generation is Ryzen 7000.
-
-The brand modifiers are:
-
-- ryzen 3: entry level
-- ryzen 5: mainstream
-- ryzen 7: high end performance
-- ryzen 9: enthusiast
-
-List of suffixes:
-
-| suffix | meaning |
-| ------ | ------- |
-| X      | high performance |
-| G      | integrated graphics |
-| T      | power optimized lifecycle |
-| S      | low power desktop with integrated graphics |
-| H      | high performance mobile |
-| U      | standard mobile |
-| M      | low power mobile |
-| 3D     | features [3D V-cache technology](https://www.amd.com/en/technologies/3d-v-cache) |
-
-### EPYC
-
-EPYC is the AMD brand of processors for the server market, based on the zen architecture. They use the [SP3](https://en.wikipedia.org/wiki/Socket_SP3) socket. The EPYC processor is chipset free.
-
-### Threadripper
-
-The Threadripper is for high performance desktops. It uses the [TR4](https://en.wikipedia.org/wiki/Socket_TR4) socket. At the moment there's only one chipset that supports this processor, the [X399](https://en.wikipedia.org/wiki/List_of_AMD_chipsets#TR4_chipsets).
-
-The Threadripper based on the zen3 architecture is not yet released, but it's expected to hit the market in the first half of Q1 2022.
-
-### Sockets/Chipsets
-
-The majority of these processors use the [AM4 socket](https://en.wikipedia.org/wiki/Socket_AM4). The Threadripper line uses different sockets.
-
-There are multiple [chipsets](https://en.wikipedia.org/wiki/Socket_AM4#Chipsets) for the AM4 socket. The more advanced ones are the B550 and the X570.
-
-The Threadripper processors use the TR4, sTRX4 and sWRX8 sockets.
-
-### Zen 3
-
-Zen 3 was released in November 2020.
-
-| model         | cores   | GHz (base) | GHz (boosted) | PCIe lanes | TDP  |
-| ------------- | ------- | ---------- | ------------- | ---------- | ---- |
-| ryzen 5 5600x | 6 (12)  | 3.7        | 4.6           | 24         | 65W  |
-| ryzen 7 5800  | 8 (16)  | 3.4        | 4.6           | 24         | 65W  |
-| ryzen 7 5800x | 8 (16)  | 3.8        | 4.7           | 24         | 105W |
-| ryzen 9 5900  | 12 (24) | 3.0        | 4.7           | 24         | 65W  |
-| ryzen 9 5900x | 12 (24) | 3.7        | 4.8           | 24         | 105W |
-| ryzen 9 5950x | 16 (32) | 3.4        | 4.9           | 24         | 105W |
-
-- support PCIe 3.0 and PCIe 4.0 (except for the G series)
-- only support DDR4 (up to DDR4-3200)
-
-### Zen 4
-
-Zen 4 was released in September 2022.
-
-- only supports DDR5
-- all desktop processors feature 28 (24 + 4) PCIe 5.0 lanes
-- all desktop processors feature 2 x 4 lane PCIe interfaces (mostly for M.2 storage devices)
-
-| model           | cores   | GHz (base) | GHz (boosted) | TDP  |
-| --------------- | ------- | ---------- | ------------- | ---- |
-| ryzen 5 7600x   | 6 (12)  | 4.7        | 5.3           | 105W |
-| ryzen 5 7600    | 6 (12)  | 3.8        | 5.1           | 65W  |
-| ryzen 7 7800X3D | 8 (16)  |            | 5.0           | 120W |
-| ryzen 7 7700X   | 8 (16)  | 4.5        | 5.4           | 105W |
-| ryzen 7 7700    | 8 (16)  | 3.8        | 5.3           | 65W  |
-| ryzen 9 7900    | 12 (24) | 3.7        | 5.4           | 65W  |
-| ryzen 9 7900X   | 12 (24) | 4.7        | 5.6           | 170W |
-| ryzen 9 7900X3D | 12 (24) | 4.4        | 5.6           | 120W |
-| ryzen 9 7950X   | 16 (32) | 4.5        | 5.7           | 170W |
-| ryzen 9 7950X3D | 16 (32) | 4.2        | 5.7           | 120W |
-
-[^1]: https://ark.intel.com/content/www/us/en/ark/products/218833/intel-z690-chipset.html
-[^2]: https://www.intel.com/content/www/us/en/products/sku/218831/intel-h670-chipset/specifications.html
-[^3]: https://ark.intel.com/content/www/us/en/ark/products/218832/intel-b660-chipset.html
-[^4]: https://www.intel.com/content/www/us/en/products/sku/218829/intel-h610-chipset/specifications.html
-[^5]: https://ark.intel.com/content/www/us/en/ark/products/218834/intel-w680-chipset.html
-[^6]: https://ark.intel.com/content/www/us/en/ark/products/218827/intel-q670-chipset.html
-[^7]: https://www.intel.com/content/www/us/en/products/sku/229719/intel-b760-chipset/specifications.html
-[^8]: https://www.intel.com/content/www/us/en/products/sku/229720/intel-h770-chipset.html
-[^9]: https://www.intel.com/content/www/us/en/products/sku/229721/intel-z790-chipset/specifications.html
````
````diff
diff --git a/content/nix-raid-systemd-boot.md b/content/nix-raid-systemd-boot.md
index de68695..fc3a363 100644
--- a/content/nix-raid-systemd-boot.md
+++ b/content/nix-raid-systemd-boot.md
@@ -1,8 +1,6 @@
 +++
 title = "Workaround md raid boot issue in NixOS 22.11"
 date = 2023-01-10
-[taxonomies]
-tags = ["nix"]
 +++

 For about a year now I've been running [NixOS](https://nixos.org/ "NixOS") on my personal machines. Yesterday I decided to go ahead and upgrade my NAS from NixOS 22.05 to [22.11](https://nixos.org/blog/announcements.html#nixos-22.11). On that machine, all the disks are encrypted, and there are two RAID0 devices. To unlock the drives, I log into the [SSH daemon running in `initrd`](https://nixos.wiki/wiki/Remote_LUKS_Unlocking), where I can type my passphrase. This time however, instead of a prompt to unlock the disk, I see the following message:
````

````diff
diff --git a/content/no-ssh-to-prod.md b/content/no-ssh-to-prod.md
index 9c2d20a..40de34f 100644
--- a/content/no-ssh-to-prod.md
+++ b/content/no-ssh-to-prod.md
@@ -1,8 +1,6 @@
 +++
 title = "No SSH to production"
 date = 2022-11-28
-[taxonomies]
-tags = ["practices"]
 +++

 It's not uncommon to hear talk about preventing engineers from SSHing to production machines. While I think it's a noble goal, most organizations are not ready for it in the short or even medium term.
````

````diff
diff --git a/content/resume.md b/content/resume.md
deleted file mode 100644
index cb095da..0000000
--- a/content/resume.md
+++ /dev/null
@@ -1,169 +0,0 @@
-+++
-title = "Resume"
-template = "resume.html"
-date = 2024-08-10
-[taxonomies]
-tags = ["work"]
-+++
-
-# Franck Cuny
-
-Technical Director, Site Reliability Engineering
-
-Email: franck@fcuny.net | Phone: 415-617-5129
-
-Results-driven Site Reliability Engineering leader with extensive experience in architecting, scaling, and optimizing large-scale distributed systems. Proven track record of driving reliability improvements, fostering cross-functional collaboration, and mentoring engineering talent. Dedicated to building resilient infrastructures and cultivating a strong reliability culture.
-
-## Core Competencies:
-
-- Technical leadership and mentorship
-- Cross-team collaboration and communication
-- Large-scale distributed systems architecture
-- Reliability engineering and disaster recovery
-- Infrastructure optimization and cost reduction
-- Production readiness and failure testing methodologies
-
-## Career Focus:
-
-Seeking opportunities to lead transformative reliability initiatives, mentor the next generation of SREs, and drive architectural decisions that significantly enhance system resilience and performance at scale.
-
-# Experience
-
-## Roblox, San Mateo
-
-| Site Reliability Engineer | Technical Director (IC7) | SRE Group | August 2024 - to date  |
-| Site Reliability Engineer | Principal II (IC6)       | SRE Group | Feb 2022 - August 2024 |
-
-As a Team Lead for the Site Reliability group, I define road-maps and milestones, and identify areas where SREs can partner with different teams to improve the overall reliability of our infrastructure and services. Key projects and responsibilities include:
-
-- **Cell Architecture Implementation**: Led the SRE effort to transition from monolithic Compute clusters to a Cell architecture, significantly enhancing Roblox's infrastructure resilience and efficiency. Developed migration plans, identified necessary automation, and drove production readiness for this critical reliability improvement.
-- **Edge Infrastructure Migration**: Spearheaded the migration from HAProxy to Envoy at the edge, aimed at reducing failure domains, improving performance by streamlining the proxy chain, and enabling user traffic steering to specific cells from the edge.
-- **Active/Passive Reliability Lead**: Orchestrated the failover strategy across multiple teams, developing detailed action plans and validation procedures. Conducted comprehensive tests to ensure plan effectiveness. This work reduced the time for a fail-over from days to hours.
-- **Reliability Culture Champion**: Mentored engineers of various levels (both SREs and SWEs), established a model for production readiness, and popularized the practice of running failure exercises for new large infrastructure projects.
-- **Technical Leadership**: Acted as tech lead on numerous projects, demonstrating strong cross-team collaboration skills. Provided technical guidance and mentorship to the SRE team, fostering a culture of reliability and continuous improvement.
-
-Key strengths include driving complex infrastructure projects, mentoring, setting reliability standards, and facilitating effective cross-team collaboration.
-
-## Twitter, San Francisco
-
-| Site Reliability Engineer | Senior Staff | Compute SRE | Jan 2018 - Jan 2022 |
-| Site Reliability Engineer | Staff        | Storage SRE | Aug 2014 - Jan 2018 |
-
-### Key Achievements and Responsibilities:
-
-- **Large-Scale Infrastructure Management**: Led SRE efforts for one of the world's largest compute clusters (Mesos), spanning hundreds of thousands of nodes across multiple data centers. Defined KPIs and improved automation for managing a massive fleet of bare metal machines.
-- **Kubernetes Adoption**: Spearheaded the initiative to adopt Kubernetes for on-premise infrastructure, driving architectural decisions and implementation strategies.
-- **Cost Optimization**: Designed and implemented strategies that significantly improved hardware utilization, resulting in tens of millions of dollars in savings on hardware costs.
-- **Tech Leadership**: Served as Tech Lead for a team of 6 SREs supporting Compute infrastructure. Established critical team processes including on-call rotations and postmortem procedures.
-- **Cloud and On-Premise Expertise**: Led multiple efforts related to Kubernetes deployment and management, both in cloud environments and on-premise infrastructure.
-- **Storage Systems Migration**: Successfully migrated all pub-sub systems from bare-metal deployment to Aurora/Mesos, pioneering the adoption of the Compute orchestration platform among storage teams. This transition reduced operational overhead, decreased deployment times, and enhanced overall system reliability.
-- **Network Infrastructure Improvement**: Advocated for and implemented the adoption of 10Gb+ networking in data centers, enabling significant scaling improvements for storage systems.
-- **Cross-Functional Leadership**: Served as the SRE Tech Lead for the real time storage team, driving improvements in performance, operations, and automation across storage systems.
-
-I consistently demonstrated the ability to lead complex technical initiatives, deliver impactful projects on time, optimize large-scale systems, and drive cross-functional collaboration to achieve significant improvements in infrastructure reliability, efficiency, and cost-effectiveness.
-
-## Say Media, San Francisco
-
-| Software Engineer | Senior SWE | Infrastructure | Aug 2011 - Aug 2014 |
-
-During my time at Say Media, I worked on two different teams. I started as a software engineer on the platform team building APIs, then I transitioned to the operations team to develop tooling to increase the effectiveness of the engineering organization.
-
-## Linkfluence, Paris
-
-| Software Engineer | Senior SWE | Infrastructure | July 2007 - July 2011 |
-
-I was one of the early engineers joining Linkfluence in 2007. I led the development of the company's crawler (web, feeds). I was responsible for defining the early architecture of the company, and designed the internal platforms (Service Oriented Architecture). I contributed to multiple open source projects on behalf of the company and represented the company at numerous open source conferences in Europe.
````
````diff
diff --git a/content/stuff-about-pcie.md b/content/stuff-about-pcie.md
deleted file mode 100644
index 311e55f..0000000
--- a/content/stuff-about-pcie.md
+++ /dev/null
@@ -1,266 +0,0 @@
-+++
-title = "Stuff about PCIe"
-date = 2022-01-03
-[taxonomies]
-tags = ["hardware"]
-+++
-
-## Speed
-
-The most common versions are 3 and 4, while 5 is starting to be available with newer Intel processors.
-
-| ver | encoding  | transfer rate | x1         | x2          | x4         | x8         | x16         |
-| --- | --------- | ------------- | ---------- | ----------- | ---------- | ---------- | ----------- |
-| 1   | 8b/10b    | 2.5GT/s       | 250MB/s    | 500MB/s     | 1GB/s      | 2GB/s      | 4GB/s       |
-| 2   | 8b/10b    | 5.0GT/s       | 500MB/s    | 1GB/s       | 2GB/s      | 4GB/s      | 8GB/s       |
-| 3   | 128b/130b | 8.0GT/s       | 984.6 MB/s | 1.969 GB/s  | 3.94 GB/s  | 7.88 GB/s  | 15.75 GB/s  |
-| 4   | 128b/130b | 16.0GT/s      | 1969 MB/s  | 3.938 GB/s  | 7.88 GB/s  | 15.75 GB/s | 31.51 GB/s  |
-| 5   | 128b/130b | 32.0GT/s      | 3938 MB/s  | 7.877 GB/s  | 15.75 GB/s | 31.51 GB/s | 63.02 GB/s  |
-| 6   | 128b/130b | 64.0GT/s      | 7877 MB/s  | 15.754 GB/s | 31.51 GB/s | 63.02 GB/s | 126.03 GB/s |
-
-This is a [useful](https://community.mellanox.com/s/article/understanding-pcie-configuration-for-maximum-performance) link to understand the formula:
-
-    Maximum PCIe Bandwidth = SPEED * WIDTH * (1 - ENCODING) - 1Gb/s
-
-We remove 1Gb/s for protocol overhead and error corrections. The main difference between the generations, besides the supported speed, is the encoding overhead of each packet. For generations 1 and 2, each packet sent on the PCIe link has 20% encoding overhead. This was improved in generation 3, where the overhead was reduced to 1.5% (2/130) - see [8b/10b encoding](https://en.wikipedia.org/wiki/8b/10b_encoding) and [128b/130b encoding](https://en.wikipedia.org/wiki/64b/66b_encoding).
-
-If we apply the formula, for a PCIe version 3 device we can expect 3.7GB/s of data transfer rate:
-
-    8GT/s * 4 lanes * (1 - 2/130) - 1G = 32G * 0.985 - 1G = ~30Gb/s -> 3750MB/s
````
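The same arithmetic, scripted so other generations and widths are easy to check (the encoding fraction comes from the table above; this is a sketch of the formula, not a measurement):

```bash
# Max PCIe bandwidth in Gb/s: SPEED (GT/s) * WIDTH * (1 - ENCODING) - 1.
awk 'BEGIN {
  speed = 8; width = 4; encoding = 2 / 130   # PCIe gen3, x4 link
  gbps = speed * width * (1 - encoding) - 1
  printf "%.1f Gb/s (~%.0f MB/s)\n", gbps, gbps * 1000 / 8
}'
```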
````diff
-## Topology
-
-An easy way to see the PCIe topology is with `lspci`:
-
-    $ lspci -tv
-    -[0000:00]-+-00.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Root Complex
-               +-01.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
-               +-01.1-[01]----00.0  OCZ Technology Group, Inc. RD400/400A SSD
-               +-01.3-[02-03]----00.0-[03]----00.0  ASPEED Technology, Inc. ASPEED Graphics Family
-               +-01.5-[04]--+-00.0  Intel Corporation I350 Gigabit Network Connection
-               |            +-00.1  Intel Corporation I350 Gigabit Network Connection
-               |            +-00.2  Intel Corporation I350 Gigabit Network Connection
-               |            \-00.3  Intel Corporation I350 Gigabit Network Connection
-               +-02.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
-               +-03.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
-               +-04.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
-               +-07.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
-               +-07.1-[05]--+-00.0  Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
-               |            +-00.2  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
-               |            \-00.3  Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller
-               +-08.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
-               +-08.1-[06]--+-00.0  Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
-               |            +-00.1  Advanced Micro Devices, Inc. [AMD] Zeppelin Cryptographic Coprocessor NTBCCP
-               |            +-00.2  Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode]
-               |            \-00.3  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
-               +-14.0  Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller
-               +-14.3  Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
-               +-18.0  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
-               +-18.1  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
-               +-18.2  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
-               +-18.3  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
-               +-18.4  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
-               +-18.5  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
-               +-18.6  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
-               \-18.7  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
-
-Now, how do we read this?
-
-```
-+-[10000:00]-+-02.0-[01]----00.0  Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
-|            \-03.0-[02]----00.0  Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
-```
-
-This is a lot of information, how do we read this?
-
-- The first part in brackets (`[10000:00]`) is the domain and the bus.
-- The second part (`02.0` is still unclear to me)
-- The third number (between brackets) is the device on the bus
-
-## View a single device
-
-```sh
-lspci -v -s 0000:01:00.0
-: 01:00.0 Non-Volatile memory controller: OCZ Technology Group, Inc. RD400/400A SSD (rev 01) (prog-if 02 [NVM Express])
-: 	Subsystem: OCZ Technology Group, Inc. RD400/400A SSD
-: 	Flags: bus master, fast devsel, latency 0, IRQ 41, NUMA node 0
-: 	Memory at ef800000 (64-bit, non-prefetchable) [size=16K]
-: 	Capabilities: <access denied>
-: 	Kernel driver in use: nvme
-: 	Kernel modules: nvme
-```
-
-## Reading `lspci` output
-
-    $ sudo lspci -vvv -s 0000:01:00.0
-    01:00.0 Non-Volatile memory controller: OCZ Technology Group, Inc. RD400/400A SSD (rev 01) (prog-if 02 [NVM Express])
-            Subsystem: OCZ Technology Group, Inc. RD400/400A SSD
-            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
-            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
-            Latency: 0, Cache Line Size: 64 bytes
-            Interrupt: pin A routed to IRQ 41
-            NUMA node: 0
-            Region 0: Memory at ef800000 (64-bit, non-prefetchable) [size=16K]
-            Capabilities: [40] Power Management version 3
-                    Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
-                    Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
-            Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
-                    Address: 0000000000000000  Data: 0000
-            Capabilities: [70] Express (v2) Endpoint, MSI 00
-                    DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
-                            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
-                    DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
-                            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop- FLReset-
-                            MaxPayload 128 bytes, MaxReadReq 512 bytes
-                    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
-                    LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
-                            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
-                    LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
-                            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
-                    LnkSta: Speed 8GT/s (ok), Width x4 (ok)
-                            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
-                    DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
-                             10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
-                             EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
-                             FRS- TPHComp- ExtTPHComp-
-                             AtomicOpsCap: 32bit- 64bit- 128bitCAS-
-                    DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
-                             AtomicOpsCtl: ReqEn-
-                    LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
-                    LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
-                             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
-                             Compliance De-emphasis: -6dB
-                    LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
-                             EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
-                             Retimer- 2Retimers- CrosslinkRes: unsupported
-            Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
-                    Vector table: BAR=0 offset=00002000
-                    PBA: BAR=0 offset=00003000
-            Capabilities: [100 v2] Advanced Error Reporting
-                    UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
-                    UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
-                    UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
-                    CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
-                    CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
-                    AERCap: First Error Pointer: 14, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
-                            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
-                    HeaderLog: 05000001 0000010f 02000010 0f86d1a0
-            Capabilities: [178 v1] Secondary PCI Express
-                    LnkCtl3: LnkEquIntrruptEn- PerformEqu-
-                    LaneErrStat: 0
-            Capabilities: [198 v1] Latency Tolerance Reporting
-                    Max snoop latency: 0ns
-                    Max no snoop latency: 0ns
-            Capabilities: [1a0 v1] L1 PM Substates
-                    L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+
-                              PortCommonModeRestoreTime=255us PortTPowerOnTime=400us
-                    L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
-                               T_CommonMode=0us LTR1.2_Threshold=0ns
-                    L1SubCtl2: T_PwrOn=10us
-            Kernel driver in use: nvme
-            Kernel modules: nvme
-
-A few things to note from this output:
-
-- **GT/s** is the number of transactions supported (here, 8 billion transactions / second). This is a gen3 controller (gen1 is 2.5 and gen2 is 5)
-- **LnkCap** is the capabilities which were communicated, and **LnkSta** is the current status. You want them to report the same values. If they don't, you are not using the hardware as it is intended (here I'm assuming the hardware is intended to work as a gen3 controller). In case the device is downgraded, the output will be like this: `LnkSta: Speed 2.5GT/s (downgraded), Width x16 (ok)`
-- **width** is the number of lanes that can be used by the device (here, we can use 4 lanes)
-- **MaxPayload** is the maximum size of a PCIe packet
````
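A quick way to spot links that trained below their capability across a whole machine, using nothing beyond plain `lspci`: compare the two lines for every device and look for `(downgraded)` in `LnkSta`:

```bash
# Capability vs. negotiated status for every device; a link running
# below its capability shows "(downgraded)" in the LnkSta line.
sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'
```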
-
-    10000:01:00.0 Non-Volatile memory controller: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] (prog-if 02 [NVM Express])
-    ...
-        Capabilities: [100 v1] Advanced Error Reporting
-            UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
-            UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
-            UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
-            CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
-            CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
-            AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
-
-- The Uncorrectable Error Status (UESta) reports the error status of
-  individual uncorrectable error sources (no bits are set above):
-  - Data Link Protocol Error (DLP)
-  - Surprise Down Error (SDES)
-  - Poisoned TLP (TLP)
-  - Flow Control Protocol Error (FCP)
-  - Completion Timeout (CmpltTO)
-  - Completer Abort (CmpltAbrt)
-  - Unexpected Completion (UnxCmplt)
-  - Receiver Overflow (RxOF)
-  - Malformed TLP (MalfTLP)
-  - ECRC Error (ECRC)
-  - Unsupported Request Error (UnsupReq)
-  - ACS Violation (ACSViol)
-- The Uncorrectable Error Mask (UEMsk) controls the reporting of
-  individual errors by the device to the PCIe root complex. A masked
-  error (bit set) is not recorded or reported (above, no errors are
-  being masked).
-- The Uncorrectable Error Severity (UESvrt) controls whether an
-  individual error is reported as Non-fatal (clear) or Fatal (set).
-- The Correctable Error Status (CESta) reports the error status of
-  individual correctable error sources (no bits are set above):
-  - Receiver Error (RxErr)
-  - Bad TLP status (BadTLP)
-  - Bad DLLP status (BadDLLP)
-  - Replay Timer Timeout status (Timeout)
-  - REPLAY NUM Rollover status (Rollover)
-  - Advisory Non-Fatal Error (NonFatalErr)
-- The Correctable Error Mask (CEMsk) controls the reporting of
-  individual errors by the device to the PCIe root complex. A masked
-  error (bit set) is not reported to the RC. Above, Advisory Non-Fatal
-  Errors are being masked; this bit is set by default to enable
-  compatibility with software that does not comprehend Role-Based
-  error reporting.
-- The Advanced Error Capabilities and Control Register (AERCap)
-  enables various capabilities (the output above indicates the device
-  is capable of generating and checking ECRC, but neither is enabled):
-  - First Error Pointer identifies the bit position of the first
-    error reported in the Uncorrectable Error Status register
-  - ECRC Generation Capable (GenCap) indicates, if set, that the
-    function is capable of generating ECRC
-  - ECRC Generation Enable (GenEn) indicates if ECRC generation is
-    enabled (set)
-  - ECRC Check Capable (ChkCap) indicates, if set, that the function
-    is capable of checking ECRC
-  - ECRC Check Enable (ChkEn) indicates if ECRC checking is enabled
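-
-Rather than counting these RW1C bits by hand, recent kernels expose
-per-device AER counters through sysfs. A sketch, assuming AER is
-enabled and the kernel is recent enough to provide the stats (using
-the Intel device above at `10000:01:00.0`):
-
-```sh
-# Cumulative AER error counts maintained by the kernel
-cat /sys/bus/pci/devices/10000:01:00.0/aer_dev_correctable
-cat /sys/bus/pci/devices/10000:01:00.0/aer_dev_nonfatal
-cat /sys/bus/pci/devices/10000:01:00.0/aer_dev_fatal
-
-# AER events are also logged by the kernel
-sudo dmesg | grep -i aer
-```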
-
-## Compute Express Link (CXL)
-
-[Compute Express Link](https://en.wikipedia.org/wiki/Compute_Express_Link) (CXL) is an open standard for high-speed central processing unit (CPU)-to-device and CPU-to-memory connections, designed for high-performance data center computers. The standard is built on top of the PCIe physical interface, with protocols for I/O, memory, and cache coherence.
diff --git a/content/tailscale-docker-https.md b/content/tailscale-docker-https.md
index 1094ca6..1b31f62 100644
--- a/content/tailscale-docker-https.md
+++ b/content/tailscale-docker-https.md
@@ -1,8 +1,6 @@
 +++
 title = "Tailscale, Docker and HTTPS"
 date = "2021-12-29"
-[taxonomies]
-tags = ["containers"]
 +++
 
 I run a number of services in my home network. For the majority of these services, I don't want to make them available on the internet; I only want to be able to access them when I'm on my home network. However, sometimes I'm not at home and I still want to access them. So far I've been using plain [wireguard](https://www.wireguard.com/) to achieve this. While the initial configuration for wireguard is pretty simple, it starts to be a bit more cumbersome as I add more hosts/containers. It's also not easy to share keys with other folks if I want to give access to some of the machines or services. For that reason I decided to take a look at [tailscale](https://tailscale.com/).
diff --git a/content/working-with-go.md b/content/working-with-go.md
deleted file mode 100644
index 2a5d7a6..0000000
--- a/content/working-with-go.md
+++ /dev/null
@@ -1,285 +0,0 @@
-+++
-title = "Working with Go"
-date = 2021-08-05
-[taxonomies]
-tags = ["go"]
-+++
-
-_This document assumes go version \>= 1.16_.
-
-## Go Modules
-
-[Go modules](https://blog.golang.org/using-go-modules) were introduced
-in 2018 with Go 1.11. A number of changes came with [Go
-1.16](https://blog.golang.org/go116-module-changes). This document is a
-reference for me so that I can find answers to things I keep forgetting.
-
-### Creating a new module
-
-To create a new module, run `go mod init golang.fcuny.net/m`. This
-creates a `go.mod` file; a `go.sum` file will appear once the module
-has dependencies.
-
-In the `go.mod` file you'll find:
-
-- the module import path (prefixed with `module`)
-- the list of dependencies (within `require`)
-- the version of Go to use for the module
-
-### Versioning
-
-To bump the version of a module:
-
-```bash
-$ git tag v1.2.3
-$ git push --tags
-```
-
-Then as a user:
-
-```bash
-$ go get -d golang.fcuny.net/m@v1.2.3
-```
-
-### Updating dependencies
-
-To add missing and drop unused dependencies, run `go mod tidy`. To
-upgrade the dependencies themselves, run `go get -u ./...`.
-
-### Editing a module
-
-If you need to modify a module, you can check out the module in your
-workspace (`git clone <module URL>`).
-
-Edit the `go.mod` file to add
-
-```go
-replace <module URL> => <path of the local checkout>
-```
-
-Then modify the code of the module, and the next time you compile the
-project, the cloned module will be used.
-
-This is particularly useful when trying to debug an issue with an
-external module.
-
-### Vendor-ing modules
-
-It's still possible to vendor modules by running `go mod vendor`. This
-can be useful in the case of a CI setup that does not have access to
-the internet.
-
-### Proxy
-
-As of version 1.13, the variable `GOPROXY` defaults to
-`https://proxy.golang.org,direct` (see
-[here](https://github.com/golang/go/blob/c95464f0ea3f87232b1f3937d1b37da6f335f336/src/cmd/go/internal/cfg/cfg.go#L269)).
-As a result, when running something like
-`go get golang.org/x/tools/gopls@latest`, the request goes through the
-proxy.
-
-There are a number of ways to control this behavior; they are
-documented [here](https://golang.org/ref/mod#private-modules).
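-
-For example, to keep a private module from ever hitting the public
-proxy and checksum database (a minimal sketch; `git.example.com`
-stands in for a hypothetical private host):
-
-```bash
-# Fetch anything under git.example.com directly from its VCS,
-# bypassing proxy.golang.org and sum.golang.org
-go env -w GOPRIVATE='git.example.com/*'
-go get git.example.com/team/project@latest
-```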
-
-There are a few interesting things that can be done when using the
-proxy, through a few special URLs (documented in more detail
-[here](https://golang.org/ref/mod#goproxy-protocol)):
-
-| path                  | description                                                                               |
-| --------------------- | ----------------------------------------------------------------------------------------- |
-| $mod/@v/list          | Returns the list of known versions - there's one version per line and it's in plain text |
-| $mod/@v/$version.info | Returns metadata about a version in JSON format                                           |
-| $mod/@v/$version.mod  | Returns the `go.mod` file for that version                                                |
-
-For example, looking at the most recent versions for `gopls`:
-
-```bash
-; curl -s -L https://proxy.golang.org/golang.org/x/tools/gopls/@v/list|sort -r|head
-v0.7.1-pre.2
-v0.7.1-pre.1
-v0.7.1
-v0.7.0-pre.3
-v0.7.0-pre.2
-v0.7.0-pre.1
-v0.7.0
-v0.6.9-pre.1
-v0.6.9
-v0.6.8-pre.1
-```
-
-Let's check the metadata for the most recent version (the timestamp is
-elided here):
-
-```bash
-; curl -s -L https://proxy.golang.org/golang.org/x/tools/gopls/@v/v0.7.1-pre.2.info
-{"Version":"v0.7.1-pre.2","Time":"..."}
-```
-
-And let's look at the content of the `go.mod` for that version too:
-
-```bash
-; curl -s -L https://proxy.golang.org/golang.org/x/tools/gopls/@v/v0.7.1-pre.2.mod
-module golang.org/x/tools/gopls
-
-go 1.17
-
-require (
-    github.com/BurntSushi/toml v0.3.1 // indirect
-    github.com/google/go-cmp v0.5.5
-    github.com/google/safehtml v0.0.2 // indirect
-    github.com/jba/templatecheck v0.6.0
-    github.com/sanity-io/litter v1.5.0
-    github.com/sergi/go-diff v1.1.0
-    golang.org/x/mod v0.4.2
-    golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
-    golang.org/x/sys v0.0.0-20210510120138-977fb7262007
-    golang.org/x/text v0.3.6 // indirect
-    golang.org/x/tools v0.1.6-0.20210802203754-9b21a8868e16
-    golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
-    honnef.co/go/tools v0.2.0
-    mvdan.cc/gofumpt v0.1.1
-    mvdan.cc/xurls/v2 v2.2.0
-)
-```
-
-## Tooling
-
-### LSP
-
-`gopls` is the default implementation of the Language Server Protocol
-for Go, maintained by the Go team. To install the latest version, run
-`go install golang.org/x/tools/gopls@latest`.
-
-### `staticcheck`
-
-[`staticcheck`](https://staticcheck.io/) is a great tool to run against
-your code to find issues. To install the latest version, run
-`go install honnef.co/go/tools/cmd/staticcheck@latest`.
-
-## Emacs integration
-
-### `go-mode`
-
-[This is the mode](https://github.com/dominikh/go-mode.el) to install to
-get syntax highlighting (mostly).
-
-### Integration with LSP
-
-Emacs has pretty good integration with LSP, and ["Eglot for better
-programming experience in
-Emacs"](https://whatacold.io/blog/2022-01-22-emacs-eglot-lsp/) is a good
-starting point.
-
-#### eglot
-
-[This is the main mode to install](https://github.com/joaotavora/eglot).
-
-The configuration is straightforward; this is what I use:
-
-```lisp
-;; for go's LSP I want to use staticcheck and placeholders for completion
-(customize-set-variable 'eglot-workspace-configuration
-                        '((:gopls .
-                           ((staticcheck . t)
-                            (matcher . "CaseSensitive")
-                            (usePlaceholders . t)))))
-
-;; ensure we load eglot for some specific modes
-(dolist (hook '(go-mode-hook nix-mode-hook))
-  (add-hook hook 'eglot-ensure))
-```
-
-`eglot` integrates well with existing Emacs modes, mainly `xref`,
-`flymake`, and `eldoc`.
-
-## Profiling
-
-### pprof
-
-[pprof](https://github.com/google/pprof) is a tool to visualize
-performance data.
-Let's start with the following test (note the `b.ReportAllocs()` call,
-which the profile listing below relies on):
-
-```go
-package main
-
-import (
-    "strings"
-    "testing"
-)
-
-func BenchmarkStringJoin(b *testing.B) {
-    b.ReportAllocs()
-    input := []string{"a", "b"}
-    for i := 0; i <= b.N; i++ {
-        r := strings.Join(input, " ")
-        if r != "a b" {
-            b.Errorf("want a b got %s", r)
-        }
-    }
-}
-```
-
-Let's run a benchmark with
-`go test . -bench=. -cpuprofile cpu_profile.out`:
-
-```bash
-goos: linux
-goarch: amd64
-pkg: golang.fcuny.net/m
-cpu: Intel(R) Core(TM) i3-1005G1 CPU @ 1.20GHz
-BenchmarkStringJoin-4   41833486   26.85 ns/op   3 B/op   1 allocs/op
-PASS
-ok      golang.fcuny.net/m      1.327s
-```
-
-And let's take a look at the profile with
-`go tool pprof cpu_profile.out`
-
-```bash
-File: m.test
-Type: cpu
-Time: Aug 15, 2021 at 3:01pm (PDT)
-Duration: 1.31s, Total samples = 1.17s (89.61%)
-Entering interactive mode (type "help" for commands, "o" for options)
-(pprof) top
-Showing nodes accounting for 1100ms, 94.02% of 1170ms total
-Showing top 10 nodes out of 41
-      flat  flat%   sum%        cum   cum%
-     240ms 20.51% 20.51%      240ms 20.51%  runtime.memmove
-     220ms 18.80% 39.32%      320ms 27.35%  runtime.mallocgc
-     130ms 11.11% 50.43%      450ms 38.46%  runtime.makeslice
-     110ms  9.40% 59.83%     1150ms 98.29%  golang.fcuny.net/m.BenchmarkStringJoin
-     110ms  9.40% 69.23%      580ms 49.57%  strings.(*Builder).grow (inline)
-     110ms  9.40% 78.63%     1040ms 88.89%  strings.Join
-      70ms  5.98% 84.62%      300ms 25.64%  strings.(*Builder).WriteString
-      50ms  4.27% 88.89%      630ms 53.85%  strings.(*Builder).Grow (inline)
-      40ms  3.42% 92.31%       40ms  3.42%  runtime.nextFreeFast (inline)
-      20ms  1.71% 94.02%       20ms  1.71%  runtime.getMCache (inline)
-```
-
-We can get a breakdown of the data for our module:
-
-```bash
-(pprof) list golang.fcuny.net
-Total: 1.17s
-ROUTINE ======================== golang.fcuny.net/m.BenchmarkStringJoin in /home/fcuny/workspace/gobench/app_test.go
-     110ms      1.15s (flat, cum) 98.29% of Total
-         .          .      5:    "testing"
-         .          .      6:)
-         .          .      7:
-         .          .      8:func BenchmarkStringJoin(b *testing.B) {
-         .          .      9:    b.ReportAllocs()
-      10ms       10ms     10:    input := []string{"a", "b"}
-         .          .     11:    for i := 0; i <= b.N; i++ {
-      20ms      1.06s     12:        r := strings.Join(input, " ")
-      80ms       80ms     13:        if r != "a b" {
-         .          .     14:            b.Errorf("want a b got %s", r)
-         .          .     15:        }
-         .          .     16:    }
-         .          .     17:}
-```
diff --git a/content/working-with-nix.md b/content/working-with-nix.md
deleted file mode 100644
index 1269963..0000000
--- a/content/working-with-nix.md
+++ /dev/null
@@ -1,45 +0,0 @@
-+++
-title = "working with nix"
-date = 2022-05-10
-[taxonomies]
-tags = ["nix"]
-+++
-
-## the `nix develop` command
-
-The `nix develop` command is for working on a repository. If our
-repository contains a `Makefile`, it will be used by the various
-sub-commands.
-
-`nix develop` supports multiple
-[phases](https://nixos.org/manual/nixpkgs/stable/#sec-stdenv-phases),
-and they map as follows:
-
-| phase          | default         | command                   | note |
-| -------------- | --------------- | ------------------------- | ---- |
-| configurePhase | `./configure`   | `nix develop --configure` |      |
-| buildPhase     | `make`          | `nix develop --build`     |      |
-| checkPhase     | `make check`    | `nix develop --check`     |      |
-| installPhase   | `make install`  | `nix develop --install`   |      |
-
-In the repository, running `nix develop --build` will build the binary
-**using the Makefile**. This is different from running `nix build`.
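-
-To make the mapping above concrete, a quick sketch (not specific to
-this repository; it assumes a `Makefile` with the usual targets):
-
-```sh
-nix develop              # interactive shell with the build environment
-nix develop --build      # non-interactively run the buildPhase (make)
-nix develop --check      # run the checkPhase (make check)
-nix develop --install    # run the installPhase (make install)
-```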
-
-## the `nix build` and `nix run` commands
-
-### for Go
-
-For Go, there's the `buildGoModule` function. Looking at the
-[source](https://github.com/NixOS/nixpkgs/blob/fb7287e6d2d2684520f756639846ee07f6287caa/pkgs/development/go-modules/generic/default.nix)
-we can see there's a definition of what will be done for each phase. As
-a result, we don't have to define them ourselves.
-
-If we run `nix build` in the repository, it will run the default [build
-phase](https://github.com/NixOS/nixpkgs/blob/fb7287e6d2d2684520f756639846ee07f6287caa/pkgs/development/go-modules/generic/default.nix#L171).
-
-## `buildInputs` or `nativeBuildInputs`
-
-- `nativeBuildInputs` is for dependencies needed at build time
-  (compilers, code generators, ...); they run on the build machine
-- `buildInputs` is for dependencies the built program needs at run
-  time (libraries it links against, ...)
@@ -1,5 +1,23 @@
 {
   "nodes": {
+    "devshell": {
+      "inputs": {
+        "nixpkgs": "nixpkgs"
+      },
+      "locked": {
+        "lastModified": 1728330715,
+        "narHash": "sha256-xRJ2nPOXb//u1jaBnDP56M7v5ldavjbtR6lfGqSvcKg=",
+        "owner": "numtide",
+        "repo": "devshell",
+        "rev": "dd6b80932022cea34a019e2bb32f6fa9e494dfef",
+        "type": "github"
+      },
+      "original": {
+        "owner": "numtide",
+        "repo": "devshell",
+        "type": "github"
+      }
+    },
     "flake-compat": {
       "flake": false,
       "locked": {
@@ -57,16 +75,16 @@ },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1726943722,
-        "narHash": "sha256-VEp6qlTh0CW61rfypPnC9KZbEgOb0hoJGjXQsOaNSPE=",
-        "owner": "nixos",
+        "lastModified": 1722073938,
+        "narHash": "sha256-OpX0StkL8vpXyWOGUD6G+MA26wAXK6SpT94kLJXo6B4=",
+        "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "46827afca1aac80168f1bf2c9966f87c49339e25",
+        "rev": "e36e9f57337d0ff0cf77aceb58af4c805472bfae",
         "type": "github"
       },
       "original": {
-        "owner": "nixos",
-        "ref": "master",
+        "owner": "NixOS",
+        "ref": "nixpkgs-unstable",
         "repo": "nixpkgs",
         "type": "github"
       }
     }
@@ -89,6 +107,22 @@ },
     "nixpkgs_2": {
       "locked": {
+        "lastModified": 1726943722,
+        "narHash": "sha256-VEp6qlTh0CW61rfypPnC9KZbEgOb0hoJGjXQsOaNSPE=",
+        "owner": "nixos",
+        "repo": "nixpkgs",
+        "rev": "46827afca1aac80168f1bf2c9966f87c49339e25",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nixos",
+        "ref": "master",
+        "repo": "nixpkgs",
+        "type": "github"
+      }
+    },
+    "nixpkgs_3": {
+      "locked": {
         "lastModified": 1719082008,
         "narHash": "sha256-jHJSUH619zBQ6WdC21fFAlDxHErKVDJ5fpN0Hgx4sjs=",
         "owner": "NixOS",
@@ -103,11 +137,27 @@
         "type": "github"
       }
     },
+    "nixpkgs_4": {
+      "locked": {
+        "lastModified": 1733097829,
+        "narHash": "sha256-9hbb1rqGelllb4kVUCZ307G2k3/UhmA8PPGBoyuWaSw=",
+        "owner": "nixos",
+        "repo": "nixpkgs",
+        "rev": "2c15aa59df0017ca140d9ba302412298ab4bf22a",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nixos",
+        "ref": "nixpkgs-unstable",
+        "repo": "nixpkgs",
+        "type": "github"
+      }
+    },
     "pre-commit-hooks": {
       "inputs": {
         "flake-compat": "flake-compat",
         "gitignore": "gitignore",
-        "nixpkgs": "nixpkgs_2",
+        "nixpkgs": "nixpkgs_3",
         "nixpkgs-stable": "nixpkgs-stable"
       },
       "locked": {
@@ -126,9 +176,11 @@ },
     "root": {
       "inputs": {
+        "devshell": "devshell",
         "flake-utils": "flake-utils",
-        "nixpkgs": "nixpkgs",
-        "pre-commit-hooks": "pre-commit-hooks"
+        "nixpkgs": "nixpkgs_2",
+        "pre-commit-hooks": "pre-commit-hooks",
+        "treefmt-nix": "treefmt-nix"
       }
     },
     "systems": {
@@ -145,6 +197,24 @@
         "repo": "default",
         "type": "github"
       }
+    },
+    "treefmt-nix": {
+      "inputs": {
+        "nixpkgs": "nixpkgs_4"
+      },
+      "locked": {
+        "lastModified": 1735135567,
+        "narHash": "sha256-8T3K5amndEavxnludPyfj3Z1IkcFdRpR23q+T0BVeZE=",
+        "owner": "numtide",
+        "repo": "treefmt-nix",
+        "rev": "9e09d30a644c57257715902efbb3adc56c79cf28",
+        "type": "github"
+      },
+      "original": {
+        "owner": "numtide",
+        "repo":
"treefmt-nix", + "type": "github" + } } }, "root": "root", @@ -5,6 +5,8 @@ nixpkgs.url = "github:nixos/nixpkgs/master"; flake-utils.url = "github:numtide/flake-utils"; pre-commit-hooks.url = "github:cachix/pre-commit-hooks.nix"; + devshell.url = "github:numtide/devshell"; + treefmt-nix.url = "github:numtide/treefmt-nix"; }; outputs = @@ -13,19 +15,40 @@ nixpkgs, flake-utils, pre-commit-hooks, + devshell, + treefmt-nix, }: flake-utils.lib.eachDefaultSystem ( system: let - pkgs = nixpkgs.legacyPackages.${system}; + pkgs = import nixpkgs { + inherit system; + overlays = [ + devshell.overlays.default + ]; + }; + + treefmt = ( + treefmt-nix.lib.mkWrapper pkgs { + projectRootFile = "flake.nix"; + programs = { + actionlint.enable = true; + deadnix.enable = true; + jsonfmt.enable = true; + just.enable = true; + nixfmt.enable = true; + prettier.enable = true; + taplo.enable = true; + typos.enable = true; + }; + settings.formatter.typos.excludes = [ + "*.jpeg" + "*.jpg" + ]; + } + ); in { - apps = { - default = { - type = "app"; - program = "${self.packages."${system}".zola}/bin/zola"; - }; - }; packages = { default = @@ -42,13 +65,14 @@ buildPhase = '' mkdir -p $out ${pkgs.zola}/bin/zola build -o $out -f - ${pkgs.pandoc}/bin/pandoc --self-contained --css static/css/resume.css \ + ${pkgs.pandoc}/bin/pandoc --self-contained --css static/css/resume.css \ --from markdown --to html --output $out/resume.html resume/resume.md - ${pkgs.pandoc}/bin/pandoc --self-contained --css static/css/resume.css \ + ${pkgs.pandoc}/bin/pandoc --self-contained --css static/css/resume.css \ --from markdown --to pdf --output $out/resume.pdf resume/resume.md ''; dontInstall = true; }; + zola = pkgs.writeShellScriptBin "zola" '' set -euo pipefail export PATH=${ @@ -61,13 +85,26 @@ ''; }; + apps = { + default = { + type = "app"; + program = "${self.packages."${system}".zola}/bin/zola"; + }; + check-links = pkgs.writeShellScriptBin "check-links" '' + ${pkgs.lychee}/bin/lychee --quiet --no-progress --base="${self.packages.default}/public" "${self.packages.default}/public" + ''; + + }; + formatter = treefmt; + checks = { pre-commit-check = pre-commit-hooks.lib.${system}.run { src = ./.; hooks = { - nixfmt-rfc-style.enable = true; - check-toml.enable = true; - check-yaml.enable = true; + treefmt = { + enable = true; + excludes = [ ".*" ]; + }; check-merge-conflicts.enable = true; end-of-file-fixer.enable = true; actionlint.enable = true; @@ -75,9 +112,9 @@ }; }; - devShells.default = pkgs.mkShell { - inherit (self.checks.${system}.pre-commit-check) shellHook; - buildInputs = with pkgs; [ + devShells.default = pkgs.devshell.mkShell { + name = "python-scripts"; + packages = with pkgs; [ zola git treefmt @@ -85,11 +122,15 @@ just taplo nodePackages.prettier - awscli - imagemagick - exiftool treefmt ]; + devshell.startup.pre-commit.text = self.checks.${system}.pre-commit-check.shellHook; + env = [ + { + name = "DEVSHELL_NO_MOTD"; + value = "1"; + } + ]; }; } ); @@ -1,66 +1,22 @@ # Run the local HTTP server run: - zola serve + zola serve # Generate the content of the site under ./docs build: - zola build + nix build # Format files fmt: - treefmt + nix fmt + +check: + nix flake check # Check that all the links are valid check-links: build - lychee ./docs/**/*.html + lychee ./result/**/*.html # Update flake dependencies update-deps: - nix flake update --commit-lock-file - -# Publish the site to https://fcuny.net -publish: fmt verify-gps-removal build check-links - rsync -a docs/ fcuny@fcuny.net:/srv/www/fcuny.net - -# Remove GPS data 
from JPG, JPEG, and PNG files in the static directory -remove-gps-data: - #!/usr/bin/env bash - set -euo pipefail - echo "Removing GPS data from images in the static directory..." - find ./static -type f \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" \) -print0 | \ - while IFS= read -r -d '' file; do - echo "Processing: $file" - if exiftool -GPS*= "$file"; then - if [ -f "${file}_original" ]; then - echo "GPS data removed from $file" - rm "${file}_original" - else - echo "No GPS data found in $file" - fi - else - echo "Error processing $file" - fi - done - echo "GPS data removal process complete." - -# Verify if GPS data has been removed from images in the static directory -verify-gps-removal: - #!/usr/bin/env bash - set -euo pipefail - echo "Verifying GPS data removal in the static directory..." - found_gps=0 - while IFS= read -r -d '' file; do - if exiftool "$file" | grep -q "GPS"; then - echo "WARNING: GPS data found in $file" - found_gps=1 - else - echo "OK: No GPS data in $file" - fi - done < <(find ./static -type f \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" \) -print0) - echo "Verification complete." - if [ $found_gps -eq 1 ]; then - echo "ERROR: GPS data found in one or more images in the static directory." - exit 1 - else - echo "SUCCESS: No GPS data found in any images in the static directory." - fi + nix flake update --commit-lock-file diff --git a/static/css/carousel.css b/static/css/carousel.css deleted file mode 100644 index b354591..0000000 --- a/static/css/carousel.css +++ /dev/null @@ -1,48 +0,0 @@ -/* File: carousel.css */ - -.carousel { - width: 100%; - max-width: 46rem; /* As per your specification */ - margin: 0 auto; -} - -.carousel-main-image { - width: 100%; - height: auto; - margin-bottom: 20px; -} - -.carousel-main-image img { - width: 100%; - height: auto; - display: block; -} - -.carousel-vignettes { - display: flex; - justify-content: center; - gap: 10px; - overflow-x: auto; - padding: 10px 0; -} - -.vignette { - cursor: pointer; - transition: opacity 0.3s ease; - flex: 0 0 auto; -} - -.vignette:hover { - opacity: 0.8; -} - -.vignette.active { - border: 2px solid #007bff; -} - -.vignette img { - display: block; - width: 120px; - height: 80px; - object-fit: cover; -} diff --git a/static/css/custom.css b/static/css/custom.css index 57cf620..914db04 100644 --- a/static/css/custom.css +++ b/static/css/custom.css @@ -13,6 +13,7 @@ main { margin: 0 auto; padding: 0 1em; padding-top: 2em; + padding-bottom: 2em; } /* Typography */ @@ -27,9 +28,11 @@ h3 { h1 { font-size: 1.4rem; } + h2 { font-size: 1.3rem; } + h3 { font-size: 1.2rem; } diff --git a/static/css/resume.css b/static/css/resume.css index e3f2d8f..daf322d 100644 --- a/static/css/resume.css +++ b/static/css/resume.css @@ -1,80 +1,80 @@ body { - font-family: sans-serif; - font-size: 1em; - line-height: 1.8em; - color: #0e0e0b; - margin: 1em auto; - padding: 0 0.55em; - max-width: 50rem; + font-family: sans-serif; + font-size: 1em; + line-height: 1.8em; + color: #0e0e0b; + margin: 1em auto; + padding: 0 0.55em; + max-width: 50rem; } h1 { - color: #0e0e0b; - font-size: 1.3rem; + color: #0e0e0b; + font-size: 1.3rem; } h2, h3 { - border-bottom: 1px solid #eee; - font-style: italic; + border-bottom: 1px solid #eee; + font-style: italic; } h2 { - margin-top: 1.25em; - margin-bottom: 0.41em; - font-size: 1.2rem; + margin-top: 1.25em; + margin-bottom: 0.41em; + font-size: 1.2rem; } h3 { - margin-top: 1.5em; - margin-bottom: 0.5em; - font-size: 1rem; + margin-top: 1.5em; + margin-bottom: 0.5em; + 
font-size: 1rem; } hr { - color: #000111; - background-color: #000111; - border: none; - height: 1px; + color: #000111; + background-color: #000111; + border: none; + height: 1px; } a { - color: #047bc2; - transition: color 0.1s ease-in-out; + color: #047bc2; + transition: color 0.1s ease-in-out; } table { - width: 100%; - border-spacing: 0px; - outline: none; + width: 100%; + border-spacing: 0px; + outline: none; } td { - padding-right: 0.7em; + padding-right: 0.7em; } td:last-child { - text-align: right; + text-align: right; } table, th, td { - font-family: monospace; - color: #000; + font-family: monospace; + color: #000; } #title-block-header { - padding-right: 10px; - font-size: 1.4em; - display: flex; - font-family: monospace; - justify-content: space-between; - align-items: center; - padding-top: 0.5rem; - border-bottom: 1px; + padding-right: 10px; + font-size: 1.4em; + display: flex; + font-family: monospace; + justify-content: space-between; + align-items: center; + padding-top: 0.5rem; + border-bottom: 1px; } #experience { - padding-top: 20px; + padding-top: 20px; } diff --git a/static/images/fogcutter/IMG_0988.jpeg b/static/images/fogcutter/IMG_0988.jpeg Binary files differdeleted file mode 100644 index a63f94c..0000000 --- a/static/images/fogcutter/IMG_0988.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0989.jpeg b/static/images/fogcutter/IMG_0989.jpeg Binary files differdeleted file mode 100644 index 85dabbd..0000000 --- a/static/images/fogcutter/IMG_0989.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0990.jpeg b/static/images/fogcutter/IMG_0990.jpeg Binary files differdeleted file mode 100644 index 1f37d65..0000000 --- a/static/images/fogcutter/IMG_0990.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0991.jpeg b/static/images/fogcutter/IMG_0991.jpeg Binary files differdeleted file mode 100644 index df1ce63..0000000 --- a/static/images/fogcutter/IMG_0991.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0992.jpeg b/static/images/fogcutter/IMG_0992.jpeg Binary files differdeleted file mode 100644 index 5e25507..0000000 --- a/static/images/fogcutter/IMG_0992.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0993.jpeg b/static/images/fogcutter/IMG_0993.jpeg Binary files differdeleted file mode 100644 index b361202..0000000 --- a/static/images/fogcutter/IMG_0993.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0994.jpeg b/static/images/fogcutter/IMG_0994.jpeg Binary files differdeleted file mode 100644 index 8615f82..0000000 --- a/static/images/fogcutter/IMG_0994.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0995.jpeg b/static/images/fogcutter/IMG_0995.jpeg Binary files differdeleted file mode 100644 index 549f777..0000000 --- a/static/images/fogcutter/IMG_0995.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0996.jpeg b/static/images/fogcutter/IMG_0996.jpeg Binary files differdeleted file mode 100644 index 5c2da48..0000000 --- a/static/images/fogcutter/IMG_0996.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0997.jpeg b/static/images/fogcutter/IMG_0997.jpeg Binary files differdeleted file mode 100644 index 1f39c42..0000000 --- a/static/images/fogcutter/IMG_0997.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0998.jpeg b/static/images/fogcutter/IMG_0998.jpeg Binary files differdeleted file mode 100644 index 77183f1..0000000 --- a/static/images/fogcutter/IMG_0998.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_0999.jpeg 
b/static/images/fogcutter/IMG_0999.jpeg Binary files differdeleted file mode 100644 index 2cd5b3a..0000000 --- a/static/images/fogcutter/IMG_0999.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_1001.jpeg b/static/images/fogcutter/IMG_1001.jpeg Binary files differdeleted file mode 100644 index ff00a56..0000000 --- a/static/images/fogcutter/IMG_1001.jpeg +++ /dev/null diff --git a/static/images/fogcutter/IMG_1002.jpeg b/static/images/fogcutter/IMG_1002.jpeg Binary files differdeleted file mode 100644 index 3c08ec1..0000000 --- a/static/images/fogcutter/IMG_1002.jpeg +++ /dev/null diff --git a/static/js/carousel.js b/static/js/carousel.js deleted file mode 100644 index f30f934..0000000 --- a/static/js/carousel.js +++ /dev/null @@ -1,122 +0,0 @@ -function createCarousel(images) { - const carousel = document.createElement('div'); - carousel.className = 'carousel'; - let currentIndex = 0; - - // Create main image container - const mainImageContainer = document.createElement('div'); - mainImageContainer.className = 'carousel-main-image'; - carousel.appendChild(mainImageContainer); - - // Create vignette container - const vignetteContainer = document.createElement('div'); - vignetteContainer.className = 'carousel-vignettes'; - carousel.appendChild(vignetteContainer); - - // Function to update the main displayed image - function updateMainImage() { - mainImageContainer.innerHTML = ''; - const img = document.createElement('img'); - img.src = images[currentIndex]; - img.style.width = '100%'; - img.style.height = 'auto'; - mainImageContainer.appendChild(img); - } - - // Function to create vignettes - function createVignettes() { - vignetteContainer.innerHTML = ''; - const containerWidth = carousel.offsetWidth; - const vignetteWidth = 120; - const vignetteHeight = 80; - const vignetteMargin = 10; - const maxVignettes = Math.floor(containerWidth / (vignetteWidth + vignetteMargin)); - - const startIndex = Math.max(0, currentIndex - Math.floor(maxVignettes / 2)); - const endIndex = Math.min(images.length, startIndex + maxVignettes); - - for (let i = startIndex; i < endIndex; i++) { - const vignette = document.createElement('div'); - vignette.className = 'vignette'; - if (i === currentIndex) vignette.classList.add('active'); - - const img = document.createElement('img'); - img.src = images[i]; - img.style.width = vignetteWidth + 'px'; - img.style.height = vignetteHeight + 'px'; - img.style.objectFit = 'cover'; - - vignette.appendChild(img); - vignette.addEventListener('click', () => goToImage(i)); - vignetteContainer.appendChild(vignette); - } - } - - // Function to go to a specific image - function goToImage(index) { - currentIndex = index; - updateMainImage(); - createVignettes(); - } - - // Function to go to the next image - function nextImage() { - currentIndex = (currentIndex + 1) % images.length; - updateMainImage(); - createVignettes(); - } - - // Function to go to the previous image - function prevImage() { - currentIndex = (currentIndex - 1 + images.length) % images.length; - updateMainImage(); - createVignettes(); - } - - // Event listener for keyboard navigation - document.addEventListener('keydown', (e) => { - if (e.key === 'ArrowRight') nextImage(); - if (e.key === 'ArrowLeft') prevImage(); - }); - - // Initialize the carousel - updateMainImage(); - - // Use setTimeout to delay the initial creation of vignettes - setTimeout(() => { - createVignettes(); - }, 0); - - // Recalculate vignettes on window resize - window.addEventListener('resize', createVignettes); - - return carousel; 
-} - -// Function to initialize the carousel with specific images -function initializeCarousel(containerId, images) { - document.addEventListener('DOMContentLoaded', () => { - const container = document.getElementById(containerId); - if (container) { - const carouselElement = createCarousel(images); - container.appendChild(carouselElement); - - // Use ResizeObserver to detect when the carousel is fully rendered - const resizeObserver = new ResizeObserver(entries => { - for (let entry of entries) { - if (entry.target === carouselElement) { - createVignettes(); - resizeObserver.disconnect(); // Stop observing once vignettes are created - } - } - }); - - resizeObserver.observe(carouselElement); - } else { - console.error(`Container with id "${containerId}" not found.`); - } - }); -} - -// Make the initializeCarousel function globally available -window.initializeCarousel = initializeCarousel; diff --git a/templates/base.html b/templates/base.html index ba2afdb..729abb9 100644 --- a/templates/base.html +++ b/templates/base.html @@ -1,24 +1,28 @@ -<!DOCTYPE HTML> -<html xmlns="http://www.w3.org/1999/xhtml" lang="{{ lang }}" xml:lang="{{ lang }}"> +<!doctype html> +<html + xmlns="http://www.w3.org/1999/xhtml" + lang="{{ lang }}" + xml:lang="{{ lang }}" +> <head> <meta charset="utf-8" /> - <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" /> + <meta + name="viewport" + content="width=device-width, initial-scale=1, user-scalable=no" + /> <link rel="canonical" href="{{- current_url|safe -}}" /> <link rel="stylesheet" href="/css/custom.css" /> - <link rel="stylesheet" href="/css/carousel.css" /> <meta name="author" content="{{- config.author -}}" /> - <meta name="description" content="{%- block description -%}{{- config.description -}}{%- endblock description -%}" /> - <link rel="alternate" type="application/atom+xml" title="Blog posts" href="{{ get_url(path="/feed.xml", trailing_slash=false) }}" /> - <script src="/js/carousel.js"></script> + <meta + name="description" + content="{%- block description -%}{{- config.description -}}{%- endblock description -%}" + /> <title>{% block title %}{{- config.title -}}{% endblock title %}</title> </head> <body> - <main> - {% block content %}{% endblock content %} - </main> + <main>{% block content %}{% endblock content %}</main> </body> - </html> diff --git a/templates/bike.html b/templates/bike.html deleted file mode 100644 index db8634a..0000000 --- a/templates/bike.html +++ /dev/null @@ -1,10 +0,0 @@ -{% extends "base.html" %} - -{% block title %}{{ page.title }} - {{ config.title }}{% endblock title %} - -{% block content -%} -<h1>{{- page.title -}}</h1> - -{{ page.content | safe -}} - -{%- endblock content -%} diff --git a/templates/index.html b/templates/index.html index 43b50d1..0425f26 100644 --- a/templates/index.html +++ b/templates/index.html @@ -1,14 +1,15 @@ -{% extends "base.html" %} - -{%- block content -%} - -{% if section.content %} - {{ section.content | safe }} -{% endif %} - -{% set blogtags = get_taxonomy(kind="tags") %} -{% for tag in blogtags.items %} - <a href="{{ get_taxonomy_url(kind="tags", name=tag.name) }}">#{{ tag.name }}</a> -{% endfor %} - -{%- endblock content -%} +{% extends "base.html" %} {%- block content -%} {% if section.content %} {{ +section.content | safe }} {% endif %} + +<hr /> + +{%- for post in section.pages -%} +<section> + <ul class="post-list"> + <li> + <a href="{{- post.path|safe -}}">{{- post.title -}}</a> + <span class="post-date">{{ post.date | date(format="%d %h 
%Y")}}</span> + </li> + </ul> +</section> +{%- endfor -%} {%- endblock content -%} diff --git a/templates/page.html b/templates/page.html index 94b39ac..e027d8f 100644 --- a/templates/page.html +++ b/templates/page.html @@ -1,17 +1,9 @@ -{% extends "base.html" %} - -{% block title %}{{ page.title }} - {{ config.title }}{% endblock title %} - -{% block content -%} +{% extends "base.html" %} {% block title %}{{ page.title }} - {{ config.title +}}{% endblock title %} {% block content -%} <h1>{{ page.title }}</h1> <div class="metadata"> <span class="date">{{ page.date | date(format="%Y-%m-%d") }}</span> - {%- if page.taxonomies.tags -%} - / {% for tag in page.taxonomies.tags %} - <span class="tag"><a href="{{ get_taxonomy_url(kind="tags", name=tag) }}">#{{ tag }}</a></span> - {% endfor %} - {%- endif -%} </div> {{ page.content | safe -}} diff --git a/templates/resume.html b/templates/resume.html deleted file mode 100644 index db8634a..0000000 --- a/templates/resume.html +++ /dev/null @@ -1,10 +0,0 @@ -{% extends "base.html" %} - -{% block title %}{{ page.title }} - {{ config.title }}{% endblock title %} - -{% block content -%} -<h1>{{- page.title -}}</h1> - -{{ page.content | safe -}} - -{%- endblock content -%} diff --git a/templates/tags/list.html b/templates/tags/list.html deleted file mode 100644 index b7f904a..0000000 --- a/templates/tags/list.html +++ /dev/null @@ -1,12 +0,0 @@ -{% extends "base.html" %} - -{% block content %} -<h1>Tags</h1> -<ul> - {% for term in terms %} - <li> - <a href="{{ term.permalink }}">{{ term.name }}</a> ({{ term.pages | length }} posts) - </li> - {% endfor %} -</ul> -{% endblock content %} diff --git a/templates/tags/single.html b/templates/tags/single.html deleted file mode 100644 index 31432e4..0000000 --- a/templates/tags/single.html +++ /dev/null @@ -1,14 +0,0 @@ -{% extends "base.html" %} - -{% block content %} -<h1>#{{ term.name }}</h1> -<section> -<ul class="post-list"> -{% for page in term.pages %} - <li> - <a href="{{ page.permalink }}">{{ page.title }}</a> - <span class="post-date">{{ page.date | date(format="%d %h %Y")}}</span> - </li> -{% endfor %} -</ul> -{% endblock content %} diff --git a/treefmt.toml b/treefmt.toml index 418b759..8dd14ff 100644 --- a/treefmt.toml +++ b/treefmt.toml @@ -1,4 +1,4 @@ -[formatter.nix] +[formatter.nixfmt] command = "nixfmt" includes = ["*.nix"] |
