Been using Postgres for years. Build a database with a few million records, and stuff starts to break…
CREATE TABLE public.power_monitor (
    pwm_sensor       integer NOT NULL,
    pwm_stamp        timestamp with time zone NOT NULL,
    pwm_volts        numeric(6,1) NOT NULL,
    pwm_amps         numeric(7,4) NOT NULL,
    pwm_watts        numeric(7,1) NOT NULL,
    pwm_energy       numeric(14,0) NOT NULL,
    pwm_frequency    numeric(4,1) NOT NULL,
    pwm_power_factor numeric(5,2) NOT NULL,
    pwm_alarm        numeric(2,0) NOT NULL
);

CREATE INDEX power_monitor_idx2 ON public.power_monitor USING btree (pwm_stamp, pwm_sensor);
CREATE INDEX power_monitor_pwm_stamp_idx ON public.power_monitor USING btree (pwm_stamp DESC);
Add 4 million records and things start to blow up (queries that return over 2M rows, and aggregate queries with GROUP BY to collate the data), in particular on ARM (Orange Pi), but even on some older AMD hardware.
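To confirm JIT is what's hurting, EXPLAIN ANALYZE prints a JIT section with compilation timings whenever it kicks in. A sketch against the table above (the time range and the sample footer values are illustrative, not from the post):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT pwm_sensor,
       date_trunc('hour', pwm_stamp) AS hour,
       avg(pwm_watts)                AS avg_watts
FROM   public.power_monitor
WHERE  pwm_stamp >= now() - interval '30 days'   -- example range, assumption
GROUP  BY pwm_sensor, date_trunc('hour', pwm_stamp);

-- When JIT fires, the plan ends with a block along these lines:
--   JIT:
--     Functions: ...
--     Options: Inlining true, Optimization true, ...
--     Timing: Generation ..., Inlining ..., Optimization ..., Emission ...
-- If Generation/Optimization/Emission times dwarf the query's own runtime,
-- JIT is the problem.
```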
The problem is apparently JIT. Turn it off, and sanity returns…
jit_above_cost = -1             # default 100000; perform JIT compilation if available
                                # and query more expensive than this; -1 disables
jit_inline_above_cost = -1      # default 500000; inline small functions if query is
                                # more expensive than this; -1 disables
jit_optimize_above_cost = -1    # default 500000; use expensive JIT optimizations if
                                # query is more expensive than this; -1 disables
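Rather than editing postgresql.conf, JIT can also be switched off with the single `jit` GUC, per session or per role, which is handy for testing before changing the server-wide config. A sketch (the role name is hypothetical):

```sql
-- Disable JIT for the current session only
SET jit = off;

-- Persist the setting for a specific role ("collector" is a made-up name)
ALTER ROLE collector SET jit = off;

-- Check the current value
SHOW jit;
```

Setting `jit = off` disables compilation outright, while the `jit_*_above_cost = -1` settings above disable the individual cost thresholds; either approach stops JIT from firing on these queries.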