| author | Linus Torvalds <torvalds@linux-foundation.org> | 2025-07-30 08:58:55 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2025-07-30 08:58:55 -0700 |
| commit | 8be4d31cb8aaeea27bde4b7ddb26e28a89062ebf | |
| tree | fec3039a08284cd87f4ec9c3bea5b5a439f1859f /drivers/ptp | |
| parent | 4b290aae788e06561754b28c6842e4080957d3f7 | |
| parent | fa582ca7e187a15e772e6a72fe035f649b387a60 | |
Merge tag 'net-next-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core & protocols:
- Wrap datapath globals into net_aligned_data, to avoid false sharing
- Preserve MSG_ZEROCOPY in forwarding (e.g. out of a container)
- Add SO_INQ and SCM_INQ support to AF_UNIX
- Add SIOCINQ support to AF_VSOCK
- Add TCP_MAXSEG sockopt to MPTCP
- Add IPv6 force_forwarding sysctl to enable forwarding per interface
- Make TCP validation of whether packet fully fits in the receive
window and the rcv_buf more strict. With increased use of HW
aggregation a single "packet" can be multiple 100s of kB
- Add MSG_MORE flag to optimize large TCP transmissions via sockmap,
improves latency up to 33% for sockmap users
- Convert TCP send queue handling from tasklet to BH workqueue
- Improve BPF iteration over TCP sockets to see each socket exactly
once
- Remove obsolete and unused TCP RFC3517/RFC6675 loss recovery code
- Support enabling kernel threads for NAPI processing on per-NAPI
instance basis rather than a whole device. Fully stop the kernel
NAPI thread when threaded NAPI gets disabled. Previously thread
would stick around until ifdown due to tricky synchronization
- Allow multicast routing to take effect on locally-generated packets
- Add output interface argument for End.X in segment routing
- MCTP: add support for gateway routing, improve bind() handling
- Don't require rtnl_lock when fetching an IPv6 neighbor over Netlink
- Add a new neighbor flag ("extern_valid"), which cedes refresh
responsibilities to userspace. This is needed for EVPN multi-homing
where a neighbor entry for a multi-homed host needs to be synced
across all the VTEPs among which the host is multi-homed
- Support NUD_PERMANENT for proxy neighbor entries
- Add a new queuing discipline for IETF RFC9332 DualQ Coupled AQM
- Add sequence numbers to netconsole messages. Unregister
netconsole's console when all net targets are removed. Code
refactoring. Add a number of selftests
- Align IPSec inbound SA lookup to RFC 4301. Only SPI and protocol
should be used for an inbound SA lookup
- Support inspecting ref_tracker state via DebugFS
- Don't force bonding advertisement frames tx to ~333 ms boundaries.
Add broadcast_neighbor option to send ARP/ND on all bonded links
- Allow providing upcall pid for the 'execute' command in openvswitch
- Remove DCCP support from Netfilter's conntrack
- Disallow multiple packet duplications in the queuing layer
- Prevent use of deprecated iptables code on PREEMPT_RT
Driver API:
- Support RSS and hashing configuration over ethtool Netlink
- Add dedicated ethtool callbacks for getting and setting hashing
fields
- Add support for power budget evaluation strategy in PSE /
Power-over-Ethernet. Generate Netlink events for overcurrent etc
- Support DPLL phase offset monitoring across all device inputs.
Support providing clock reference and SYNC over separate DPLL
inputs
- Support traffic classes in devlink rate API for bandwidth
management
- Remove rtnl_lock dependency from UDP tunnel port configuration
Device drivers:
- Add a new Broadcom driver for 800G Ethernet (bnge)
- Add a standalone driver for Microchip ZL3073x DPLL
- Remove IBM's NETIUCV device driver
- Ethernet high-speed NICs:
- Broadcom (bnxt):
- support zero-copy Tx of DMABUF memory
- take page size into account for page pool recycling rings
- Intel (100G, ice, idpf):
- idpf: XDP and AF_XDP support preparations
- idpf: add flow steering
- add link_down_events statistic
- clean up the TSPLL code
- preparations for live VM migration
- nVidia/Mellanox:
- support zero-copy Rx/Tx interfaces (DMABUF and io_uring)
- optimize context memory usage for matchers
- expose serial numbers in devlink info
- support PCIe congestion metrics
- Meta (fbnic):
- add 25G, 50G, and 100G link modes to phylink
- support dumping FW logs
- Marvell/Cavium:
- support for CN20K generation of the Octeon chips
- Amazon:
- add HW clock (without timestamping, just hypervisor time access)
- Ethernet virtual:
- VirtIO net:
- support segmentation of UDP-tunnel-encapsulated packets
- Google (gve):
- support packet timestamping and clock synchronization
- Microsoft vNIC:
- add handler for device-originated servicing events
- allow dynamic MSI-X vector allocation
- support Tx bandwidth clamping
- Ethernet NICs consumer, and embedded:
- AMD:
- amd-xgbe: hardware timestamping and PTP clock support
- Broadcom integrated MACs (bcmgenet, bcmasp):
- use napi_complete_done() return value to support NAPI polling
- add support for re-starting auto-negotiation
- Broadcom switches (b53):
- support BCM5325 switches
- add bcm63xx EPHY power control
- Synopsys (stmmac):
- lots of code refactoring and cleanups
- TI:
- icssg-prueth: read firmware-names from device tree
- icssg: PRP offload support
- Microchip:
- lan78xx: convert to PHYLINK for improved PHY and MAC management
- ksz: add KSZ8463 switch support
- Intel:
- support similar queue priority scheme in multi-queue and
time-sensitive networking (taprio)
- support packet pre-emption in both
- RealTek (r8169):
- enable EEE at 5Gbps on RTL8126
- Airoha:
- add PPPoE offload support
- MDIO bus controller for Airoha AN7583
- Ethernet PHYs:
- support for the IPQ5018 internal GE PHY
- micrel KSZ9477 switch-integrated PHYs:
- add MDI/MDI-X control support
- add RX error counters
- add cable test support
- add Signal Quality Indicator (SQI) reporting
- dp83tg720: improve reset handling and reduce link recovery time
- support bcm54811 (and its MII-Lite interface type)
- air_en8811h: support resume/suspend
- support PHY counters for QCA807x and QCA808x
- support WoL for QCA807x
- CAN drivers:
- rcar_canfd: support for Transceiver Delay Compensation
- kvaser: report FW versions via devlink dev info
- WiFi:
- extended regulatory info support (6 GHz)
- add statistics and beacon monitor for Multi-Link Operation (MLO)
- support S1G aggregation, improve S1G support
- add Radio Measurement action fields
- support per-radio RTS threshold
- some work around how FIPS affects wifi, which was wrong (RC4 is
used by TKIP, not only WEP)
- improvements for unsolicited probe response handling
- WiFi drivers:
- RealTek (rtw88):
- IBSS mode for SDIO devices
- RealTek (rtw89):
- BT coexistence for MLO/WiFi7
- concurrent station + P2P support
- support for USB devices RTL8851BU/RTL8852BU
- Intel (iwlwifi):
- use embedded PNVM in (to be released) FW images to fix
compatibility issues
- many cleanups (unused FW APIs, PCIe code, WoWLAN)
- some FIPS interoperability
- MediaTek (mt76):
- firmware recovery improvements
- more MLO work
- Qualcomm/Atheros (ath12k):
- fix scan on multi-radio devices
- more EHT/Wi-Fi 7 features
- encapsulation/decapsulation offload
- Broadcom (brcm80211):
- support SDIO 43751 device
- Bluetooth:
- hci_event: add support for handling LE BIG Sync Lost event
- ISO: add socket option to report packet seqnum via CMSG
- ISO: support SCM_TIMESTAMPING for ISO TS
- Bluetooth drivers:
- intel_pcie: support Function Level Reset
- nxpuart: add support for 4M baudrate
- nxpuart: implement powerup sequence, reset, FW dump, and FW loading"
* tag 'net-next-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1742 commits)
dpll: zl3073x: Fix build failure
selftests: bpf: fix legacy netfilter options
ipv6: annotate data-races around rt->fib6_nsiblings
ipv6: fix possible infinite loop in fib6_info_uses_dev()
ipv6: prevent infinite loop in rt6_nlmsg_size()
ipv6: add a retry logic in net6_rt_notify()
vrf: Drop existing dst reference in vrf_ip6_input_dst
net/sched: taprio: align entry index attr validation with mqprio
net: fsl_pq_mdio: use dev_err_probe
selftests: rtnetlink.sh: remove esp4_offload after test
vsock: remove unnecessary null check in vsock_getname()
igb: xsk: solve negative overflow of nb_pkts in zerocopy mode
stmmac: xsk: fix negative overflow of budget in zerocopy mode
dt-bindings: ieee802154: Convert at86rf230.txt yaml format
net: dsa: microchip: Disable PTP function of KSZ8463
net: dsa: microchip: Setup fiber ports for KSZ8463
net: dsa: microchip: Write switch MAC address differently for KSZ8463
net: dsa: microchip: Use different registers for KSZ8463
net: dsa: microchip: Add KSZ8463 switch support to KSZ DSA driver
dt-bindings: net: dsa: microchip: Add KSZ8463 switch support
...
Diffstat (limited to 'drivers/ptp')
| -rw-r--r-- | drivers/ptp/ptp_chardev.c | 748 |
| -rw-r--r-- | drivers/ptp/ptp_clock.c | 2 |
2 files changed, 356 insertions, 394 deletions
diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c index 4bf421765d03..4ca5a464a46a 100644 --- a/drivers/ptp/ptp_chardev.c +++ b/drivers/ptp/ptp_chardev.c @@ -106,11 +106,9 @@ int ptp_set_pinfunc(struct ptp_clock *ptp, unsigned int pin, int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode) { - struct ptp_clock *ptp = - container_of(pccontext->clk, struct ptp_clock, clock); + struct ptp_clock *ptp = container_of(pccontext->clk, struct ptp_clock, clock); struct timestamp_event_queue *queue; char debugfsname[32]; - unsigned long flags; queue = kzalloc(sizeof(*queue), GFP_KERNEL); if (!queue) @@ -122,9 +120,8 @@ int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode) } bitmap_set(queue->mask, 0, PTP_MAX_CHANNELS); spin_lock_init(&queue->lock); - spin_lock_irqsave(&ptp->tsevqs_lock, flags); - list_add_tail(&queue->qlist, &ptp->tsevqs); - spin_unlock_irqrestore(&ptp->tsevqs_lock, flags); + scoped_guard(spinlock_irq, &ptp->tsevqs_lock) + list_add_tail(&queue->qlist, &ptp->tsevqs); pccontext->private_clkdata = queue; /* Debugfs contents */ @@ -143,402 +140,392 @@ int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode) int ptp_release(struct posix_clock_context *pccontext) { struct timestamp_event_queue *queue = pccontext->private_clkdata; - unsigned long flags; struct ptp_clock *ptp = container_of(pccontext->clk, struct ptp_clock, clock); debugfs_remove(queue->debugfs_instance); pccontext->private_clkdata = NULL; - spin_lock_irqsave(&ptp->tsevqs_lock, flags); - list_del(&queue->qlist); - spin_unlock_irqrestore(&ptp->tsevqs_lock, flags); + scoped_guard(spinlock_irq, &ptp->tsevqs_lock) + list_del(&queue->qlist); bitmap_free(queue->mask); kfree(queue); return 0; } -long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd, - unsigned long arg) +static long ptp_clock_getcaps(struct ptp_clock *ptp, void __user *arg) +{ + struct ptp_clock_caps caps = { + .max_adj = ptp->info->max_adj, + .n_alarm = ptp->info->n_alarm, + .n_ext_ts = ptp->info->n_ext_ts, + .n_per_out = ptp->info->n_per_out, + .pps = ptp->info->pps, + .n_pins = ptp->info->n_pins, + .cross_timestamping = ptp->info->getcrosststamp != NULL, + .adjust_phase = ptp->info->adjphase != NULL && + ptp->info->getmaxphase != NULL, + }; + + if (caps.adjust_phase) + caps.max_phase_adj = ptp->info->getmaxphase(ptp->info); + + return copy_to_user(arg, &caps, sizeof(caps)) ? -EFAULT : 0; +} + +static long ptp_extts_request(struct ptp_clock *ptp, unsigned int cmd, void __user *arg) +{ + struct ptp_clock_request req = { .type = PTP_CLK_REQ_EXTTS }; + struct ptp_clock_info *ops = ptp->info; + unsigned int supported_extts_flags; + + if (copy_from_user(&req.extts, arg, sizeof(req.extts))) + return -EFAULT; + + if (cmd == PTP_EXTTS_REQUEST2) { + /* Tell the drivers to check the flags carefully. */ + req.extts.flags |= PTP_STRICT_FLAGS; + /* Make sure no reserved bit is set. */ + if ((req.extts.flags & ~PTP_EXTTS_VALID_FLAGS) || + req.extts.rsv[0] || req.extts.rsv[1]) + return -EINVAL; + + /* Ensure one of the rising/falling edge bits is set. */ + if ((req.extts.flags & PTP_ENABLE_FEATURE) && + (req.extts.flags & PTP_EXTTS_EDGES) == 0) + return -EINVAL; + } else { + req.extts.flags &= PTP_EXTTS_V1_VALID_FLAGS; + memset(req.extts.rsv, 0, sizeof(req.extts.rsv)); + } + + if (req.extts.index >= ops->n_ext_ts) + return -EINVAL; + + supported_extts_flags = ptp->info->supported_extts_flags; + /* The PTP_ENABLE_FEATURE flag is always supported. 
*/ + supported_extts_flags |= PTP_ENABLE_FEATURE; + /* If the driver does not support strictly checking flags, the + * PTP_RISING_EDGE and PTP_FALLING_EDGE flags are merely hints + * which are not enforced. + */ + if (!(supported_extts_flags & PTP_STRICT_FLAGS)) + supported_extts_flags |= PTP_EXTTS_EDGES; + /* Reject unsupported flags */ + if (req.extts.flags & ~supported_extts_flags) + return -EOPNOTSUPP; + + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &ptp->pincfg_mux) + return ops->enable(ops, &req, req.extts.flags & PTP_ENABLE_FEATURE ? 1 : 0); +} + +static long ptp_perout_request(struct ptp_clock *ptp, unsigned int cmd, void __user *arg) +{ + struct ptp_clock_request req = { .type = PTP_CLK_REQ_PEROUT }; + struct ptp_perout_request *perout = &req.perout; + struct ptp_clock_info *ops = ptp->info; + + if (copy_from_user(perout, arg, sizeof(*perout))) + return -EFAULT; + + if (cmd == PTP_PEROUT_REQUEST2) { + if (perout->flags & ~PTP_PEROUT_VALID_FLAGS) + return -EINVAL; + + /* + * The "on" field has undefined meaning if + * PTP_PEROUT_DUTY_CYCLE isn't set, we must still treat it + * as reserved, which must be set to zero. + */ + if (!(perout->flags & PTP_PEROUT_DUTY_CYCLE) && + !mem_is_zero(perout->rsv, sizeof(perout->rsv))) + return -EINVAL; + + if (perout->flags & PTP_PEROUT_DUTY_CYCLE) { + /* The duty cycle must be subunitary. */ + if (perout->on.sec > perout->period.sec || + (perout->on.sec == perout->period.sec && + perout->on.nsec > perout->period.nsec)) + return -ERANGE; + } + + if (perout->flags & PTP_PEROUT_PHASE) { + /* + * The phase should be specified modulo the period, + * therefore anything equal or larger than 1 period + * is invalid. + */ + if (perout->phase.sec > perout->period.sec || + (perout->phase.sec == perout->period.sec && + perout->phase.nsec >= perout->period.nsec)) + return -ERANGE; + } + } else { + perout->flags &= PTP_PEROUT_V1_VALID_FLAGS; + memset(perout->rsv, 0, sizeof(perout->rsv)); + } + + if (perout->index >= ops->n_per_out) + return -EINVAL; + if (perout->flags & ~ops->supported_perout_flags) + return -EOPNOTSUPP; + + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &ptp->pincfg_mux) + return ops->enable(ops, &req, perout->period.sec || perout->period.nsec); +} + +static long ptp_enable_pps(struct ptp_clock *ptp, bool enable) +{ + struct ptp_clock_request req = { .type = PTP_CLK_REQ_PPS }; + struct ptp_clock_info *ops = ptp->info; + + if (!capable(CAP_SYS_TIME)) + return -EPERM; + + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &ptp->pincfg_mux) + return ops->enable(ops, &req, enable); +} + +static long ptp_sys_offset_precise(struct ptp_clock *ptp, void __user *arg) { - struct ptp_clock *ptp = - container_of(pccontext->clk, struct ptp_clock, clock); - unsigned int i, pin_index, supported_extts_flags; - struct ptp_sys_offset_extended *extoff = NULL; struct ptp_sys_offset_precise precise_offset; struct system_device_crosststamp xtstamp; - struct ptp_clock_info *ops = ptp->info; - struct ptp_sys_offset *sysoff = NULL; - struct timestamp_event_queue *tsevq; + struct timespec64 ts; + int err; + + if (!ptp->info->getcrosststamp) + return -EOPNOTSUPP; + + err = ptp->info->getcrosststamp(ptp->info, &xtstamp); + if (err) + return err; + + memset(&precise_offset, 0, sizeof(precise_offset)); + ts = ktime_to_timespec64(xtstamp.device); + precise_offset.device.sec = ts.tv_sec; + precise_offset.device.nsec = ts.tv_nsec; + ts = ktime_to_timespec64(xtstamp.sys_realtime); + precise_offset.sys_realtime.sec = ts.tv_sec; + precise_offset.sys_realtime.nsec = 
ts.tv_nsec; + ts = ktime_to_timespec64(xtstamp.sys_monoraw); + precise_offset.sys_monoraw.sec = ts.tv_sec; + precise_offset.sys_monoraw.nsec = ts.tv_nsec; + + return copy_to_user(arg, &precise_offset, sizeof(precise_offset)) ? -EFAULT : 0; +} + +static long ptp_sys_offset_extended(struct ptp_clock *ptp, void __user *arg) +{ + struct ptp_sys_offset_extended *extoff __free(kfree) = NULL; struct ptp_system_timestamp sts; - struct ptp_clock_request req; - struct ptp_clock_caps caps; + + if (!ptp->info->gettimex64) + return -EOPNOTSUPP; + + extoff = memdup_user(arg, sizeof(*extoff)); + if (IS_ERR(extoff)) + return PTR_ERR(extoff); + + if (extoff->n_samples > PTP_MAX_SAMPLES || extoff->rsv[0] || extoff->rsv[1]) + return -EINVAL; + + switch (extoff->clockid) { + case CLOCK_REALTIME: + case CLOCK_MONOTONIC: + case CLOCK_MONOTONIC_RAW: + break; + case CLOCK_AUX ... CLOCK_AUX_LAST: + if (IS_ENABLED(CONFIG_POSIX_AUX_CLOCKS)) + break; + fallthrough; + default: + return -EINVAL; + } + + sts.clockid = extoff->clockid; + for (unsigned int i = 0; i < extoff->n_samples; i++) { + struct timespec64 ts; + int err; + + err = ptp->info->gettimex64(ptp->info, &ts, &sts); + if (err) + return err; + + /* Filter out disabled or unavailable clocks */ + if (sts.pre_ts.tv_sec < 0 || sts.post_ts.tv_sec < 0) + return -EINVAL; + + extoff->ts[i][0].sec = sts.pre_ts.tv_sec; + extoff->ts[i][0].nsec = sts.pre_ts.tv_nsec; + extoff->ts[i][1].sec = ts.tv_sec; + extoff->ts[i][1].nsec = ts.tv_nsec; + extoff->ts[i][2].sec = sts.post_ts.tv_sec; + extoff->ts[i][2].nsec = sts.post_ts.tv_nsec; + } + + return copy_to_user(arg, extoff, sizeof(*extoff)) ? -EFAULT : 0; +} + +static long ptp_sys_offset(struct ptp_clock *ptp, void __user *arg) +{ + struct ptp_sys_offset *sysoff __free(kfree) = NULL; struct ptp_clock_time *pct; - struct ptp_pin_desc pd; struct timespec64 ts; - int enable, err = 0; + + sysoff = memdup_user(arg, sizeof(*sysoff)); + if (IS_ERR(sysoff)) + return PTR_ERR(sysoff); + + if (sysoff->n_samples > PTP_MAX_SAMPLES) + return -EINVAL; + + pct = &sysoff->ts[0]; + for (unsigned int i = 0; i < sysoff->n_samples; i++) { + struct ptp_clock_info *ops = ptp->info; + int err; + + ktime_get_real_ts64(&ts); + pct->sec = ts.tv_sec; + pct->nsec = ts.tv_nsec; + pct++; + if (ops->gettimex64) + err = ops->gettimex64(ops, &ts, NULL); + else + err = ops->gettime64(ops, &ts); + if (err) + return err; + pct->sec = ts.tv_sec; + pct->nsec = ts.tv_nsec; + pct++; + } + ktime_get_real_ts64(&ts); + pct->sec = ts.tv_sec; + pct->nsec = ts.tv_nsec; + + return copy_to_user(arg, sysoff, sizeof(*sysoff)) ? -EFAULT : 0; +} + +static long ptp_pin_getfunc(struct ptp_clock *ptp, unsigned int cmd, void __user *arg) +{ + struct ptp_clock_info *ops = ptp->info; + struct ptp_pin_desc pd; + + if (copy_from_user(&pd, arg, sizeof(pd))) + return -EFAULT; + + if (cmd == PTP_PIN_GETFUNC2 && !mem_is_zero(pd.rsv, sizeof(pd.rsv))) + return -EINVAL; + + if (pd.index >= ops->n_pins) + return -EINVAL; + + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &ptp->pincfg_mux) + pd = ops->pin_config[array_index_nospec(pd.index, ops->n_pins)]; + + return copy_to_user(arg, &pd, sizeof(pd)) ? 
-EFAULT : 0; +} + +static long ptp_pin_setfunc(struct ptp_clock *ptp, unsigned int cmd, void __user *arg) +{ + struct ptp_clock_info *ops = ptp->info; + struct ptp_pin_desc pd; + unsigned int pin_index; + + if (copy_from_user(&pd, arg, sizeof(pd))) + return -EFAULT; + + if (cmd == PTP_PIN_SETFUNC2 && !mem_is_zero(pd.rsv, sizeof(pd.rsv))) + return -EINVAL; + + if (pd.index >= ops->n_pins) + return -EINVAL; + + pin_index = array_index_nospec(pd.index, ops->n_pins); + scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &ptp->pincfg_mux) + return ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan); +} + +static long ptp_mask_clear_all(struct timestamp_event_queue *tsevq) +{ + bitmap_clear(tsevq->mask, 0, PTP_MAX_CHANNELS); + return 0; +} + +static long ptp_mask_en_single(struct timestamp_event_queue *tsevq, void __user *arg) +{ + unsigned int channel; + + if (copy_from_user(&channel, arg, sizeof(channel))) + return -EFAULT; + if (channel >= PTP_MAX_CHANNELS) + return -EFAULT; + set_bit(channel, tsevq->mask); + return 0; +} + +long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd, + unsigned long arg) +{ + struct ptp_clock *ptp = container_of(pccontext->clk, struct ptp_clock, clock); + void __user *argptr; if (in_compat_syscall() && cmd != PTP_ENABLE_PPS && cmd != PTP_ENABLE_PPS2) arg = (unsigned long)compat_ptr(arg); - - tsevq = pccontext->private_clkdata; + argptr = (void __force __user *)arg; switch (cmd) { - case PTP_CLOCK_GETCAPS: case PTP_CLOCK_GETCAPS2: - memset(&caps, 0, sizeof(caps)); - - caps.max_adj = ptp->info->max_adj; - caps.n_alarm = ptp->info->n_alarm; - caps.n_ext_ts = ptp->info->n_ext_ts; - caps.n_per_out = ptp->info->n_per_out; - caps.pps = ptp->info->pps; - caps.n_pins = ptp->info->n_pins; - caps.cross_timestamping = ptp->info->getcrosststamp != NULL; - caps.adjust_phase = ptp->info->adjphase != NULL && - ptp->info->getmaxphase != NULL; - if (caps.adjust_phase) - caps.max_phase_adj = ptp->info->getmaxphase(ptp->info); - if (copy_to_user((void __user *)arg, &caps, sizeof(caps))) - err = -EFAULT; - break; + return ptp_clock_getcaps(ptp, argptr); case PTP_EXTTS_REQUEST: case PTP_EXTTS_REQUEST2: - if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) { - err = -EACCES; - break; - } - memset(&req, 0, sizeof(req)); - - if (copy_from_user(&req.extts, (void __user *)arg, - sizeof(req.extts))) { - err = -EFAULT; - break; - } - if (cmd == PTP_EXTTS_REQUEST2) { - /* Tell the drivers to check the flags carefully. */ - req.extts.flags |= PTP_STRICT_FLAGS; - /* Make sure no reserved bit is set. */ - if ((req.extts.flags & ~PTP_EXTTS_VALID_FLAGS) || - req.extts.rsv[0] || req.extts.rsv[1]) { - err = -EINVAL; - break; - } - /* Ensure one of the rising/falling edge bits is set. */ - if ((req.extts.flags & PTP_ENABLE_FEATURE) && - (req.extts.flags & PTP_EXTTS_EDGES) == 0) { - err = -EINVAL; - break; - } - } else if (cmd == PTP_EXTTS_REQUEST) { - req.extts.flags &= PTP_EXTTS_V1_VALID_FLAGS; - req.extts.rsv[0] = 0; - req.extts.rsv[1] = 0; - } - if (req.extts.index >= ops->n_ext_ts) { - err = -EINVAL; - break; - } - supported_extts_flags = ptp->info->supported_extts_flags; - /* The PTP_ENABLE_FEATURE flag is always supported. */ - supported_extts_flags |= PTP_ENABLE_FEATURE; - /* If the driver does not support strictly checking flags, the - * PTP_RISING_EDGE and PTP_FALLING_EDGE flags are merely - * hints which are not enforced. 
- */ - if (!(supported_extts_flags & PTP_STRICT_FLAGS)) - supported_extts_flags |= PTP_EXTTS_EDGES; - /* Reject unsupported flags */ - if (req.extts.flags & ~supported_extts_flags) - return -EOPNOTSUPP; - req.type = PTP_CLK_REQ_EXTTS; - enable = req.extts.flags & PTP_ENABLE_FEATURE ? 1 : 0; - if (mutex_lock_interruptible(&ptp->pincfg_mux)) - return -ERESTARTSYS; - err = ops->enable(ops, &req, enable); - mutex_unlock(&ptp->pincfg_mux); - break; + if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) + return -EACCES; + return ptp_extts_request(ptp, cmd, argptr); case PTP_PEROUT_REQUEST: case PTP_PEROUT_REQUEST2: - if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) { - err = -EACCES; - break; - } - memset(&req, 0, sizeof(req)); - - if (copy_from_user(&req.perout, (void __user *)arg, - sizeof(req.perout))) { - err = -EFAULT; - break; - } - if (cmd == PTP_PEROUT_REQUEST2) { - struct ptp_perout_request *perout = &req.perout; - - if (perout->flags & ~PTP_PEROUT_VALID_FLAGS) { - err = -EINVAL; - break; - } - /* - * The "on" field has undefined meaning if - * PTP_PEROUT_DUTY_CYCLE isn't set, we must still treat - * it as reserved, which must be set to zero. - */ - if (!(perout->flags & PTP_PEROUT_DUTY_CYCLE) && - (perout->rsv[0] || perout->rsv[1] || - perout->rsv[2] || perout->rsv[3])) { - err = -EINVAL; - break; - } - if (perout->flags & PTP_PEROUT_DUTY_CYCLE) { - /* The duty cycle must be subunitary. */ - if (perout->on.sec > perout->period.sec || - (perout->on.sec == perout->period.sec && - perout->on.nsec > perout->period.nsec)) { - err = -ERANGE; - break; - } - } - if (perout->flags & PTP_PEROUT_PHASE) { - /* - * The phase should be specified modulo the - * period, therefore anything equal or larger - * than 1 period is invalid. - */ - if (perout->phase.sec > perout->period.sec || - (perout->phase.sec == perout->period.sec && - perout->phase.nsec >= perout->period.nsec)) { - err = -ERANGE; - break; - } - } - } else if (cmd == PTP_PEROUT_REQUEST) { - req.perout.flags &= PTP_PEROUT_V1_VALID_FLAGS; - req.perout.rsv[0] = 0; - req.perout.rsv[1] = 0; - req.perout.rsv[2] = 0; - req.perout.rsv[3] = 0; - } - if (req.perout.index >= ops->n_per_out) { - err = -EINVAL; - break; - } - if (req.perout.flags & ~ptp->info->supported_perout_flags) - return -EOPNOTSUPP; - req.type = PTP_CLK_REQ_PEROUT; - enable = req.perout.period.sec || req.perout.period.nsec; - if (mutex_lock_interruptible(&ptp->pincfg_mux)) - return -ERESTARTSYS; - err = ops->enable(ops, &req, enable); - mutex_unlock(&ptp->pincfg_mux); - break; + if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) + return -EACCES; + return ptp_perout_request(ptp, cmd, argptr); case PTP_ENABLE_PPS: case PTP_ENABLE_PPS2: - if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) { - err = -EACCES; - break; - } - memset(&req, 0, sizeof(req)); - - if (!capable(CAP_SYS_TIME)) - return -EPERM; - req.type = PTP_CLK_REQ_PPS; - enable = arg ? 
1 : 0; - if (mutex_lock_interruptible(&ptp->pincfg_mux)) - return -ERESTARTSYS; - err = ops->enable(ops, &req, enable); - mutex_unlock(&ptp->pincfg_mux); - break; + if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) + return -EACCES; + return ptp_enable_pps(ptp, !!arg); case PTP_SYS_OFFSET_PRECISE: case PTP_SYS_OFFSET_PRECISE2: - if (!ptp->info->getcrosststamp) { - err = -EOPNOTSUPP; - break; - } - err = ptp->info->getcrosststamp(ptp->info, &xtstamp); - if (err) - break; - - memset(&precise_offset, 0, sizeof(precise_offset)); - ts = ktime_to_timespec64(xtstamp.device); - precise_offset.device.sec = ts.tv_sec; - precise_offset.device.nsec = ts.tv_nsec; - ts = ktime_to_timespec64(xtstamp.sys_realtime); - precise_offset.sys_realtime.sec = ts.tv_sec; - precise_offset.sys_realtime.nsec = ts.tv_nsec; - ts = ktime_to_timespec64(xtstamp.sys_monoraw); - precise_offset.sys_monoraw.sec = ts.tv_sec; - precise_offset.sys_monoraw.nsec = ts.tv_nsec; - if (copy_to_user((void __user *)arg, &precise_offset, - sizeof(precise_offset))) - err = -EFAULT; - break; + return ptp_sys_offset_precise(ptp, argptr); case PTP_SYS_OFFSET_EXTENDED: case PTP_SYS_OFFSET_EXTENDED2: - if (!ptp->info->gettimex64) { - err = -EOPNOTSUPP; - break; - } - extoff = memdup_user((void __user *)arg, sizeof(*extoff)); - if (IS_ERR(extoff)) { - err = PTR_ERR(extoff); - extoff = NULL; - break; - } - if (extoff->n_samples > PTP_MAX_SAMPLES || - extoff->rsv[0] || extoff->rsv[1] || - (extoff->clockid != CLOCK_REALTIME && - extoff->clockid != CLOCK_MONOTONIC && - extoff->clockid != CLOCK_MONOTONIC_RAW)) { - err = -EINVAL; - break; - } - sts.clockid = extoff->clockid; - for (i = 0; i < extoff->n_samples; i++) { - err = ptp->info->gettimex64(ptp->info, &ts, &sts); - if (err) - goto out; - extoff->ts[i][0].sec = sts.pre_ts.tv_sec; - extoff->ts[i][0].nsec = sts.pre_ts.tv_nsec; - extoff->ts[i][1].sec = ts.tv_sec; - extoff->ts[i][1].nsec = ts.tv_nsec; - extoff->ts[i][2].sec = sts.post_ts.tv_sec; - extoff->ts[i][2].nsec = sts.post_ts.tv_nsec; - } - if (copy_to_user((void __user *)arg, extoff, sizeof(*extoff))) - err = -EFAULT; - break; + return ptp_sys_offset_extended(ptp, argptr); case PTP_SYS_OFFSET: case PTP_SYS_OFFSET2: - sysoff = memdup_user((void __user *)arg, sizeof(*sysoff)); - if (IS_ERR(sysoff)) { - err = PTR_ERR(sysoff); - sysoff = NULL; - break; - } - if (sysoff->n_samples > PTP_MAX_SAMPLES) { - err = -EINVAL; - break; - } - pct = &sysoff->ts[0]; - for (i = 0; i < sysoff->n_samples; i++) { - ktime_get_real_ts64(&ts); - pct->sec = ts.tv_sec; - pct->nsec = ts.tv_nsec; - pct++; - if (ops->gettimex64) - err = ops->gettimex64(ops, &ts, NULL); - else - err = ops->gettime64(ops, &ts); - if (err) - goto out; - pct->sec = ts.tv_sec; - pct->nsec = ts.tv_nsec; - pct++; - } - ktime_get_real_ts64(&ts); - pct->sec = ts.tv_sec; - pct->nsec = ts.tv_nsec; - if (copy_to_user((void __user *)arg, sysoff, sizeof(*sysoff))) - err = -EFAULT; - break; + return ptp_sys_offset(ptp, argptr); case PTP_PIN_GETFUNC: case PTP_PIN_GETFUNC2: - if (copy_from_user(&pd, (void __user *)arg, sizeof(pd))) { - err = -EFAULT; - break; - } - if ((pd.rsv[0] || pd.rsv[1] || pd.rsv[2] - || pd.rsv[3] || pd.rsv[4]) - && cmd == PTP_PIN_GETFUNC2) { - err = -EINVAL; - break; - } else if (cmd == PTP_PIN_GETFUNC) { - pd.rsv[0] = 0; - pd.rsv[1] = 0; - pd.rsv[2] = 0; - pd.rsv[3] = 0; - pd.rsv[4] = 0; - } - pin_index = pd.index; - if (pin_index >= ops->n_pins) { - err = -EINVAL; - break; - } - pin_index = array_index_nospec(pin_index, ops->n_pins); - if 
(mutex_lock_interruptible(&ptp->pincfg_mux)) - return -ERESTARTSYS; - pd = ops->pin_config[pin_index]; - mutex_unlock(&ptp->pincfg_mux); - if (!err && copy_to_user((void __user *)arg, &pd, sizeof(pd))) - err = -EFAULT; - break; + return ptp_pin_getfunc(ptp, cmd, argptr); case PTP_PIN_SETFUNC: case PTP_PIN_SETFUNC2: - if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) { - err = -EACCES; - break; - } - if (copy_from_user(&pd, (void __user *)arg, sizeof(pd))) { - err = -EFAULT; - break; - } - if ((pd.rsv[0] || pd.rsv[1] || pd.rsv[2] - || pd.rsv[3] || pd.rsv[4]) - && cmd == PTP_PIN_SETFUNC2) { - err = -EINVAL; - break; - } else if (cmd == PTP_PIN_SETFUNC) { - pd.rsv[0] = 0; - pd.rsv[1] = 0; - pd.rsv[2] = 0; - pd.rsv[3] = 0; - pd.rsv[4] = 0; - } - pin_index = pd.index; - if (pin_index >= ops->n_pins) { - err = -EINVAL; - break; - } - pin_index = array_index_nospec(pin_index, ops->n_pins); - if (mutex_lock_interruptible(&ptp->pincfg_mux)) - return -ERESTARTSYS; - err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan); - mutex_unlock(&ptp->pincfg_mux); - break; + if ((pccontext->fp->f_mode & FMODE_WRITE) == 0) + return -EACCES; + return ptp_pin_setfunc(ptp, cmd, argptr); case PTP_MASK_CLEAR_ALL: - bitmap_clear(tsevq->mask, 0, PTP_MAX_CHANNELS); - break; + return ptp_mask_clear_all(pccontext->private_clkdata); case PTP_MASK_EN_SINGLE: - if (copy_from_user(&i, (void __user *)arg, sizeof(i))) { - err = -EFAULT; - break; - } - if (i >= PTP_MAX_CHANNELS) { - err = -EFAULT; - break; - } - set_bit(i, tsevq->mask); - break; + return ptp_mask_en_single(pccontext->private_clkdata, argptr); default: - err = -ENOTTY; - break; + return -ENOTTY; } - -out: - kfree(extoff); - kfree(sysoff); - return err; } __poll_t ptp_poll(struct posix_clock_context *pccontext, struct file *fp, @@ -562,71 +549,46 @@ __poll_t ptp_poll(struct posix_clock_context *pccontext, struct file *fp, ssize_t ptp_read(struct posix_clock_context *pccontext, uint rdflags, char __user *buf, size_t cnt) { - struct ptp_clock *ptp = - container_of(pccontext->clk, struct ptp_clock, clock); + struct ptp_clock *ptp = container_of(pccontext->clk, struct ptp_clock, clock); struct timestamp_event_queue *queue; struct ptp_extts_event *event; - unsigned long flags; - size_t qcnt, i; - int result; + ssize_t result; queue = pccontext->private_clkdata; - if (!queue) { - result = -EINVAL; - goto exit; - } + if (!queue) + return -EINVAL; - if (cnt % sizeof(struct ptp_extts_event) != 0) { - result = -EINVAL; - goto exit; - } + if (cnt % sizeof(*event) != 0) + return -EINVAL; if (cnt > EXTTS_BUFSIZE) cnt = EXTTS_BUFSIZE; - cnt = cnt / sizeof(struct ptp_extts_event); - - if (wait_event_interruptible(ptp->tsev_wq, - ptp->defunct || queue_cnt(queue))) { + if (wait_event_interruptible(ptp->tsev_wq, ptp->defunct || queue_cnt(queue))) return -ERESTARTSYS; - } - if (ptp->defunct) { - result = -ENODEV; - goto exit; - } + if (ptp->defunct) + return -ENODEV; event = kmalloc(EXTTS_BUFSIZE, GFP_KERNEL); - if (!event) { - result = -ENOMEM; - goto exit; - } - - spin_lock_irqsave(&queue->lock, flags); + if (!event) + return -ENOMEM; - qcnt = queue_cnt(queue); + scoped_guard(spinlock_irq, &queue->lock) { + size_t qcnt = min((size_t)queue_cnt(queue), cnt / sizeof(*event)); - if (cnt > qcnt) - cnt = qcnt; - - for (i = 0; i < cnt; i++) { - event[i] = queue->buf[queue->head]; - /* Paired with READ_ONCE() in queue_cnt() */ - WRITE_ONCE(queue->head, (queue->head + 1) % PTP_MAX_TIMESTAMPS); + for (size_t i = 0; i < qcnt; i++) { + event[i] = queue->buf[queue->head]; + /* Paired with 
READ_ONCE() in queue_cnt() */ + WRITE_ONCE(queue->head, (queue->head + 1) % PTP_MAX_TIMESTAMPS); + } + cnt = qcnt * sizeof(*event); } - spin_unlock_irqrestore(&queue->lock, flags); - - cnt = cnt * sizeof(struct ptp_extts_event); - result = cnt; - if (copy_to_user(buf, event, cnt)) { + if (copy_to_user(buf, event, cnt)) result = -EFAULT; - goto free_event; - } -free_event: kfree(event); -exit: return result; } diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c index 36f57d7b4a66..1cc06b7cb17e 100644 --- a/drivers/ptp/ptp_clock.c +++ b/drivers/ptp/ptp_clock.c @@ -96,7 +96,7 @@ static int ptp_clock_settime(struct posix_clock *pc, const struct timespec64 *tp struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock); if (ptp_clock_freerun(ptp)) { - pr_err("ptp: physical clock is free running\n"); + pr_err_ratelimited("ptp: physical clock is free running\n"); return -EBUSY; } |
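
The ptp_chardev.c rework above swaps open-coded spin_lock_irqsave()/spin_unlock_irqrestore() and mutex_lock_interruptible()/mutex_unlock() pairs, plus manual kfree() on exit paths, for the scope-based helpers from <linux/cleanup.h> (scoped_guard(), scoped_cond_guard(), __free()). The sketch below illustrates only that general pattern under the same helpers; demo_dev, demo_req and demo_ioctl_get are invented names for this example and are not part of the commit.

```c
/*
 * Minimal illustration of the <linux/cleanup.h> guard pattern used in the
 * diff above. All demo_* identifiers are made up for this sketch.
 */
#include <linux/cleanup.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct demo_req {
	u32 index;
	u32 flags;
};

struct demo_dev {
	spinlock_t list_lock;		/* protects queues */
	struct mutex cfg_mux;		/* serializes configuration */
	struct list_head queues;
};

static long demo_ioctl_get(struct demo_dev *dev, void __user *arg)
{
	/*
	 * kfree() runs automatically when req goes out of scope, on every
	 * return path; kfree(NULL) is a no-op if the allocation failed.
	 */
	struct demo_req *req __free(kfree) = kzalloc(sizeof(*req), GFP_KERNEL);

	if (!req)
		return -ENOMEM;
	if (copy_from_user(req, arg, sizeof(*req)))
		return -EFAULT;

	/*
	 * IRQ-disabling spinlock held only for the scoped block; it is
	 * released automatically, including on the early return.
	 */
	scoped_guard(spinlock_irq, &dev->list_lock) {
		if (list_empty(&dev->queues))
			return -ENODEV;
	}

	/*
	 * Interruptible mutex: if the lock attempt is interrupted, the
	 * "fail" expression (return -ERESTARTSYS) runs instead of the body.
	 */
	scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &dev->cfg_mux)
		req->flags = 0;

	return copy_to_user(arg, req, sizeof(*req)) ? -EFAULT : 0;
}
```

Because the unlock and the kfree() happen at scope exit, the new ptp_* helpers in the diff can return directly from each error check instead of funnelling every path through a shared out: label, which is what lets ptp_ioctl() shrink to a plain dispatcher.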
