
Merge tag 'drm-misc-next-2023-09-11-1' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.7-rc1:

UAPI Changes:
- Nouveau changed to not set NO_PREFETCH flag explicitly.

Cross-subsystem Changes:
- Update documentation of dma-buf intro and uapi.
- fbdev/sbus fixes.
- Use initializer macros in a lot of fbdev drivers.
- Add Boris Brezillon as Panfrost driver maintainer.
- Add Jessica Zhang as drm/panel reviewer.
- Make more fbdev drivers use fb_ops helpers for deferred io.
- Small hid trailing whitespace fix.
- Use fb_ops in hid/picolcd.

Core Changes:
- Assorted small fixes to ttm tests, drm/mst.
- Documentation updates to bridge.
- Add kunit tests for some drm_fb functions.
- Rework drm_debugfs implementation.
- Update xe documentation to mark todos as completed.

Driver Changes:
- Add support to rockchip for rv1126 mipi-dsi and vop.
- Assorted small fixes to nouveau, bridge/samsung-dsim,
  bridge/lvds-codec, loongson, rockchip, panfrost, gma500, repaper,
  komeda, virtio, ssd130x.
- Add support for simple panels Mitsubishi AA084XE01 and JDI LPM102A188A.
- Documentation updates to accel/ivpu.
- Some nouveau scheduling/fence fixes.
- Power management related fixes and other fixes to ivpu.
- Assorted bridge/it66121 fixes.
- Make platform drivers return void in remove() callback.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/3da6554b-3b47-fe7d-c4ea-21f4f819dbb6@linux.intel.com
Committed by Dave Airlie on 2023-09-22 16:28:29 +10:00 in commit f107ff76a8.
125 changed files with 3038 additions and 1289 deletions.


@ -0,0 +1,94 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/jdi,lpm102a188a.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: JDI LPM102A188A 2560x1800 10.2" DSI Panel

maintainers:
  - Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt>

description: |
  This panel requires a dual-channel DSI host to operate. It supports two modes:
  - left-right: each channel drives the left or right half of the screen
  - even-odd: each channel drives the even or odd lines of the screen

  Each of the DSI channels controls a separate DSI peripheral. The peripheral
  driven by the first link (DSI-LINK1) is considered the primary peripheral
  and controls the device. The 'link2' property contains a phandle to the
  peripheral driven by the second link (DSI-LINK2).

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    const: jdi,lpm102a188a

  reg: true
  enable-gpios: true
  reset-gpios: true
  power-supply: true
  backlight: true

  ddi-supply:
    description: The regulator that provides IOVCC (1.8V).

  link2:
    $ref: /schemas/types.yaml#/definitions/phandle
    description: |
      phandle to the DSI peripheral on the secondary link. Note that the
      presence of this property marks the containing node as DSI-LINK1.

required:
  - compatible
  - reg

if:
  required:
    - link2
then:
  required:
    - power-supply
    - ddi-supply
    - enable-gpios
    - reset-gpios

additionalProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>
    #include <dt-bindings/gpio/tegra-gpio.h>

    dsia: dsi@54300000 {
        #address-cells = <1>;
        #size-cells = <0>;
        reg = <0x0 0x54300000 0x0 0x00040000>;

        link2: panel@0 {
            compatible = "jdi,lpm102a188a";
            reg = <0>;
        };
    };

    dsib: dsi@54400000 {
        #address-cells = <1>;
        #size-cells = <0>;
        reg = <0x0 0x54400000 0x0 0x00040000>;
        nvidia,ganged-mode = <&dsia>;

        link1: panel@0 {
            compatible = "jdi,lpm102a188a";
            reg = <0>;
            power-supply = <&pplcd_vdd>;
            ddi-supply = <&pp1800_lcdio>;
            enable-gpios = <&gpio TEGRA_GPIO(V, 1) GPIO_ACTIVE_HIGH>;
            reset-gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
            link2 = <&link2>;
            backlight = <&backlight>;
        };
    };
...


@ -238,6 +238,8 @@ properties:
- logictechno,lttd800480070-l6wh-rt
# Mitsubishi "AA070MC01 7.0" WVGA TFT LCD panel
- mitsubishi,aa070mc01-ca1
# Mitsubishi AA084XE01 8.4" XGA TFT LCD panel
- mitsubishi,aa084xe01
# Multi-Inno Technology Co.,Ltd MI0700S4T-6 7" 800x480 TFT Resistive Touch Module
- multi-inno,mi0700s4t-6
# Multi-Inno Technology Co.,Ltd MI0800FT-9 8" 800x600 TFT Resistive Touch Module


@ -18,6 +18,7 @@ properties:
- rockchip,rk3288-mipi-dsi
- rockchip,rk3399-mipi-dsi
- rockchip,rk3568-mipi-dsi
- rockchip,rv1126-mipi-dsi
- const: snps,dw-mipi-dsi
interrupts:
@ -77,6 +78,7 @@ allOf:
enum:
- rockchip,px30-mipi-dsi
- rockchip,rk3568-mipi-dsi
- rockchip,rv1126-mipi-dsi
then:
properties:


@ -31,6 +31,7 @@ properties:
- rockchip,rk3368-vop
- rockchip,rk3399-vop-big
- rockchip,rk3399-vop-lit
- rockchip,rv1126-vop
reg:
minItems: 1


@ -5,14 +5,30 @@ The dma-buf subsystem provides the framework for sharing buffers for
hardware (DMA) access across multiple device drivers and subsystems, and
for synchronizing asynchronous hardware access.
This is used, for example, by drm "prime" multi-GPU support, but is of
course not limited to GPU use cases.
As an example, it is used extensively by the DRM subsystem to exchange
buffers between processes, contexts, library APIs within the same
process, and also to exchange buffers with other subsystems such as
V4L2.
This document describes the way in which kernel subsystems can use and
interact with the three main primitives offered by dma-buf:
- dma-buf, representing a sg_table and exposed to userspace as a file
descriptor to allow passing between processes, subsystems, devices,
etc;
- dma-fence, providing a mechanism to signal when an asynchronous
hardware operation has completed; and
- dma-resv, which manages a set of dma-fences for a particular dma-buf
allowing implicit (kernel-ordered) synchronization of work to
preserve the illusion of coherent access
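As a minimal illustration of the first of these primitives, the sketch below
(an editorial example assuming a libdrm-based userspace client; the helper
name is hypothetical) exports a GEM buffer handle as a dma-buf file
descriptor via DRM PRIME so it can be passed to another process, device, or
subsystem::

    #include <stdint.h>
    #include <xf86drm.h>

    /* Hypothetical helper: wrap a GEM handle in a dma-buf fd. */
    static int export_gem_as_dmabuf(int drm_fd, uint32_t gem_handle)
    {
        int dmabuf_fd = -1;

        /* The returned fd can be sent over a UNIX socket to another
         * process, or handed to another device or subsystem. */
        if (drmPrimeHandleToFD(drm_fd, gem_handle,
                               DRM_CLOEXEC | DRM_RDWR, &dmabuf_fd))
            return -1;

        return dmabuf_fd;
    }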
Userspace API principles and use
--------------------------------
For more details on how to design your subsystem's API for dma-buf use, please
see Documentation/userspace-api/dma-buf-alloc-exchange.rst.
The three main components of this are: (1) dma-buf, representing a
sg_table and exposed to userspace as a file descriptor to allow passing
between devices, (2) fence, which provides a mechanism to signal when
one device has finished access, and (3) reservation, which manages the
shared or exclusive fence(s) associated with the buffer.
Shared DMA Buffers
------------------


@ -486,3 +486,10 @@ and the CRTC index is its position in this array.
.. kernel-doc:: include/uapi/drm/drm_mode.h
:internal:
dma-buf interoperability
========================
Please see Documentation/userspace-api/dma-buf-alloc-exchange.rst for
information on how dma-buf is integrated and exposed within DRM.


@ -67,14 +67,8 @@ platforms.
When the time comes for Xe, the protection will be lifted on Xe and kept in i915.
Xe driver will be protected with both STAGING Kconfig and force_probe. Changes in
the uAPI are expected while the driver is behind these protections. STAGING will
be removed when the driver uAPI gets to a mature state where we can guarantee the
no regression rule. Then force_probe will be lifted only for future platforms
that will be productized with Xe driver, but not with i915.
Xe Pre-Merge Goals
====================
Xe Pre-Merge Goals - Work-in-Progress
=======================================
Drm_scheduler
-------------
@ -94,41 +88,6 @@ depend on any other patch touching drm_scheduler itself that was not yet merged
through drm-misc. This, by itself, already includes the reach of an agreement for
uniform 1 to 1 relationship implementation / usage across drivers.
GPU VA
------
Two main goals of Xe are meeting together here:
1) Have an uAPI that aligns with modern UMD needs.
2) Early upstream engagement.
Red Hat engineers working on Nouveau proposed a new DRM feature to handle keeping
track of GPU virtual address mappings. This is still not merged upstream, but
this aligns very well with our goals and with our VM_BIND. The engagement with
upstream and the port of Xe towards GPUVA is already ongoing.
As a key measurable result, Xe needs to be aligned with the GPU VA and working in
our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
related patch should be independent and present on dri-devel or acked by
maintainers to go along with the first Xe pull request towards drm-next.
DRM_VM_BIND
-----------
Nouveau and Xe are both implementing VM_BIND and new Exec uAPIs in order to
fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
development of a common new drm_infrastructure. However, the Xe team needs to
engage with the community to explore the options of a common API.
As a key measurable result, the DRM_VM_BIND needs to be documented in this file
below, or this entire block deleted if the consensus is for independent drivers
vm_bind ioctls.
Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
Xe merged, it is mandatory to enforce the overall locking scheme for all major
structs and list (so vm and vma). So, a consensus is needed, and possibly some
common helpers. If helpers are needed, they should be also documented in this
document.
ASYNC VM_BIND
-------------
Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
@ -212,6 +171,14 @@ This item ties into the GPUVA, VM_BIND, and even long-running compute support.
As a key measurable result, we need to have a community consensus documented in
this document and the Xe driver prepared for the changes, if necessary.
Xe uAPI high level overview
=============================
.. warning:: To be done in follow-up patches after/when/where the main consensus on the various items is individually reached.
Xe Pre-Merge Goals - Completed
================================
Dev_coredump
------------
@ -229,7 +196,37 @@ infrastructure with overall possible improvements, like multiple file support
for better organization of the dumps, snapshot support, dmesg extra print,
and whatever may make sense and help the overall infrastructure.
Xe uAPI high level overview
=============================
DRM_VM_BIND
-----------
Nouveau and Xe are both implementing VM_BIND and new Exec uAPIs in order to
fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
development of a common new drm_infrastructure. However, the Xe team needs to
engage with the community to explore the options of a common API.
.. warning:: To be done in follow-up patches after/when/where the main consensus on the various items is individually reached.
As a key measurable result, the DRM_VM_BIND needs to be documented in this file
below, or this entire block deleted if the consensus is for independent drivers
vm_bind ioctls.
Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
Xe merged, it is mandatory to enforce the overall locking scheme for all major
structs and list (so vm and vma). So, a consensus is needed, and possibly some
common helpers. If helpers are needed, they should be also documented in this
document.
GPU VA
------
Two main goals of Xe are meeting together here:
1) Have an uAPI that aligns with modern UMD needs.
2) Early upstream engagement.
Red Hat engineers working on Nouveau proposed a new DRM feature to handle keeping
track of GPU virtual address mappings. This is still not merged upstream, but
this aligns very well with our goals and with our VM_BIND. The engagement with
upstream and the port of Xe towards GPUVA is already ongoing.
As a key measurable result, Xe needs to be aligned with the GPU VA and working in
our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
related patch should be independent and present on dri-devel or acked by
maintainers to go along with the first Xe pull request towards drm-next.


@ -0,0 +1,389 @@
.. SPDX-License-Identifier: GPL-2.0
.. Copyright 2021-2023 Collabora Ltd.
========================
Exchanging pixel buffers
========================
As originally designed, the Linux graphics subsystem had extremely limited
support for sharing pixel-buffer allocations between processes, devices, and
subsystems. Modern systems require extensive integration between all three
classes; this document details how applications and kernel subsystems should
approach this sharing for two-dimensional image data.
It is written with reference to the DRM subsystem for GPU and display devices,
V4L2 for media devices, and also to Vulkan, EGL and Wayland, for userspace
support; however, any other subsystem should also follow this design and advice.
Glossary of terms
=================
.. glossary::
image:
Conceptually a two-dimensional array of pixels. The pixels may be stored
in one or more memory buffers. Has width and height in pixels, pixel
format and modifier (implicit or explicit).
row:
A span along a single y-axis value, e.g. from co-ordinates (0,100) to
(200,100).
scanline:
Synonym for row.
column:
A span along a single x-axis value, e.g. from co-ordinates (100,0) to
(100,100).
memory buffer:
A piece of memory for storing (parts of) pixel data. Has stride and size
in bytes and at least one handle in some API. May contain one or more
planes.
plane:
A two-dimensional array of some or all of an image's color and alpha
channel values.
pixel:
A picture element. Has a single color value which is defined by one or
more color channel values, e.g. R, G and B, or Y, Cb and Cr. May also
have an alpha value as an additional channel.
pixel data:
Bytes or bits that represent some or all of the color/alpha channel values
of a pixel or an image. The data for one pixel may be spread over several
planes or memory buffers depending on format and modifier.
color value:
A tuple of numbers, representing a color. Each element in the tuple is a
color channel value.
color channel:
One of the dimensions in a color model. For example, RGB model has
channels R, G, and B. Alpha channel is sometimes counted as a color
channel as well.
pixel format:
A description of how pixel data represents the pixel's color and alpha
values.
modifier:
A description of how pixel data is laid out in memory buffers.
alpha:
A value that denotes the color coverage in a pixel. Sometimes used for
translucency instead.
stride:
A value that denotes the relationship between pixel-location co-ordinates
and byte-offset values. Typically used as the byte offset between two
pixels at the start of vertically-consecutive tiling blocks. For linear
layouts, the byte offset between two vertically-adjacent pixels. For
non-linear formats the stride must be computed in a consistent way, which
usually is done as-if the layout was linear.
pitch:
Synonym for stride.
Formats and modifiers
=====================
Each buffer must have an underlying format. This format describes the color
values provided for each pixel. Although each subsystem has its own format
descriptions (e.g. V4L2 and fbdev), the ``DRM_FORMAT_*`` tokens should be reused
wherever possible, as they are the standard descriptions used for interchange.
These tokens are described in the ``drm_fourcc.h`` file, which is a part of
DRM's uAPI.
Each ``DRM_FORMAT_*`` token describes the translation between a pixel
co-ordinate in an image, and the color values for that pixel contained within
its memory buffers. The number and type of color channels are described:
whether they are RGB or YUV, integer or floating-point, the size of each channel
and their locations within the pixel memory, and the relationship between color
planes.
For example, ``DRM_FORMAT_ARGB8888`` describes a format in which each pixel has
a single 32-bit value in memory. Alpha, red, green, and blue color channels are
available at 8-bit precision per channel, ordered respectively from most to
least significant bits in little-endian storage. ``DRM_FORMAT_*`` is not
affected by either CPU or device endianness; the byte pattern in memory is
always as described in the format definition, which is usually little-endian.
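As an editorial sketch of this rule (the helper name is hypothetical, not
part of any API), the bytes of a ``DRM_FORMAT_ARGB8888`` pixel land in
memory as B, G, R, A regardless of host endianness::

    #include <stdint.h>

    /* Hypothetical helper: store one DRM_FORMAT_ARGB8888 pixel. */
    static void pack_argb8888(uint8_t *dst,
                              uint8_t a, uint8_t r, uint8_t g, uint8_t b)
    {
        /* The format is a little-endian 32-bit value with A in bits
         * 31:24, so the in-memory byte order is fixed: */
        dst[0] = b; /* bits 7:0, first byte in memory */
        dst[1] = g; /* bits 15:8 */
        dst[2] = r; /* bits 23:16 */
        dst[3] = a; /* bits 31:24, last byte in memory */
    }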
As a more complex example, ``DRM_FORMAT_NV12`` describes a format in which luma
and chroma YUV samples are stored in separate planes, where the chroma plane is
stored at half the resolution in both dimensions (i.e. one U/V chroma
sample is stored for each 2x2 pixel grouping).
Format modifiers describe a translation mechanism between these per-pixel memory
samples, and the actual memory storage for the buffer. The most straightforward
modifier is ``DRM_FORMAT_MOD_LINEAR``, describing a scheme in which each plane
is laid out row-sequentially, from the top-left to the bottom-right corner.
This is considered the baseline interchange format, and most convenient for CPU
access.
Modern hardware employs much more sophisticated access mechanisms, typically
making use of tiled access and possibly also compression. For example, the
``DRM_FORMAT_MOD_VIVANTE_TILED`` modifier describes memory storage where pixels
are stored in 4x4 blocks arranged in row-major ordering, i.e. the first tile in
a plane stores pixels (0,0) to (3,3) inclusive, and the second tile in a plane
stores pixels (4,0) to (7,3) inclusive.
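To make the tiled addressing concrete, here is an editorial sketch (not a
driver API; it assumes 4 bytes per pixel and a plane width that is a
multiple of 4) computing the byte offset of pixel (x, y) under the 4x4
row-major tiling described above::

    #include <stdint.h>

    static uint64_t tiled_4x4_offset(uint32_t x, uint32_t y,
                                     uint32_t width_in_pixels)
    {
        const uint32_t cpp = 4;                  /* bytes per pixel */
        uint32_t tiles_per_row = width_in_pixels / 4;
        uint32_t tile = (y / 4) * tiles_per_row + (x / 4);
        uint32_t within = (y % 4) * 4 + (x % 4); /* position inside tile */

        return ((uint64_t)tile * 16 + within) * cpp; /* 16 px per tile */
    }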
Some modifiers may modify the number of planes required for an image; for
example, the ``I915_FORMAT_MOD_Y_TILED_CCS`` modifier adds a second plane to RGB
formats in which it stores data about the status of every tile, notably
including whether the tile is fully populated with pixel data, or can be
expanded from a single solid color.
These extended layouts are highly vendor-specific, and even specific to
particular generations or configurations of devices per-vendor. For this reason,
support of modifiers must be explicitly enumerated and negotiated by all users
in order to ensure a compatible and optimal pipeline, as discussed below.
Dimensions and size
===================
Each pixel buffer must be accompanied by logical pixel dimensions. This refers
to the number of unique samples which can be extracted from, or stored to, the
underlying memory storage. For example, even though a 1920x1080
``DRM_FORMAT_NV12`` buffer has a luma plane containing 1920x1080 samples for the Y
component, and 960x540 samples for the U and V components, the overall buffer is
still described as having dimensions of 1920x1080.
The in-memory storage of a buffer is not guaranteed to begin immediately at the
base address of the underlying memory, nor is it guaranteed that the memory
storage is tightly clipped to either dimension.
Each plane must therefore be described with an ``offset`` in bytes, which will be
added to the base address of the memory storage before performing any per-pixel
calculations. This may be used to combine multiple planes into a single memory
buffer; for example, ``DRM_FORMAT_NV12`` may be stored in a single memory buffer
where the luma plane's storage begins immediately at the start of the buffer
with an offset of 0, and the chroma plane's storage follows within the same buffer
beginning from the byte offset for that plane.
Each plane must also have a ``stride`` in bytes, expressing the offset in memory
between two contiguous rows. For example, a ``DRM_FORMAT_MOD_LINEAR`` buffer
with dimensions of 1000x1000 may have been allocated as if it were 1024x1000, in
order to allow for aligned access patterns. In this case, the buffer will still
be described with a width of 1000, however the stride will be ``1024 * bpp``,
indicating that there are 24 pixels at the positive extreme of the x axis whose
values are not significant.
Buffers may also be padded further in the y dimension, simply by allocating a
larger area than would ordinarily be required. For example, many media decoders
are not able to natively output buffers of height 1080, but instead require an
effective height of 1088 pixels. In this case, the buffer continues to be
described as having a height of 1080, with the memory allocation for each buffer
being increased to account for the extra padding.
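Putting offsets, strides and padding together, a hedged sketch (the struct
and function names are illustrative, not a kernel or library API) of a
single-buffer linear ``DRM_FORMAT_NV12`` layout::

    #include <stdint.h>

    struct plane_layout {
        uint32_t offset, stride, size;
    };

    /* Both planes in one memory buffer; height padded for a decoder
     * that writes in 16-row macroblocks. */
    static void nv12_layout(uint32_t width, uint32_t padded_height,
                            struct plane_layout p[2])
    {
        p[0].offset = 0;
        p[0].stride = width;                   /* one byte per Y sample */
        p[0].size   = p[0].stride * padded_height;

        p[1].offset = p[0].offset + p[0].size; /* chroma follows luma */
        p[1].stride = width;                   /* interleaved Cb/Cr pairs */
        p[1].size   = p[1].stride * padded_height / 2;
    }

Called as ``nv12_layout(1920, 1088, planes)``, the image is still described
as 1920x1080 even though each plane is allocated with 1088 rows.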
Enumeration
===========
Every user of pixel buffers must be able to enumerate a set of supported formats
and modifiers, described together. Within KMS, this is achieved with the
``IN_FORMATS`` property on each DRM plane, listing the supported DRM formats, and
the modifiers supported for each format. In userspace, this is supported through
the `EGL_EXT_image_dma_buf_import_modifiers`_ extension entrypoints for EGL, the
`VK_EXT_image_drm_format_modifier`_ extension for Vulkan, and the
`zwp_linux_dmabuf_v1`_ extension for Wayland.
Each of these interfaces allows users to query a set of supported
format+modifier combinations.
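For KMS, a sketch of walking the ``IN_FORMATS`` blob (assuming the blob has
already been fetched with ``drmModeGetPropertyBlob()``; error handling and
the fetch itself are omitted) could look like this, using the
``struct drm_format_modifier_blob`` layout from ``drm_mode.h``::

    #include <stdint.h>
    #include <stdio.h>
    #include <drm/drm_mode.h> /* include path may vary with libdrm setup */

    static void dump_in_formats(const struct drm_format_modifier_blob *blob)
    {
        const uint32_t *formats =
            (const uint32_t *)((const char *)blob + blob->formats_offset);
        const struct drm_format_modifier *mods =
            (const struct drm_format_modifier *)((const char *)blob +
                                                 blob->modifiers_offset);

        for (uint32_t m = 0; m < blob->count_modifiers; m++) {
            /* Each entry carries a 64-bit mask over a window of the
             * format array starting at mods[m].offset. */
            for (uint32_t f = 0; f < 64; f++) {
                if (mods[m].offset + f >= blob->count_formats)
                    break;
                if (!(mods[m].formats & (1ULL << f)))
                    continue;
                printf("format 0x%08x modifier 0x%016llx\n",
                       formats[mods[m].offset + f],
                       (unsigned long long)mods[m].modifier);
            }
        }
    }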
Negotiation
===========
It is the responsibility of userspace to negotiate an acceptable format+modifier
combination for its usage. This is performed through a simple intersection of
lists. For example, if a user wants to use Vulkan to render an image to be
displayed on a KMS plane, it must:
- query KMS for the ``IN_FORMATS`` property for the given plane
- query Vulkan for the supported formats for its physical device, making sure
to pass the ``VkImageUsageFlagBits`` and ``VkImageCreateFlagBits``
corresponding to the intended rendering use
- intersect these formats to determine the most appropriate one
- for this format, intersect the lists of supported modifiers for both KMS and
Vulkan, to obtain a final list of acceptable modifiers for that format
This intersection must be performed for all usages. For example, if the user
also wishes to encode the image to a video stream, it must query the media API
it intends to use for encoding for the set of modifiers it supports, and
additionally intersect against this list.
If the intersection of all lists is an empty list, it is not possible to share
buffers in this way, and an alternate strategy must be considered (e.g. using
CPU access routines to copy data between the different uses, with the
corresponding performance cost).
The resulting modifier list is unsorted; the order is not significant.
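A minimal sketch of the intersection step in C (illustrative only; ``out``
is assumed to hold at least as many entries as the shorter input list)::

    #include <stddef.h>
    #include <stdint.h>

    static size_t intersect_modifiers(const uint64_t *a, size_t na,
                                      const uint64_t *b, size_t nb,
                                      uint64_t *out)
    {
        size_t n = 0;

        for (size_t i = 0; i < na; i++)
            for (size_t j = 0; j < nb; j++)
                if (a[i] == b[j]) {
                    out[n++] = a[i];
                    break;
                }

        return n; /* 0 means no zero-copy sharing is possible */
    }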
Allocation
==========
Once userspace has determined an appropriate format, and corresponding list of
acceptable modifiers, it must allocate the buffer. As there is no universal
buffer-allocation interface available at either kernel or userspace level, the
client makes an arbitrary choice of allocation interface such as Vulkan, GBM, or
a media API.
Each allocation request must take, at a minimum: the pixel format, a list of
acceptable modifiers, and the buffer's width and height. Each API may extend
this set of properties in different ways, such as allowing allocation in more
than two dimensions, intended usage patterns, etc.
The component which allocates the buffer will make an arbitrary choice of what
it considers the 'best' modifier within the acceptable list for the requested
allocation, any padding required, and further properties of the underlying
memory buffers such as whether they are stored in system or device-specific
memory, whether or not they are physically contiguous, and their cache mode.
These properties of the memory buffer are not visible to userspace, however the
``dma-heaps`` API is an effort to address this.
After allocation, the client must query the allocator to determine the actual
modifier selected for the buffer, as well as the per-plane offset and stride.
Allocators are not permitted to vary the format in use, to select a modifier not
provided within the acceptable list, nor to vary the pixel dimensions other than
the padding expressed through offset, stride, and size.
Communicating additional constraints, such as alignment of stride or offset,
placement within a particular memory area, etc, is out of scope of dma-buf,
and is not solved by format and modifier tokens.
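As a hedged example, assuming GBM as the allocation interface (the GBM
calls shown are public ``gbm.h`` API; the wrapper itself is hypothetical
and error handling is trimmed)::

    #include <gbm.h>
    #include <stdint.h>
    #include <stdio.h>

    static struct gbm_bo *alloc_and_query(struct gbm_device *gbm,
                                          uint32_t w, uint32_t h,
                                          const uint64_t *mods,
                                          unsigned int n_mods)
    {
        struct gbm_bo *bo =
            gbm_bo_create_with_modifiers(gbm, w, h, GBM_FORMAT_ARGB8888,
                                         mods, n_mods);
        if (!bo)
            return NULL;

        /* The implementation picked one modifier from the list... */
        printf("modifier: 0x%016llx\n",
               (unsigned long long)gbm_bo_get_modifier(bo));

        /* ...and decided the per-plane layout. */
        for (int i = 0; i < gbm_bo_get_plane_count(bo); i++)
            printf("plane %d: offset %u stride %u\n", i,
                   gbm_bo_get_offset(bo, i),
                   gbm_bo_get_stride_for_plane(bo, i));

        return bo;
    }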
Import
======
To use a buffer within a different context, device, or subsystem, the user
passes these parameters (format, modifier, width, height, and per-plane offset
and stride) to an importing API.
Each memory buffer is referred to by a buffer handle, which may be unique or
duplicated within an image. For example, a ``DRM_FORMAT_NV12`` buffer may have
the luma and chroma buffers combined into a single memory buffer by use of the
per-plane offset parameters, or they may be completely separate allocations in
memory. For this reason, each import and allocation API must provide a separate
handle for each plane.
Each kernel subsystem has its own types and interfaces for buffer management.
DRM uses GEM buffer objects (BOs), V4L2 has its own references, etc. These types
are not portable between contexts, processes, devices, or subsystems.
To address this, ``dma-buf`` handles are used as the universal interchange for
buffers. Subsystem-specific operations are used to export native buffer handles
to a ``dma-buf`` file descriptor, and to import those file descriptors into a
native buffer handle. dma-buf file descriptors can be transferred between
contexts, processes, devices, and subsystems.
For example, a Wayland media player may use V4L2 to decode a video frame into a
``DRM_FORMAT_NV12`` buffer. This will result in two memory planes (luma and
chroma) being dequeued by the user from V4L2. These planes are then exported to
one dma-buf file descriptor per plane, these descriptors are then sent along
with the metadata (format, modifier, width, height, per-plane offset and stride)
to the Wayland server. The Wayland server will then import these file
descriptors as an EGLImage for use through EGL/OpenGL (ES), a VkImage for use
through Vulkan, or a KMS framebuffer object; each of these import operations
will take the same metadata and convert the dma-buf file descriptors into their
native buffer handles.
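The V4L2 side of such a flow can be sketched as follows (an editorial
example using the ``VIDIOC_EXPBUF`` ioctl; queue setup and buffer
dequeueing are omitted)::

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Export one plane of a dequeued multi-planar capture buffer as a
     * dma-buf fd, ready to be sent to the compositor together with the
     * format/modifier/offset/stride metadata. */
    static int v4l2_plane_to_dmabuf(int video_fd, unsigned int index,
                                    unsigned int plane)
    {
        struct v4l2_exportbuffer exp;

        memset(&exp, 0, sizeof(exp));
        exp.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        exp.index = index; /* which buffer in the queue */
        exp.plane = plane; /* e.g. luma or chroma plane of NV12 */
        exp.flags = O_CLOEXEC;

        if (ioctl(video_fd, VIDIOC_EXPBUF, &exp))
            return -1;

        return exp.fd;
    }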
Having a non-empty intersection of supported modifiers does not guarantee that
import will succeed into all consumers; they may have constraints beyond those
implied by modifiers which must be satisfied.
Implicit modifiers
==================
The concept of modifiers post-dates all of the subsystems mentioned above. As
such, it has been retrofitted into all of these APIs, and in order to ensure
backwards compatibility, support is needed for drivers and userspace which do
not (yet) support modifiers.
As an example, GBM is used to allocate buffers to be shared between EGL for
rendering and KMS for display. It has two entrypoints for allocating buffers:
``gbm_bo_create`` which only takes the format, width, height, and a usage token,
and ``gbm_bo_create_with_modifiers`` which extends this with a list of modifiers.
In the latter case, the allocation is as discussed above, being provided with a
list of acceptable modifiers that the implementation can choose from (or fail if
it is not possible to allocate within those constraints). In the former case
where modifiers are not provided, the GBM implementation must make its own
choice as to what is likely to be the 'best' layout. Such a choice is entirely
implementation-specific: some will internally use tiled layouts which are not
CPU-accessible if the implementation decides that is a good idea through
whatever heuristic. It is the implementation's responsibility to ensure that
this choice is appropriate.
To support this case where the layout is not known because there is no awareness
of modifiers, a special ``DRM_FORMAT_MOD_INVALID`` token has been defined. This
pseudo-modifier declares that the layout is not known, and that the driver
should use its own logic to determine what the underlying layout may be.
.. note::
``DRM_FORMAT_MOD_INVALID`` is a non-zero value. The modifier value zero is
``DRM_FORMAT_MOD_LINEAR``, which is an explicit guarantee that the image
has the linear layout. Care and attention should be taken to ensure that
zero as a default value is not mixed up with either no modifier or the linear
modifier. Also note that in some APIs the invalid modifier value is specified
with an out-of-band flag, like in ``DRM_IOCTL_MODE_ADDFB2``.
There are four cases where this token may be used:
- during enumeration, an interface may return ``DRM_FORMAT_MOD_INVALID``, either
as the sole member of a modifier list to declare that explicit modifiers are
not supported, or as part of a larger list to declare that implicit modifiers
may be used
- during allocation, a user may supply ``DRM_FORMAT_MOD_INVALID``, either as the
sole member of a modifier list (equivalent to not supplying a modifier list
at all) to declare that explicit modifiers are not supported and must not be
used, or as part of a larger list to declare that an allocation using implicit
modifiers is acceptable
- in a post-allocation query, an implementation may return
``DRM_FORMAT_MOD_INVALID`` as the modifier of the allocated buffer to declare
that the underlying layout is implementation-defined and that an explicit
modifier description is not available; per the above rules, this may only be
returned when the user has included ``DRM_FORMAT_MOD_INVALID`` as part of the
list of acceptable modifiers, or not provided a list
- when importing a buffer, the user may supply ``DRM_FORMAT_MOD_INVALID`` as the
buffer modifier (or not supply a modifier) to indicate that the modifier is
unknown for whatever reason; this is only acceptable when the buffer has
not been allocated with an explicit modifier
It follows from this that for any single buffer, the complete chain of operations
formed by the producer and all the consumers must be either fully implicit or fully
explicit. For example, if a user wishes to allocate a buffer for use between
GPU, display, and media, but the media API does not support modifiers, then the
user **must not** allocate the buffer with explicit modifiers and attempt to
import the buffer into the media API with no modifier; it must either perform the
allocation using implicit modifiers, or allocate the buffer for media use
separately and copy between the two buffers.
As one exception to the above, allocations may be 'upgraded' from implicit
to explicit modifiers. For example, if the buffer is allocated with
``gbm_bo_create`` (taking no modifiers), the user may then query the modifier with
``gbm_bo_get_modifier`` and then use this modifier as an explicit modifier token
if a valid modifier is returned.
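A sketch of this upgrade path, again assuming GBM (the wrapper is
hypothetical, and buffer-object lifetime handling is omitted)::

    #include <gbm.h>
    #include <stdint.h>
    #include <drm_fourcc.h> /* include path may vary with libdrm setup */

    static uint64_t upgrade_to_explicit(struct gbm_device *gbm,
                                        uint32_t w, uint32_t h)
    {
        /* No modifier list: the implementation chooses implicitly. */
        struct gbm_bo *bo = gbm_bo_create(gbm, w, h, GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_RENDERING);

        if (!bo)
            return DRM_FORMAT_MOD_INVALID;

        /* If this returns a valid token, it may be used as an explicit
         * modifier from here on; DRM_FORMAT_MOD_INVALID means the
         * layout stays implicit. */
        return gbm_bo_get_modifier(bo);
    }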
When allocating buffers for exchange between different users and modifiers are
not available, implementations are strongly encouraged to use
``DRM_FORMAT_MOD_LINEAR`` for their allocation, as this is the universal baseline
for exchange. However, it is not guaranteed that this will result in the correct
interpretation of buffer content, as implicit modifier operation may still be
subject to driver-specific heuristics.
Any new users - userspace programs and protocols, kernel subsystems, etc -
wishing to exchange buffers must offer interoperability through dma-buf file
descriptors for memory planes, DRM format tokens to describe the format, DRM
format modifiers to describe the layout in memory, at least width and height for
dimensions, and at least offset and stride for each memory plane.
.. _zwp_linux_dmabuf_v1: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/unstable/linux-dmabuf/linux-dmabuf-unstable-v1.xml
.. _VK_EXT_image_drm_format_modifier: https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_EXT_image_drm_format_modifier.html
.. _EGL_EXT_image_dma_buf_import_modifiers: https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import_modifiers.txt


@ -22,6 +22,7 @@ place where this information is gathered.
unshare
spec_ctrl
accelerators/ocxl
dma-buf-alloc-exchange
ebpf/index
ELF
ioctl/index


@ -1626,10 +1626,9 @@ F: drivers/gpu/drm/arm/display/include/
F: drivers/gpu/drm/arm/display/komeda/
ARM MALI PANFROST DRM DRIVER
M: Boris Brezillon <boris.brezillon@collabora.com>
M: Rob Herring <robh@kernel.org>
M: Tomeu Vizoso <tomeu.vizoso@collabora.com>
R: Steven Price <steven.price@arm.com>
R: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
L: dri-devel@lists.freedesktop.org
S: Supported
T: git git://anongit.freedesktop.org/drm/drm-misc
@ -6132,6 +6131,7 @@ L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
S: Maintained
T: git git://anongit.freedesktop.org/drm/drm-misc
F: Documentation/driver-api/dma-buf.rst
F: Documentation/userspace-api/dma-buf-alloc-exchange.rst
F: drivers/dma-buf/
F: include/linux/*fence.h
F: include/linux/dma-buf.h
@ -7138,6 +7138,7 @@ F: include/drm/gpu_scheduler.h
DRM PANEL DRIVERS
M: Neil Armstrong <neil.armstrong@linaro.org>
R: Jessica Zhang <quic_jesszhan@quicinc.com>
R: Sam Ravnborg <sam@ravnborg.org>
L: dri-devel@lists.freedesktop.org
S: Maintained


@ -79,29 +79,30 @@ static const struct drm_info_list accel_debugfs_list[] = {
#define ACCEL_DEBUGFS_ENTRIES ARRAY_SIZE(accel_debugfs_list)
/**
* accel_debugfs_init() - Initialize debugfs for accel minor
* @minor: Pointer to the drm_minor instance.
* @minor_id: The minor's id
* accel_debugfs_init() - Initialize debugfs for device
* @dev: Pointer to the device instance.
*
* This function initializes the drm minor's debugfs members and creates
* a root directory for the minor in debugfs. It also creates common files
* for accelerators and calls the driver's debugfs init callback.
* This function creates a root directory for the device in debugfs.
*/
void accel_debugfs_init(struct drm_minor *minor, int minor_id)
void accel_debugfs_init(struct drm_device *dev)
{
struct drm_device *dev = minor->dev;
char name[64];
drm_debugfs_dev_init(dev, accel_debugfs_root);
}
INIT_LIST_HEAD(&minor->debugfs_list);
mutex_init(&minor->debugfs_lock);
sprintf(name, "%d", minor_id);
minor->debugfs_root = debugfs_create_dir(name, accel_debugfs_root);
/**
* accel_debugfs_register() - Register debugfs for device
* @dev: Pointer to the device instance.
*
* Creates common files for accelerators.
*/
void accel_debugfs_register(struct drm_device *dev)
{
struct drm_minor *minor = dev->accel;
minor->debugfs_root = dev->debugfs_root;
drm_debugfs_create_files(accel_debugfs_list, ACCEL_DEBUGFS_ENTRIES,
minor->debugfs_root, minor);
if (dev->driver->debugfs_init)
dev->driver->debugfs_init(minor);
dev->debugfs_root, minor);
}
/**


@ -518,78 +518,52 @@ static int ivpu_dev_init(struct ivpu_device *vdev)
lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key);
ret = ivpu_pci_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize PCI device: %d\n", ret);
if (ret)
goto err_xa_destroy;
}
ret = ivpu_irq_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize IRQs: %d\n", ret);
if (ret)
goto err_xa_destroy;
}
/* Init basic HW info based on buttress registers which are accessible before power up */
ret = ivpu_hw_info_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize HW info: %d\n", ret);
if (ret)
goto err_xa_destroy;
}
/* Power up early so the rest of init code can access VPU registers */
ret = ivpu_hw_power_up(vdev);
if (ret) {
ivpu_err(vdev, "Failed to power up HW: %d\n", ret);
if (ret)
goto err_xa_destroy;
}
ret = ivpu_mmu_global_context_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize global MMU context: %d\n", ret);
if (ret)
goto err_power_down;
}
ret = ivpu_mmu_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize MMU device: %d\n", ret);
if (ret)
goto err_mmu_gctx_fini;
ret = ivpu_mmu_reserved_context_init(vdev);
if (ret)
goto err_mmu_gctx_fini;
}
ret = ivpu_fw_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize firmware: %d\n", ret);
goto err_mmu_gctx_fini;
}
if (ret)
goto err_mmu_rctx_fini;
ret = ivpu_ipc_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize IPC: %d\n", ret);
if (ret)
goto err_fw_fini;
}
ret = ivpu_pm_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize PM: %d\n", ret);
goto err_ipc_fini;
}
ivpu_pm_init(vdev);
ret = ivpu_job_done_thread_init(vdev);
if (ret) {
ivpu_err(vdev, "Failed to initialize job done thread: %d\n", ret);
if (ret)
goto err_ipc_fini;
}
ret = ivpu_fw_load(vdev);
if (ret) {
ivpu_err(vdev, "Failed to load firmware: %d\n", ret);
goto err_job_done_thread_fini;
}
ret = ivpu_boot(vdev);
if (ret) {
ivpu_err(vdev, "Failed to boot: %d\n", ret);
if (ret)
goto err_job_done_thread_fini;
}
ivpu_pm_enable(vdev);
@ -601,6 +575,8 @@ err_ipc_fini:
ivpu_ipc_fini(vdev);
err_fw_fini:
ivpu_fw_fini(vdev);
err_mmu_rctx_fini:
ivpu_mmu_reserved_context_fini(vdev);
err_mmu_gctx_fini:
ivpu_mmu_global_context_fini(vdev);
err_power_down:
@ -624,6 +600,7 @@ static void ivpu_dev_fini(struct ivpu_device *vdev)
ivpu_ipc_fini(vdev);
ivpu_fw_fini(vdev);
ivpu_mmu_reserved_context_fini(vdev);
ivpu_mmu_global_context_fini(vdev);
drm_WARN_ON(&vdev->drm, !xa_empty(&vdev->submitted_jobs_xa));
@ -651,10 +628,8 @@ static int ivpu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
pci_set_drvdata(pdev, vdev);
ret = ivpu_dev_init(vdev);
if (ret) {
dev_err(&pdev->dev, "Failed to initialize VPU device: %d\n", ret);
if (ret)
return ret;
}
ret = drm_dev_register(&vdev->drm, 0);
if (ret) {


@ -28,12 +28,13 @@
#define IVPU_HW_37XX 37
#define IVPU_HW_40XX 40
#define IVPU_GLOBAL_CONTEXT_MMU_SSID 0
/* SSID 1 is used by the VPU to represent invalid context */
#define IVPU_USER_CONTEXT_MIN_SSID 2
#define IVPU_USER_CONTEXT_MAX_SSID (IVPU_USER_CONTEXT_MIN_SSID + 63)
#define IVPU_GLOBAL_CONTEXT_MMU_SSID 0
/* SSID 1 is used by the VPU to represent reserved context */
#define IVPU_RESERVED_CONTEXT_MMU_SSID 1
#define IVPU_USER_CONTEXT_MIN_SSID 2
#define IVPU_USER_CONTEXT_MAX_SSID (IVPU_USER_CONTEXT_MIN_SSID + 63)
#define IVPU_NUM_ENGINES 2
#define IVPU_NUM_ENGINES 2
#define IVPU_PLATFORM_SILICON 0
#define IVPU_PLATFORM_SIMICS 2
@ -75,6 +76,11 @@
#define IVPU_WA(wa_name) (vdev->wa.wa_name)
#define IVPU_PRINT_WA(wa_name) do { \
if (IVPU_WA(wa_name)) \
ivpu_dbg(vdev, MISC, "Using WA: " #wa_name "\n"); \
} while (0)
struct ivpu_wa_table {
bool punit_disabled;
bool clear_runtime_mem;
@ -104,6 +110,7 @@ struct ivpu_device {
struct ivpu_pm_info *pm;
struct ivpu_mmu_context gctx;
struct ivpu_mmu_context rctx;
struct xarray context_xa;
struct xa_limit context_xa_limit;
@ -117,6 +124,7 @@ struct ivpu_device {
int jsm;
int tdr;
int reschedule_suspend;
int autosuspend;
} timeout;
};


@ -301,6 +301,8 @@ int ivpu_fw_init(struct ivpu_device *vdev)
if (ret)
goto err_fw_release;
ivpu_fw_load(vdev);
return 0;
err_fw_release:
@ -314,7 +316,7 @@ void ivpu_fw_fini(struct ivpu_device *vdev)
ivpu_fw_release(vdev);
}
int ivpu_fw_load(struct ivpu_device *vdev)
void ivpu_fw_load(struct ivpu_device *vdev)
{
struct ivpu_fw_info *fw = vdev->fw;
u64 image_end_offset = fw->image_load_offset + fw->image_size;
@ -331,8 +333,6 @@ int ivpu_fw_load(struct ivpu_device *vdev)
}
wmb(); /* Flush WC buffers after writing fw->mem */
return 0;
}
static void ivpu_fw_boot_params_print(struct ivpu_device *vdev, struct vpu_boot_params *boot_params)


@ -31,7 +31,7 @@ struct ivpu_fw_info {
int ivpu_fw_init(struct ivpu_device *vdev);
void ivpu_fw_fini(struct ivpu_device *vdev);
int ivpu_fw_load(struct ivpu_device *vdev);
void ivpu_fw_load(struct ivpu_device *vdev);
void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params *bp);
static inline bool ivpu_fw_is_cold_boot(struct ivpu_device *vdev)


@ -104,6 +104,11 @@ static void ivpu_hw_wa_init(struct ivpu_device *vdev)
if (ivpu_device_id(vdev) == PCI_DEVICE_ID_MTL && ivpu_revision(vdev) < 4)
vdev->wa.interrupt_clear_with_0 = true;
IVPU_PRINT_WA(punit_disabled);
IVPU_PRINT_WA(clear_runtime_mem);
IVPU_PRINT_WA(d3hot_after_power_off);
IVPU_PRINT_WA(interrupt_clear_with_0);
}
static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
@ -113,11 +118,13 @@ static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
vdev->timeout.jsm = 50000;
vdev->timeout.tdr = 2000000;
vdev->timeout.reschedule_suspend = 1000;
vdev->timeout.autosuspend = -1;
} else {
vdev->timeout.boot = 1000;
vdev->timeout.jsm = 500;
vdev->timeout.tdr = 2000;
vdev->timeout.reschedule_suspend = 10;
vdev->timeout.autosuspend = 10;
}
}
@ -345,10 +352,10 @@ static int ivpu_boot_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val)
static int ivpu_boot_top_noc_qrenqn_check(struct ivpu_device *vdev, u32 exp_val)
{
u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QREQN);
u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN);
if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) ||
!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val))
if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) ||
!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val))
return -EIO;
return 0;
@ -356,10 +363,10 @@ static int ivpu_boot_top_noc_qrenqn_check(struct ivpu_device *vdev, u32 exp_val)
static int ivpu_boot_top_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val)
{
u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QACCEPTN);
u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QACCEPTN);
if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) ||
!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val))
if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) ||
!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val))
return -EIO;
return 0;
@ -367,10 +374,10 @@ static int ivpu_boot_top_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_va
static int ivpu_boot_top_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val)
{
u32 val = REGV_RD32(MTL_VPU_TOP_NOC_QDENY);
u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QDENY);
if (!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) ||
!REG_TEST_FLD_NUM(MTL_VPU_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val))
if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) ||
!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val))
return -EIO;
return 0;
@ -423,15 +430,15 @@ static int ivpu_boot_host_ss_top_noc_drive(struct ivpu_device *vdev, bool enable
int ret;
u32 val;
val = REGV_RD32(MTL_VPU_TOP_NOC_QREQN);
val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN);
if (enable) {
val = REG_SET_FLD(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, val);
val = REG_SET_FLD(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, val);
val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val);
val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val);
} else {
val = REG_CLR_FLD(MTL_VPU_TOP_NOC_QREQN, CPU_CTRL, val);
val = REG_CLR_FLD(MTL_VPU_TOP_NOC_QREQN, HOSTIF_L2CACHE, val);
val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val);
val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val);
}
REGV_WR32(MTL_VPU_TOP_NOC_QREQN, val);
REGV_WR32(VPU_37XX_TOP_NOC_QREQN, val);
ret = ivpu_boot_top_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0);
if (ret) {
@ -563,17 +570,17 @@ static void ivpu_boot_soc_cpu_boot(struct ivpu_device *vdev)
{
u32 val;
val = REGV_RD32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC);
val = REG_SET_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val);
val = REGV_RD32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC);
val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val);
val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val);
REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val);
REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
val = REG_SET_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
val = REG_CLR_FLD(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
REGV_WR32(MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val);
REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val);
val = vdev->fw->entry_point >> 9;
REGV_WR32(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, val);
@ -777,17 +784,17 @@ static void ivpu_hw_37xx_wdt_disable(struct ivpu_device *vdev)
u32 val;
/* Enable writing and set non-zero WDT value */
REGV_WR32(MTL_VPU_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE);
REGV_WR32(MTL_VPU_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE);
REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE);
REGV_WR32(VPU_37XX_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE);
/* Enable writing and disable watchdog timer */
REGV_WR32(MTL_VPU_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE);
REGV_WR32(MTL_VPU_CPU_SS_TIM_WDOG_EN, 0);
REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE);
REGV_WR32(VPU_37XX_CPU_SS_TIM_WDOG_EN, 0);
/* Now clear the timeout interrupt */
val = REGV_RD32(MTL_VPU_CPU_SS_TIM_GEN_CONFIG);
val = REG_CLR_FLD(MTL_VPU_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val);
REGV_WR32(MTL_VPU_CPU_SS_TIM_GEN_CONFIG, val);
val = REGV_RD32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG);
val = REG_CLR_FLD(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val);
REGV_WR32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, val);
}
static u32 ivpu_hw_37xx_pll_to_freq(u32 ratio, u32 config)
@ -834,10 +841,10 @@ static u32 ivpu_hw_37xx_reg_telemetry_enable_get(struct ivpu_device *vdev)
static void ivpu_hw_37xx_reg_db_set(struct ivpu_device *vdev, u32 db_id)
{
u32 reg_stride = MTL_VPU_CPU_SS_DOORBELL_1 - MTL_VPU_CPU_SS_DOORBELL_0;
u32 val = REG_FLD(MTL_VPU_CPU_SS_DOORBELL_0, SET);
u32 reg_stride = VPU_37XX_CPU_SS_DOORBELL_1 - VPU_37XX_CPU_SS_DOORBELL_0;
u32 val = REG_FLD(VPU_37XX_CPU_SS_DOORBELL_0, SET);
REGV_WR32I(MTL_VPU_CPU_SS_DOORBELL_0, reg_stride, db_id, val);
REGV_WR32I(VPU_37XX_CPU_SS_DOORBELL_0, reg_stride, db_id, val);
}
static u32 ivpu_hw_37xx_reg_ipc_rx_addr_get(struct ivpu_device *vdev)
@ -854,7 +861,7 @@ static u32 ivpu_hw_37xx_reg_ipc_rx_count_get(struct ivpu_device *vdev)
static void ivpu_hw_37xx_reg_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr)
{
REGV_WR32(MTL_VPU_CPU_SS_TIM_IPC_FIFO, vpu_addr);
REGV_WR32(VPU_37XX_CPU_SS_TIM_IPC_FIFO, vpu_addr);
}
static void ivpu_hw_37xx_irq_clear(struct ivpu_device *vdev)

View File

@ -3,70 +3,70 @@
* Copyright (C) 2020-2023 Intel Corporation
*/
#ifndef __IVPU_HW_MTL_REG_H__
#define __IVPU_HW_MTL_REG_H__
#ifndef __IVPU_HW_37XX_REG_H__
#define __IVPU_HW_37XX_REG_H__
#include <linux/bits.h>
#define VPU_37XX_BUTTRESS_INTERRUPT_TYPE 0x00000000u
#define VPU_37XX_BUTTRESS_INTERRUPT_TYPE 0x00000000u
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT 0x00000004u
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT 0x00000004u
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_FREQ_CHANGE_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_ATS_ERR_MASK BIT_MASK(1)
#define VPU_37XX_BUTTRESS_INTERRUPT_STAT_UFI_ERR_MASK BIT_MASK(2)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0 0x00000008u
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0 0x00000008u
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MIN_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD0_MAX_RATIO_MASK GENMASK(31, 16)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1 0x0000000cu
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1 0x0000000cu
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD1_EPP_MASK GENMASK(31, 16)
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2 0x00000010u
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2 0x00000010u
#define VPU_37XX_BUTTRESS_WP_REQ_PAYLOAD2_CONFIG_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_WP_REQ_CMD 0x00000014u
#define VPU_37XX_BUTTRESS_WP_REQ_CMD 0x00000014u
#define VPU_37XX_BUTTRESS_WP_REQ_CMD_SEND_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_WP_DOWNLOAD 0x00000018u
#define VPU_37XX_BUTTRESS_WP_DOWNLOAD_TARGET_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_CURRENT_PLL 0x0000001cu
#define VPU_37XX_BUTTRESS_CURRENT_PLL_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_CURRENT_PLL_RATIO_MASK GENMASK(15, 0)
#define VPU_37XX_BUTTRESS_PLL_ENABLE 0x00000020u
#define VPU_37XX_BUTTRESS_PLL_ENABLE 0x00000020u
#define VPU_37XX_BUTTRESS_FMIN_FUSE 0x00000024u
#define VPU_37XX_BUTTRESS_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0)
#define VPU_37XX_BUTTRESS_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8)
#define VPU_37XX_BUTTRESS_FMIN_FUSE 0x00000024u
#define VPU_37XX_BUTTRESS_FMIN_FUSE_MIN_RATIO_MASK GENMASK(7, 0)
#define VPU_37XX_BUTTRESS_FMIN_FUSE_PN_RATIO_MASK GENMASK(15, 8)
#define VPU_37XX_BUTTRESS_FMAX_FUSE 0x00000028u
#define VPU_37XX_BUTTRESS_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0)
#define VPU_37XX_BUTTRESS_FMAX_FUSE 0x00000028u
#define VPU_37XX_BUTTRESS_FMAX_FUSE_MAX_RATIO_MASK GENMASK(7, 0)
#define VPU_37XX_BUTTRESS_TILE_FUSE 0x0000002cu
#define VPU_37XX_BUTTRESS_TILE_FUSE 0x0000002cu
#define VPU_37XX_BUTTRESS_TILE_FUSE_VALID_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_TILE_FUSE_SKU_MASK GENMASK(3, 2)
#define VPU_37XX_BUTTRESS_TILE_FUSE_SKU_MASK GENMASK(3, 2)
#define VPU_37XX_BUTTRESS_LOCAL_INT_MASK 0x00000030u
#define VPU_37XX_BUTTRESS_GLOBAL_INT_MASK 0x00000034u
#define VPU_37XX_BUTTRESS_LOCAL_INT_MASK 0x00000030u
#define VPU_37XX_BUTTRESS_GLOBAL_INT_MASK 0x00000034u
#define VPU_37XX_BUTTRESS_PLL_STATUS 0x00000040u
#define VPU_37XX_BUTTRESS_PLL_STATUS 0x00000040u
#define VPU_37XX_BUTTRESS_PLL_STATUS_LOCK_MASK BIT_MASK(1)
#define VPU_37XX_BUTTRESS_VPU_STATUS 0x00000044u
#define VPU_37XX_BUTTRESS_VPU_STATUS 0x00000044u
#define VPU_37XX_BUTTRESS_VPU_STATUS_READY_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_STATUS_IDLE_MASK BIT_MASK(1)
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL 0x00000060u
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_I3_MASK BIT_MASK(2)
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL 0x00000060u
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_INPROGRESS_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_D0I3_CONTROL_I3_MASK BIT_MASK(2)
#define VPU_37XX_BUTTRESS_VPU_IP_RESET 0x00000050u
#define VPU_37XX_BUTTRESS_VPU_IP_RESET_TRIGGER_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_IP_RESET_TRIGGER_MASK BIT_MASK(0)
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_OFFSET 0x00000080u
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_SIZE 0x00000084u
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_SIZE 0x00000084u
#define VPU_37XX_BUTTRESS_VPU_TELEMETRY_ENABLE 0x00000088u
#define VPU_37XX_BUTTRESS_ATS_ERR_LOG_0 0x000000a0u
@ -74,9 +74,9 @@
#define VPU_37XX_BUTTRESS_ATS_ERR_CLEAR 0x000000a8u
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG 0x000000b0u
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_CQ_ID_MASK GENMASK(11, 0)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_AXI_ID_MASK GENMASK(19, 12)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_OPCODE_MASK GENMASK(24, 20)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_CQ_ID_MASK GENMASK(11, 0)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_AXI_ID_MASK GENMASK(19, 12)
#define VPU_37XX_BUTTRESS_UFI_ERR_LOG_OPCODE_MASK GENMASK(24, 20)
#define VPU_37XX_BUTTRESS_UFI_ERR_CLEAR 0x000000b4u
@ -113,17 +113,17 @@
#define VPU_37XX_HOST_SS_NOC_QDENY 0x0000015cu
#define VPU_37XX_HOST_SS_NOC_QDENY_TOP_SOCMMIO_MASK BIT_MASK(0)
#define MTL_VPU_TOP_NOC_QREQN 0x00000160u
#define MTL_VPU_TOP_NOC_QREQN_CPU_CTRL_MASK BIT_MASK(0)
#define MTL_VPU_TOP_NOC_QREQN_HOSTIF_L2CACHE_MASK BIT_MASK(1)
#define VPU_37XX_TOP_NOC_QREQN 0x00000160u
#define VPU_37XX_TOP_NOC_QREQN_CPU_CTRL_MASK BIT_MASK(0)
#define VPU_37XX_TOP_NOC_QREQN_HOSTIF_L2CACHE_MASK BIT_MASK(1)
#define MTL_VPU_TOP_NOC_QACCEPTN 0x00000164u
#define MTL_VPU_TOP_NOC_QACCEPTN_CPU_CTRL_MASK BIT_MASK(0)
#define MTL_VPU_TOP_NOC_QACCEPTN_HOSTIF_L2CACHE_MASK BIT_MASK(1)
#define VPU_37XX_TOP_NOC_QACCEPTN 0x00000164u
#define VPU_37XX_TOP_NOC_QACCEPTN_CPU_CTRL_MASK BIT_MASK(0)
#define VPU_37XX_TOP_NOC_QACCEPTN_HOSTIF_L2CACHE_MASK BIT_MASK(1)
#define MTL_VPU_TOP_NOC_QDENY 0x00000168u
#define MTL_VPU_TOP_NOC_QDENY_CPU_CTRL_MASK BIT_MASK(0)
#define MTL_VPU_TOP_NOC_QDENY_HOSTIF_L2CACHE_MASK BIT_MASK(1)
#define VPU_37XX_TOP_NOC_QDENY 0x00000168u
#define VPU_37XX_TOP_NOC_QDENY_CPU_CTRL_MASK BIT_MASK(0)
#define VPU_37XX_TOP_NOC_QDENY_HOSTIF_L2CACHE_MASK BIT_MASK(1)
#define VPU_37XX_HOST_SS_FW_SOC_IRQ_EN 0x00000170u
#define VPU_37XX_HOST_SS_FW_SOC_IRQ_EN_CSS_ROM_CMX_MASK BIT_MASK(0)
@ -140,9 +140,9 @@
#define VPU_37XX_HOST_SS_ICB_STATUS_0_TIMER_2_INT_MASK BIT_MASK(2)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_TIMER_3_INT_MASK BIT_MASK(3)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_HOST_IPC_FIFO_INT_MASK BIT_MASK(4)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_0_INT_MASK BIT_MASK(5)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_1_INT_MASK BIT_MASK(6)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_2_INT_MASK BIT_MASK(7)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_0_INT_MASK BIT_MASK(5)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_1_INT_MASK BIT_MASK(6)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_MMU_IRQ_2_INT_MASK BIT_MASK(7)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_NOC_FIREWALL_INT_MASK BIT_MASK(8)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_0_INT_MASK BIT_MASK(30)
#define VPU_37XX_HOST_SS_ICB_STATUS_0_CPU_INT_REDIRECT_1_INT_MASK BIT_MASK(31)
@ -164,14 +164,14 @@
#define VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT_FILL_LEVEL_MASK GENMASK(23, 16)
#define VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT_RSVD0_MASK GENMASK(31, 24)
#define VPU_37XX_HOST_SS_AON_PWR_ISO_EN0 0x00030020u
#define VPU_37XX_HOST_SS_AON_PWR_ISO_EN0 0x00030020u
#define VPU_37XX_HOST_SS_AON_PWR_ISO_EN0_MSS_CPU_MASK BIT_MASK(3)
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0 0x00030024u
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0_MSS_CPU_MASK BIT_MASK(3)
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0_MSS_CPU_MASK BIT_MASK(3)
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0 0x00030028u
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0_MSS_CPU_MASK BIT_MASK(3)
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0_MSS_CPU_MASK BIT_MASK(3)
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_STATUS0 0x0003002cu
#define VPU_37XX_HOST_SS_AON_PWR_ISLAND_STATUS0_MSS_CPU_MASK BIT_MASK(3)
@ -187,47 +187,14 @@
#define VPU_37XX_HOST_SS_LOADING_ADDRESS_LO_IOSF_RS_ID_MASK GENMASK(2, 1)
#define VPU_37XX_HOST_SS_LOADING_ADDRESS_LO_IMAGE_LOCATION_MASK GENMASK(31, 3)
#define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR 0x00082020u
#define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR 0x00082020u
#define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR_FINAL_PLL_FREQ_MASK GENMASK(15, 0)
#define VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR_CONFIG_ID_MASK GENMASK(31, 16)
#define VPU_37XX_HOST_MMU_IDR0 0x00200000u
#define VPU_37XX_HOST_MMU_IDR1 0x00200004u
#define VPU_37XX_HOST_MMU_IDR3 0x0020000cu
#define VPU_37XX_HOST_MMU_IDR5 0x00200014u
#define VPU_37XX_HOST_MMU_CR0 0x00200020u
#define VPU_37XX_HOST_MMU_CR0ACK 0x00200024u
#define VPU_37XX_HOST_MMU_CR1 0x00200028u
#define VPU_37XX_HOST_MMU_CR2 0x0020002cu
#define VPU_37XX_HOST_MMU_IRQ_CTRL 0x00200050u
#define VPU_37XX_HOST_MMU_IRQ_CTRLACK 0x00200054u
#define VPU_37XX_HOST_MMU_GERROR 0x00200060u
#define VPU_37XX_HOST_MMU_GERROR_CMDQ_MASK BIT_MASK(0)
#define VPU_37XX_HOST_MMU_GERROR_EVTQ_ABT_MASK BIT_MASK(2)
#define VPU_37XX_HOST_MMU_GERROR_PRIQ_ABT_MASK BIT_MASK(3)
#define VPU_37XX_HOST_MMU_GERROR_MSI_CMDQ_ABT_MASK BIT_MASK(4)
#define VPU_37XX_HOST_MMU_GERROR_MSI_EVTQ_ABT_MASK BIT_MASK(5)
#define VPU_37XX_HOST_MMU_GERROR_MSI_PRIQ_ABT_MASK BIT_MASK(6)
#define VPU_37XX_HOST_MMU_GERROR_MSI_ABT_MASK BIT_MASK(7)
#define VPU_37XX_HOST_MMU_GERRORN 0x00200064u
#define VPU_37XX_HOST_MMU_STRTAB_BASE 0x00200080u
#define VPU_37XX_HOST_MMU_STRTAB_BASE_CFG 0x00200088u
#define VPU_37XX_HOST_MMU_CMDQ_BASE 0x00200090u
#define VPU_37XX_HOST_MMU_CMDQ_PROD 0x00200098u
#define VPU_37XX_HOST_MMU_CMDQ_CONS 0x0020009cu
#define VPU_37XX_HOST_MMU_EVTQ_BASE 0x002000a0u
#define VPU_37XX_HOST_MMU_EVTQ_PROD 0x002000a8u
#define VPU_37XX_HOST_MMU_EVTQ_CONS 0x002000acu
#define VPU_37XX_HOST_MMU_EVTQ_PROD_SEC (0x002000a8u + SZ_64K)
#define VPU_37XX_HOST_MMU_EVTQ_CONS_SEC (0x002000acu + SZ_64K)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES 0x00360000u
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_CACHE_OVERRIDE_EN_MASK BIT_MASK(0)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AWCACHE_OVERRIDE_MASK BIT_MASK(1)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_ARCACHE_OVERRIDE_MASK BIT_MASK(2)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AWCACHE_OVERRIDE_MASK BIT_MASK(1)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_ARCACHE_OVERRIDE_MASK BIT_MASK(2)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_NOSNOOP_OVERRIDE_EN_MASK BIT_MASK(3)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AW_NOSNOOP_OVERRIDE_MASK BIT_MASK(4)
#define VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES_AR_NOSNOOP_OVERRIDE_MASK BIT_MASK(5)
@ -246,36 +213,36 @@
#define VPU_37XX_HOST_IF_TBU_MMUSSIDV_TBU4_AWMMUSSIDV_MASK BIT_MASK(8)
#define VPU_37XX_HOST_IF_TBU_MMUSSIDV_TBU4_ARMMUSSIDV_MASK BIT_MASK(9)
#define MTL_VPU_CPU_SS_DSU_LEON_RT_BASE 0x04000000u
#define MTL_VPU_CPU_SS_DSU_LEON_RT_DSU_CTRL 0x04000000u
#define MTL_VPU_CPU_SS_DSU_LEON_RT_PC_REG 0x04400010u
#define MTL_VPU_CPU_SS_DSU_LEON_RT_NPC_REG 0x04400014u
#define MTL_VPU_CPU_SS_DSU_LEON_RT_DSU_TRAP_REG 0x04400020u
#define VPU_37XX_CPU_SS_DSU_LEON_RT_BASE 0x04000000u
#define VPU_37XX_CPU_SS_DSU_LEON_RT_DSU_CTRL 0x04000000u
#define VPU_37XX_CPU_SS_DSU_LEON_RT_PC_REG 0x04400010u
#define VPU_37XX_CPU_SS_DSU_LEON_RT_NPC_REG 0x04400014u
#define VPU_37XX_CPU_SS_DSU_LEON_RT_DSU_TRAP_REG 0x04400020u
#define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET 0x06010004u
#define MTL_VPU_CPU_SS_MSSCPU_CPR_CLK_SET_CPU_DSU_MASK BIT_MASK(1)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_CLK_SET 0x06010004u
#define VPU_37XX_CPU_SS_MSSCPU_CPR_CLK_SET_CPU_DSU_MASK BIT_MASK(1)
#define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR 0x06010018u
#define MTL_VPU_CPU_SS_MSSCPU_CPR_RST_CLR_CPU_DSU_MASK BIT_MASK(1)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_RST_CLR 0x06010018u
#define VPU_37XX_CPU_SS_MSSCPU_CPR_RST_CLR_CPU_DSU_MASK BIT_MASK(1)
#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC 0x06010040u
#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN0_MASK BIT_MASK(0)
#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME0_MASK BIT_MASK(1)
#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN1_MASK BIT_MASK(2)
#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME1_MASK BIT_MASK(3)
#define MTL_VPU_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTVEC_MASK GENMASK(31, 4)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC 0x06010040u
#define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN0_MASK BIT_MASK(0)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME0_MASK BIT_MASK(1)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTRUN1_MASK BIT_MASK(2)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RESUME1_MASK BIT_MASK(3)
#define VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC_IRQI_RSTVEC_MASK GENMASK(31, 4)
#define MTL_VPU_CPU_SS_TIM_WATCHDOG 0x0602009cu
#define MTL_VPU_CPU_SS_TIM_WDOG_EN 0x060200a4u
#define MTL_VPU_CPU_SS_TIM_SAFE 0x060200a8u
#define MTL_VPU_CPU_SS_TIM_IPC_FIFO 0x060200f0u
#define VPU_37XX_CPU_SS_TIM_WATCHDOG 0x0602009cu
#define VPU_37XX_CPU_SS_TIM_WDOG_EN 0x060200a4u
#define VPU_37XX_CPU_SS_TIM_SAFE 0x060200a8u
#define VPU_37XX_CPU_SS_TIM_IPC_FIFO 0x060200f0u
#define MTL_VPU_CPU_SS_TIM_GEN_CONFIG 0x06021008u
#define MTL_VPU_CPU_SS_TIM_GEN_CONFIG_WDOG_TO_INT_CLR_MASK BIT_MASK(9)
#define VPU_37XX_CPU_SS_TIM_GEN_CONFIG 0x06021008u
#define VPU_37XX_CPU_SS_TIM_GEN_CONFIG_WDOG_TO_INT_CLR_MASK BIT_MASK(9)
#define MTL_VPU_CPU_SS_DOORBELL_0 0x06300000u
#define MTL_VPU_CPU_SS_DOORBELL_0_SET_MASK BIT_MASK(0)
#define VPU_37XX_CPU_SS_DOORBELL_0 0x06300000u
#define VPU_37XX_CPU_SS_DOORBELL_0_SET_MASK BIT_MASK(0)
#define MTL_VPU_CPU_SS_DOORBELL_1 0x06301000u
#define VPU_37XX_CPU_SS_DOORBELL_1 0x06301000u
#endif /* __IVPU_HW_MTL_REG_H__ */
#endif /* __IVPU_HW_37XX_REG_H__ */
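All of these BIT_MASK()/GENMASK() values are meant to be consumed with the <linux/bitfield.h> helpers rather than open-coded shifts. A minimal sketch of packing the two WORKPOINT_CONFIG_MIRROR fields defined above (the function name is hypothetical):

#include <linux/bits.h>
#include <linux/bitfield.h>

static u32 vpu_workpoint_pack(u16 pll_freq, u16 config_id)
{
	/* Compose a register value from its two fields; FIELD_GET() does the inverse. */
	return FIELD_PREP(VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR_FINAL_PLL_FREQ_MASK, pll_freq) |
	       FIELD_PREP(VPU_37XX_HOST_SS_WORKPOINT_CONFIG_MIRROR_CONFIG_ID_MASK, config_id);
}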


@@ -126,6 +126,10 @@ static void ivpu_hw_wa_init(struct ivpu_device *vdev)
if (ivpu_hw_gen(vdev) == IVPU_HW_40XX)
vdev->wa.disable_clock_relinquish = true;
IVPU_PRINT_WA(punit_disabled);
IVPU_PRINT_WA(clear_runtime_mem);
IVPU_PRINT_WA(disable_clock_relinquish);
}
static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
@@ -135,16 +139,19 @@ static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
vdev->timeout.jsm = 50000;
vdev->timeout.tdr = 2000000;
vdev->timeout.reschedule_suspend = 1000;
vdev->timeout.autosuspend = -1;
} else if (ivpu_is_simics(vdev)) {
vdev->timeout.boot = 50;
vdev->timeout.jsm = 500;
vdev->timeout.tdr = 10000;
vdev->timeout.reschedule_suspend = 10;
vdev->timeout.autosuspend = -1;
} else {
vdev->timeout.boot = 1000;
vdev->timeout.jsm = 500;
vdev->timeout.tdr = 2000;
vdev->timeout.reschedule_suspend = 10;
vdev->timeout.autosuspend = 10;
}
}
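The new autosuspend member is the runtime-PM delay in milliseconds per platform type, with -1 on FPGA and Simics meaning autosuspend stays off; pm_runtime_set_autosuspend_delay() treats a negative delay as "never autosuspend", so the value can be handed through unchanged, as the ivpu_pm_init() hunk further down does:

	/* Sketch of the consuming side; mirrors the ivpu_pm.c hunk below. */
	pm_runtime_use_autosuspend(dev);
	pm_runtime_set_autosuspend_delay(dev, vdev->timeout.autosuspend);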


@@ -426,15 +426,20 @@ int ivpu_ipc_irq_handler(struct ivpu_device *vdev)
int ivpu_ipc_init(struct ivpu_device *vdev)
{
struct ivpu_ipc_info *ipc = vdev->ipc;
int ret = -ENOMEM;
int ret;
ipc->mem_tx = ivpu_bo_alloc_internal(vdev, 0, SZ_16K, DRM_IVPU_BO_WC);
if (!ipc->mem_tx)
return ret;
if (!ipc->mem_tx) {
ivpu_err(vdev, "Failed to allocate mem_tx\n");
return -ENOMEM;
}
ipc->mem_rx = ivpu_bo_alloc_internal(vdev, 0, SZ_16K, DRM_IVPU_BO_WC);
if (!ipc->mem_rx)
if (!ipc->mem_rx) {
ivpu_err(vdev, "Failed to allocate mem_rx\n");
ret = -ENOMEM;
goto err_free_tx;
}
ipc->mm_tx = devm_gen_pool_create(vdev->drm.dev, __ffs(IVPU_IPC_ALIGNMENT),
-1, "TX_IPC_JSM");


@@ -7,12 +7,45 @@
#include <linux/highmem.h>
#include "ivpu_drv.h"
#include "ivpu_hw_37xx_reg.h"
#include "ivpu_hw_reg_io.h"
#include "ivpu_mmu.h"
#include "ivpu_mmu_context.h"
#include "ivpu_pm.h"
#define IVPU_MMU_REG_IDR0 0x00200000u
#define IVPU_MMU_REG_IDR1 0x00200004u
#define IVPU_MMU_REG_IDR3 0x0020000cu
#define IVPU_MMU_REG_IDR5 0x00200014u
#define IVPU_MMU_REG_CR0 0x00200020u
#define IVPU_MMU_REG_CR0ACK 0x00200024u
#define IVPU_MMU_REG_CR1 0x00200028u
#define IVPU_MMU_REG_CR2 0x0020002cu
#define IVPU_MMU_REG_IRQ_CTRL 0x00200050u
#define IVPU_MMU_REG_IRQ_CTRLACK 0x00200054u
#define IVPU_MMU_REG_GERROR 0x00200060u
#define IVPU_MMU_REG_GERROR_CMDQ_MASK BIT_MASK(0)
#define IVPU_MMU_REG_GERROR_EVTQ_ABT_MASK BIT_MASK(2)
#define IVPU_MMU_REG_GERROR_PRIQ_ABT_MASK BIT_MASK(3)
#define IVPU_MMU_REG_GERROR_MSI_CMDQ_ABT_MASK BIT_MASK(4)
#define IVPU_MMU_REG_GERROR_MSI_EVTQ_ABT_MASK BIT_MASK(5)
#define IVPU_MMU_REG_GERROR_MSI_PRIQ_ABT_MASK BIT_MASK(6)
#define IVPU_MMU_REG_GERROR_MSI_ABT_MASK BIT_MASK(7)
#define IVPU_MMU_REG_GERRORN 0x00200064u
#define IVPU_MMU_REG_STRTAB_BASE 0x00200080u
#define IVPU_MMU_REG_STRTAB_BASE_CFG 0x00200088u
#define IVPU_MMU_REG_CMDQ_BASE 0x00200090u
#define IVPU_MMU_REG_CMDQ_PROD 0x00200098u
#define IVPU_MMU_REG_CMDQ_CONS 0x0020009cu
#define IVPU_MMU_REG_EVTQ_BASE 0x002000a0u
#define IVPU_MMU_REG_EVTQ_PROD 0x002000a8u
#define IVPU_MMU_REG_EVTQ_CONS 0x002000acu
#define IVPU_MMU_REG_EVTQ_PROD_SEC (0x002000a8u + SZ_64K)
#define IVPU_MMU_REG_EVTQ_CONS_SEC (0x002000acu + SZ_64K)
#define IVPU_MMU_REG_CMDQ_CONS_ERR_MASK GENMASK(30, 24)
#define IVPU_MMU_IDR0_REF 0x080f3e0f
#define IVPU_MMU_IDR0_REF_SIMICS 0x080f3e1f
#define IVPU_MMU_IDR1_REF 0x0e739d18
@@ -186,13 +219,13 @@
#define IVPU_MMU_REG_TIMEOUT_US (10 * USEC_PER_MSEC)
#define IVPU_MMU_QUEUE_TIMEOUT_US (100 * USEC_PER_MSEC)
#define IVPU_MMU_GERROR_ERR_MASK ((REG_FLD(VPU_37XX_HOST_MMU_GERROR, CMDQ)) | \
(REG_FLD(VPU_37XX_HOST_MMU_GERROR, EVTQ_ABT)) | \
(REG_FLD(VPU_37XX_HOST_MMU_GERROR, PRIQ_ABT)) | \
(REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_CMDQ_ABT)) | \
(REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_EVTQ_ABT)) | \
(REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_PRIQ_ABT)) | \
(REG_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_ABT)))
#define IVPU_MMU_GERROR_ERR_MASK ((REG_FLD(IVPU_MMU_REG_GERROR, CMDQ)) | \
(REG_FLD(IVPU_MMU_REG_GERROR, EVTQ_ABT)) | \
(REG_FLD(IVPU_MMU_REG_GERROR, PRIQ_ABT)) | \
(REG_FLD(IVPU_MMU_REG_GERROR, MSI_CMDQ_ABT)) | \
(REG_FLD(IVPU_MMU_REG_GERROR, MSI_EVTQ_ABT)) | \
(REG_FLD(IVPU_MMU_REG_GERROR, MSI_PRIQ_ABT)) | \
(REG_FLD(IVPU_MMU_REG_GERROR, MSI_ABT)))
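REG_FLD() comes from ivpu_hw_reg_io.h; judging by the mask names above, it token-pastes the register and field names into the matching *_MASK define. An assumed sketch of the expansion:

/* Assumption: REG_FLD(IVPU_MMU_REG_GERROR, CMDQ) -> IVPU_MMU_REG_GERROR_CMDQ_MASK */
#define REG_FLD(REG, FLD)	(REG##_##FLD##_MASK)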
static char *ivpu_mmu_event_to_str(u32 cmd)
{
@@ -250,15 +283,15 @@ static void ivpu_mmu_config_check(struct ivpu_device *vdev)
else
val_ref = IVPU_MMU_IDR0_REF;
val = REGV_RD32(VPU_37XX_HOST_MMU_IDR0);
val = REGV_RD32(IVPU_MMU_REG_IDR0);
if (val != val_ref)
ivpu_dbg(vdev, MMU, "IDR0 0x%x != IDR0_REF 0x%x\n", val, val_ref);
val = REGV_RD32(VPU_37XX_HOST_MMU_IDR1);
val = REGV_RD32(IVPU_MMU_REG_IDR1);
if (val != IVPU_MMU_IDR1_REF)
ivpu_dbg(vdev, MMU, "IDR1 0x%x != IDR1_REF 0x%x\n", val, IVPU_MMU_IDR1_REF);
val = REGV_RD32(VPU_37XX_HOST_MMU_IDR3);
val = REGV_RD32(IVPU_MMU_REG_IDR3);
if (val != IVPU_MMU_IDR3_REF)
ivpu_dbg(vdev, MMU, "IDR3 0x%x != IDR3_REF 0x%x\n", val, IVPU_MMU_IDR3_REF);
@@ -269,7 +302,7 @@ static void ivpu_mmu_config_check(struct ivpu_device *vdev)
else
val_ref = IVPU_MMU_IDR5_REF;
val = REGV_RD32(VPU_37XX_HOST_MMU_IDR5);
val = REGV_RD32(IVPU_MMU_REG_IDR5);
if (val != val_ref)
ivpu_dbg(vdev, MMU, "IDR5 0x%x != IDR5_REF 0x%x\n", val, val_ref);
}
@@ -396,18 +429,18 @@ static int ivpu_mmu_irqs_setup(struct ivpu_device *vdev)
u32 irq_ctrl = IVPU_MMU_IRQ_EVTQ_EN | IVPU_MMU_IRQ_GERROR_EN;
int ret;
ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_IRQ_CTRL, 0);
ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_IRQ_CTRL, 0);
if (ret)
return ret;
return ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_IRQ_CTRL, irq_ctrl);
return ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_IRQ_CTRL, irq_ctrl);
}
static int ivpu_mmu_cmdq_wait_for_cons(struct ivpu_device *vdev)
{
struct ivpu_mmu_queue *cmdq = &vdev->mmu->cmdq;
return REGV_POLL(VPU_37XX_HOST_MMU_CMDQ_CONS, cmdq->cons, (cmdq->prod == cmdq->cons),
return REGV_POLL(IVPU_MMU_REG_CMDQ_CONS, cmdq->cons, (cmdq->prod == cmdq->cons),
IVPU_MMU_QUEUE_TIMEOUT_US);
}
@@ -447,7 +480,7 @@ static int ivpu_mmu_cmdq_sync(struct ivpu_device *vdev)
return ret;
clflush_cache_range(q->base, IVPU_MMU_CMDQ_SIZE);
REGV_WR32(VPU_37XX_HOST_MMU_CMDQ_PROD, q->prod);
REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, q->prod);
ret = ivpu_mmu_cmdq_wait_for_cons(vdev);
if (ret)
@@ -495,7 +528,7 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
mmu->evtq.prod = 0;
mmu->evtq.cons = 0;
ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, 0);
ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, 0);
if (ret)
return ret;
@@ -505,17 +538,17 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
FIELD_PREP(IVPU_MMU_CR1_QUEUE_SH, IVPU_MMU_SH_ISH) |
FIELD_PREP(IVPU_MMU_CR1_QUEUE_OC, IVPU_MMU_CACHE_WB) |
FIELD_PREP(IVPU_MMU_CR1_QUEUE_IC, IVPU_MMU_CACHE_WB);
REGV_WR32(VPU_37XX_HOST_MMU_CR1, val);
REGV_WR32(IVPU_MMU_REG_CR1, val);
REGV_WR64(VPU_37XX_HOST_MMU_STRTAB_BASE, mmu->strtab.dma_q);
REGV_WR32(VPU_37XX_HOST_MMU_STRTAB_BASE_CFG, mmu->strtab.base_cfg);
REGV_WR64(IVPU_MMU_REG_STRTAB_BASE, mmu->strtab.dma_q);
REGV_WR32(IVPU_MMU_REG_STRTAB_BASE_CFG, mmu->strtab.base_cfg);
REGV_WR64(VPU_37XX_HOST_MMU_CMDQ_BASE, mmu->cmdq.dma_q);
REGV_WR32(VPU_37XX_HOST_MMU_CMDQ_PROD, 0);
REGV_WR32(VPU_37XX_HOST_MMU_CMDQ_CONS, 0);
REGV_WR64(IVPU_MMU_REG_CMDQ_BASE, mmu->cmdq.dma_q);
REGV_WR32(IVPU_MMU_REG_CMDQ_PROD, 0);
REGV_WR32(IVPU_MMU_REG_CMDQ_CONS, 0);
val = IVPU_MMU_CR0_CMDQEN;
ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val);
ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val);
if (ret)
return ret;
@@ -531,17 +564,17 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
if (ret)
return ret;
REGV_WR64(VPU_37XX_HOST_MMU_EVTQ_BASE, mmu->evtq.dma_q);
REGV_WR32(VPU_37XX_HOST_MMU_EVTQ_PROD_SEC, 0);
REGV_WR32(VPU_37XX_HOST_MMU_EVTQ_CONS_SEC, 0);
REGV_WR64(IVPU_MMU_REG_EVTQ_BASE, mmu->evtq.dma_q);
REGV_WR32(IVPU_MMU_REG_EVTQ_PROD_SEC, 0);
REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, 0);
val |= IVPU_MMU_CR0_EVTQEN;
ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val);
ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val);
if (ret)
return ret;
val |= IVPU_MMU_CR0_ATSCHK;
ret = ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val);
ret = ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val);
if (ret)
return ret;
@@ -550,7 +583,7 @@ static int ivpu_mmu_reset(struct ivpu_device *vdev)
return ret;
val |= IVPU_MMU_CR0_SMMUEN;
return ivpu_mmu_reg_write(vdev, VPU_37XX_HOST_MMU_CR0, val);
return ivpu_mmu_reg_write(vdev, IVPU_MMU_REG_CR0, val);
}
static void ivpu_mmu_strtab_link_cd(struct ivpu_device *vdev, u32 sid)
@@ -801,14 +834,14 @@ static u32 *ivpu_mmu_get_event(struct ivpu_device *vdev)
u32 idx = IVPU_MMU_Q_IDX(evtq->cons);
u32 *evt = evtq->base + (idx * IVPU_MMU_EVTQ_CMD_SIZE);
evtq->prod = REGV_RD32(VPU_37XX_HOST_MMU_EVTQ_PROD_SEC);
evtq->prod = REGV_RD32(IVPU_MMU_REG_EVTQ_PROD_SEC);
if (!CIRC_CNT(IVPU_MMU_Q_IDX(evtq->prod), IVPU_MMU_Q_IDX(evtq->cons), IVPU_MMU_Q_COUNT))
return NULL;
clflush_cache_range(evt, IVPU_MMU_EVTQ_CMD_SIZE);
evtq->cons = (evtq->cons + 1) & IVPU_MMU_Q_WRAP_MASK;
REGV_WR32(VPU_37XX_HOST_MMU_EVTQ_CONS_SEC, evtq->cons);
REGV_WR32(IVPU_MMU_REG_EVTQ_CONS_SEC, evtq->cons);
return evt;
}
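CIRC_CNT() is the stock <linux/circ_buf.h> helper; with a power-of-two queue depth, the occupancy check above reduces to masked index arithmetic:

/* From <linux/circ_buf.h>: entries currently held in a ring of power-of-two size. */
#define CIRC_CNT(head, tail, size)	(((head) - (tail)) & ((size) - 1))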
@@ -841,35 +874,35 @@ void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev)
ivpu_dbg(vdev, IRQ, "MMU error\n");
gerror_val = REGV_RD32(VPU_37XX_HOST_MMU_GERROR);
gerrorn_val = REGV_RD32(VPU_37XX_HOST_MMU_GERRORN);
gerror_val = REGV_RD32(IVPU_MMU_REG_GERROR);
gerrorn_val = REGV_RD32(IVPU_MMU_REG_GERRORN);
active = gerror_val ^ gerrorn_val;
if (!(active & IVPU_MMU_GERROR_ERR_MASK))
return;
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_ABT, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_ABT, active))
ivpu_warn_ratelimited(vdev, "MMU MSI ABT write aborted\n");
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_PRIQ_ABT, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_PRIQ_ABT, active))
ivpu_warn_ratelimited(vdev, "MMU PRIQ MSI ABT write aborted\n");
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_EVTQ_ABT, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_EVTQ_ABT, active))
ivpu_warn_ratelimited(vdev, "MMU EVTQ MSI ABT write aborted\n");
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, MSI_CMDQ_ABT, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, MSI_CMDQ_ABT, active))
ivpu_warn_ratelimited(vdev, "MMU CMDQ MSI ABT write aborted\n");
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, PRIQ_ABT, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, PRIQ_ABT, active))
ivpu_err_ratelimited(vdev, "MMU PRIQ write aborted\n");
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, EVTQ_ABT, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, EVTQ_ABT, active))
ivpu_err_ratelimited(vdev, "MMU EVTQ write aborted\n");
if (REG_TEST_FLD(VPU_37XX_HOST_MMU_GERROR, CMDQ, active))
if (REG_TEST_FLD(IVPU_MMU_REG_GERROR, CMDQ, active))
ivpu_err_ratelimited(vdev, "MMU CMDQ write aborted\n");
REGV_WR32(VPU_37XX_HOST_MMU_GERRORN, gerror_val);
REGV_WR32(IVPU_MMU_REG_GERRORN, gerror_val);
}
int ivpu_mmu_set_pgtable(struct ivpu_device *vdev, int ssid, struct ivpu_mmu_pgtable *pgtable)


@@ -427,8 +427,10 @@ ivpu_mmu_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u3
INIT_LIST_HEAD(&ctx->bo_list);
ret = ivpu_mmu_pgtable_init(vdev, &ctx->pgtable);
if (ret)
if (ret) {
ivpu_err(vdev, "Failed to initialize pgtable for ctx %u: %d\n", context_id, ret);
return ret;
}
if (!context_id) {
start = vdev->hw->ranges.global.start;
@@ -467,6 +469,16 @@ void ivpu_mmu_global_context_fini(struct ivpu_device *vdev)
return ivpu_mmu_context_fini(vdev, &vdev->gctx);
}
int ivpu_mmu_reserved_context_init(struct ivpu_device *vdev)
{
return ivpu_mmu_user_context_init(vdev, &vdev->rctx, IVPU_RESERVED_CONTEXT_MMU_SSID);
}
void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev)
{
return ivpu_mmu_user_context_fini(vdev, &vdev->rctx);
}
void ivpu_mmu_user_context_mark_invalid(struct ivpu_device *vdev, u32 ssid)
{
struct ivpu_file_priv *file_priv;
@@ -488,13 +500,13 @@ int ivpu_mmu_user_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context
ret = ivpu_mmu_context_init(vdev, ctx, ctx_id);
if (ret) {
ivpu_err(vdev, "Failed to initialize context: %d\n", ret);
ivpu_err(vdev, "Failed to initialize context %u: %d\n", ctx_id, ret);
return ret;
}
ret = ivpu_mmu_set_pgtable(vdev, ctx_id, &ctx->pgtable);
if (ret) {
ivpu_err(vdev, "Failed to set page table: %d\n", ret);
ivpu_err(vdev, "Failed to set page table for context %u: %d\n", ctx_id, ret);
goto err_context_fini;
}


@@ -32,6 +32,8 @@ struct ivpu_mmu_context {
int ivpu_mmu_global_context_init(struct ivpu_device *vdev);
void ivpu_mmu_global_context_fini(struct ivpu_device *vdev);
int ivpu_mmu_reserved_context_init(struct ivpu_device *vdev);
void ivpu_mmu_reserved_context_fini(struct ivpu_device *vdev);
int ivpu_mmu_user_context_init(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, u32 ctx_id);
void ivpu_mmu_user_context_fini(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx);


@@ -282,10 +282,11 @@ void ivpu_pm_reset_done_cb(struct pci_dev *pdev)
pm_runtime_put_autosuspend(vdev->drm.dev);
}
int ivpu_pm_init(struct ivpu_device *vdev)
void ivpu_pm_init(struct ivpu_device *vdev)
{
struct device *dev = vdev->drm.dev;
struct ivpu_pm_info *pm = vdev->pm;
int delay;
pm->vdev = vdev;
pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT;
@@ -293,16 +294,15 @@ int ivpu_pm_init(struct ivpu_device *vdev)
atomic_set(&pm->in_reset, 0);
INIT_WORK(&pm->recovery_work, ivpu_pm_recovery_work);
pm_runtime_use_autosuspend(dev);
if (ivpu_disable_recovery)
pm_runtime_set_autosuspend_delay(dev, -1);
else if (ivpu_is_silicon(vdev))
pm_runtime_set_autosuspend_delay(dev, 100);
delay = -1;
else
pm_runtime_set_autosuspend_delay(dev, 60000);
delay = vdev->timeout.autosuspend;
return 0;
pm_runtime_use_autosuspend(dev);
pm_runtime_set_autosuspend_delay(dev, delay);
ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay);
}
void ivpu_pm_cancel_recovery(struct ivpu_device *vdev)


@@ -19,7 +19,7 @@ struct ivpu_pm_info {
u32 suspend_reschedule_counter;
};
int ivpu_pm_init(struct ivpu_device *vdev);
void ivpu_pm_init(struct ivpu_device *vdev);
void ivpu_pm_enable(struct ivpu_device *vdev);
void ivpu_pm_disable(struct ivpu_device *vdev);
void ivpu_pm_cancel_recovery(struct ivpu_device *vdev);


@@ -219,7 +219,7 @@ static void dm_helpers_construct_old_payload(
/* Set correct time_slots/PBN of old payload.
* other fields (delete & dsc_enabled) in
* struct drm_dp_mst_atomic_payload are don't care fields
* while calling drm_dp_remove_payload()
* while calling drm_dp_remove_payload_part2()
*/
for (i = 0; i < current_link_table.stream_count; i++) {
dc_alloc =
@@ -263,13 +263,12 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
mst_mgr = &aconnector->mst_root->mst_mgr;
mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
/* It's OK for this to fail */
new_payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port);
if (enable) {
target_payload = new_payload;
/* It's OK for this to fail */
drm_dp_add_payload_part1(mst_mgr, mst_state, new_payload);
} else {
/* construct old payload by VCPI*/
@@ -277,7 +276,7 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
new_payload, &old_payload);
target_payload = &old_payload;
drm_dp_remove_payload(mst_mgr, mst_state, &old_payload, new_payload);
drm_dp_remove_payload_part1(mst_mgr, mst_state, new_payload);
}
/* mst_mgr->payloads are VC payload notify MST branch using DPCD or
@@ -344,7 +343,7 @@ bool dm_helpers_dp_mst_send_payload_allocation(
struct amdgpu_dm_connector *aconnector;
struct drm_dp_mst_topology_state *mst_state;
struct drm_dp_mst_topology_mgr *mst_mgr;
struct drm_dp_mst_atomic_payload *payload;
struct drm_dp_mst_atomic_payload *new_payload, *old_payload;
enum mst_progress_status set_flag = MST_ALLOCATE_NEW_PAYLOAD;
enum mst_progress_status clr_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
int ret = 0;
@@ -357,15 +356,20 @@ bool dm_helpers_dp_mst_send_payload_allocation(
mst_mgr = &aconnector->mst_root->mst_mgr;
mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port);
new_payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port);
if (!enable) {
set_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
clr_flag = MST_ALLOCATE_NEW_PAYLOAD;
}
if (enable)
ret = drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload);
if (enable) {
ret = drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, new_payload);
} else {
dm_helpers_construct_old_payload(stream->link, mst_state->pbn_div,
new_payload, old_payload);
drm_dp_remove_payload_part2(mst_mgr, mst_state, old_payload, new_payload);
}
if (ret) {
amdgpu_dm_set_mst_status(&aconnector->mst_status,


@@ -1223,7 +1223,7 @@ int komeda_build_display_data_flow(struct komeda_crtc *kcrtc,
return 0;
}
static void
static int
komeda_pipeline_unbound_components(struct komeda_pipeline *pipe,
struct komeda_pipeline_state *new)
{
@@ -1243,8 +1243,12 @@ komeda_pipeline_unbound_components(struct komeda_pipeline *pipe,
c = komeda_pipeline_get_component(pipe, id);
c_st = komeda_component_get_state_and_set_user(c,
drm_st, NULL, new->crtc);
if (PTR_ERR(c_st) == -EDEADLK)
return -EDEADLK;
WARN_ON(IS_ERR(c_st));
}
return 0;
}
/* release unclaimed pipeline resource */
@@ -1266,9 +1270,8 @@ int komeda_release_unclaimed_resources(struct komeda_pipeline *pipe,
if (WARN_ON(IS_ERR_OR_NULL(st)))
return -EINVAL;
komeda_pipeline_unbound_components(pipe, st);
return komeda_pipeline_unbound_components(pipe, st);
return 0;
}
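-EDEADLK from komeda_component_get_state_and_set_user() means the atomic commit's ww-mutex acquire context lost a lock race; the only valid response is to unwind and retry, hence propagating it instead of the old WARN_ON(). The standard caller-side retry loop in DRM looks roughly like this fragment (a sketch of the common helper pattern, not komeda-specific code):

retry:
	ret = drm_atomic_commit(state);
	if (ret == -EDEADLK) {
		drm_atomic_state_clear(state);
		drm_modeset_backoff(state->acquire_ctx);	/* drop and re-take locks */
		goto retry;
	}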
/* Since standalone disabled components must be disabled separately and in the


@@ -181,6 +181,7 @@ config DRM_NWL_MIPI_DSI
select DRM_KMS_HELPER
select DRM_MIPI_DSI
select DRM_PANEL_BRIDGE
select GENERIC_PHY
select GENERIC_PHY_MIPI_DPHY
select MFD_SYSCON
select MULTIPLEXER
@@ -227,6 +228,7 @@ config DRM_SAMSUNG_DSIM
select DRM_KMS_HELPER
select DRM_MIPI_DSI
select DRM_PANEL_BRIDGE
select GENERIC_PHY
select GENERIC_PHY_MIPI_DPHY
help
The Samsung MIPI DSIM bridge controller driver.


@@ -1231,9 +1231,7 @@ static int anx78xx_i2c_probe(struct i2c_client *client)
mutex_init(&anx78xx->lock);
#if IS_ENABLED(CONFIG_OF)
anx78xx->bridge.of_node = client->dev.of_node;
#endif
anx78xx->client = client;
i2c_set_clientdata(client, anx78xx);
@@ -1367,12 +1365,6 @@ static void anx78xx_i2c_remove(struct i2c_client *client)
kfree(anx78xx->edid);
}
static const struct i2c_device_id anx78xx_id[] = {
{ "anx7814", 0 },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(i2c, anx78xx_id);
static const struct of_device_id anx78xx_match_table[] = {
{ .compatible = "analogix,anx7808", .data = anx7808_i2c_addresses },
{ .compatible = "analogix,anx7812", .data = anx781x_i2c_addresses },
@@ -1389,7 +1381,6 @@ static struct i2c_driver anx78xx_driver = {
},
.probe = anx78xx_i2c_probe,
.remove = anx78xx_i2c_remove,
.id_table = anx78xx_id,
};
module_i2c_driver(anx78xx_driver);


@@ -4,6 +4,7 @@ config DRM_CDNS_DSI
select DRM_KMS_HELPER
select DRM_MIPI_DSI
select DRM_PANEL_BRIDGE
select GENERIC_PHY
select GENERIC_PHY_MIPI_DPHY
depends on OF
help


@@ -1447,10 +1447,14 @@ static int it66121_audio_get_eld(struct device *dev, void *data,
struct it66121_ctx *ctx = dev_get_drvdata(dev);
mutex_lock(&ctx->lock);
memcpy(buf, ctx->connector->eld,
min(sizeof(ctx->connector->eld), len));
if (!ctx->connector) {
/* Pass an empty ELD if connector not available */
dev_dbg(dev, "No connector present, passing empty ELD data");
memset(buf, 0, len);
} else {
memcpy(buf, ctx->connector->eld,
min(sizeof(ctx->connector->eld), len));
}
mutex_unlock(&ctx->lock);
return 0;
@@ -1501,7 +1505,6 @@ static const char * const it66121_supplies[] = {
static int it66121_probe(struct i2c_client *client)
{
const struct i2c_device_id *id = i2c_client_get_device_id(client);
u32 revision_id, vendor_ids[2] = { 0 }, device_ids[2] = { 0 };
struct device_node *ep;
int ret;
@@ -1523,7 +1526,7 @@ static int it66121_probe(struct i2c_client *client)
ctx->dev = dev;
ctx->client = client;
ctx->info = (const struct it66121_chip_info *) id->driver_data;
ctx->info = i2c_get_match_data(client);
of_property_read_u32(ep, "bus-width", &ctx->bus_width);
of_node_put(ep);
@@ -1609,13 +1612,6 @@ static void it66121_remove(struct i2c_client *client)
mutex_destroy(&ctx->lock);
}
static const struct of_device_id it66121_dt_match[] = {
{ .compatible = "ite,it66121" },
{ .compatible = "ite,it6610" },
{ }
};
MODULE_DEVICE_TABLE(of, it66121_dt_match);
static const struct it66121_chip_info it66121_chip_info = {
.id = ID_IT66121,
.vid = 0x4954,
@@ -1628,6 +1624,13 @@ static const struct it66121_chip_info it6610_chip_info = {
.pid = 0x0611,
};
static const struct of_device_id it66121_dt_match[] = {
{ .compatible = "ite,it66121", &it66121_chip_info },
{ .compatible = "ite,it6610", &it6610_chip_info },
{ }
};
MODULE_DEVICE_TABLE(of, it66121_dt_match);
static const struct i2c_device_id it66121_id[] = {
{ "it66121", (kernel_ulong_t) &it66121_chip_info },
{ "it6610", (kernel_ulong_t) &it6610_chip_info },


@@ -45,7 +45,6 @@ struct lt8912 {
u8 data_lanes;
bool is_power_on;
bool is_attached;
};
static int lt8912_write_init_config(struct lt8912 *lt)
@@ -559,6 +558,13 @@ static int lt8912_bridge_attach(struct drm_bridge *bridge,
struct lt8912 *lt = bridge_to_lt8912(bridge);
int ret;
ret = drm_bridge_attach(bridge->encoder, lt->hdmi_port, bridge,
DRM_BRIDGE_ATTACH_NO_CONNECTOR);
if (ret < 0) {
dev_err(lt->dev, "Failed to attach next bridge (%d)\n", ret);
return ret;
}
if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)) {
ret = lt8912_bridge_connector_init(bridge);
if (ret) {
@@ -575,8 +581,6 @@ static int lt8912_bridge_attach(struct drm_bridge *bridge,
if (ret)
goto error;
lt->is_attached = true;
return 0;
error:
@@ -588,15 +592,10 @@ static void lt8912_bridge_detach(struct drm_bridge *bridge)
{
struct lt8912 *lt = bridge_to_lt8912(bridge);
if (lt->is_attached) {
lt8912_hard_power_off(lt);
lt8912_hard_power_off(lt);
if (lt->hdmi_port->ops & DRM_BRIDGE_OP_HPD)
drm_bridge_hpd_disable(lt->hdmi_port);
drm_connector_unregister(&lt->connector);
drm_connector_cleanup(&lt->connector);
}
if (lt->connector.dev && lt->hdmi_port->ops & DRM_BRIDGE_OP_HPD)
drm_bridge_hpd_disable(lt->hdmi_port);
}
static enum drm_connector_status
@@ -750,7 +749,6 @@ static void lt8912_remove(struct i2c_client *client)
{
struct lt8912 *lt = i2c_get_clientdata(client);
lt8912_bridge_detach(&lt->bridge);
drm_bridge_remove(&lt->bridge);
lt8912_free_i2c(lt);
lt8912_put_dt(lt);


@@ -5,6 +5,7 @@
*/
#include <linux/gpio/consumer.h>
#include <linux/media-bus-format.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_graph.h>
@@ -71,12 +72,6 @@ static void lvds_codec_disable(struct drm_bridge *bridge)
"Failed to disable regulator \"vcc\": %d\n", ret);
}
static const struct drm_bridge_funcs funcs = {
.attach = lvds_codec_attach,
.enable = lvds_codec_enable,
.disable = lvds_codec_disable,
};
#define MAX_INPUT_SEL_FORMATS 1
static u32 *
lvds_codec_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
@@ -102,7 +97,7 @@ lvds_codec_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
return input_fmts;
}
static const struct drm_bridge_funcs funcs_decoder = {
static const struct drm_bridge_funcs funcs = {
.attach = lvds_codec_attach,
.enable = lvds_codec_enable,
.disable = lvds_codec_disable,
@@ -184,8 +179,9 @@ static int lvds_codec_probe(struct platform_device *pdev)
return ret;
} else {
lvds_codec->bus_format = ret;
lvds_codec->bridge.funcs = &funcs_decoder;
}
} else {
lvds_codec->bus_format = MEDIA_BUS_FMT_RGB888_1X24;
}
/*


@@ -4,6 +4,8 @@
* Copyright (C) 2017 Broadcom
*/
#include <linux/device.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_connector.h>
@@ -19,6 +21,7 @@ struct panel_bridge {
struct drm_bridge bridge;
struct drm_connector connector;
struct drm_panel *panel;
struct device_link *link;
u32 connector_type;
};
@@ -60,6 +63,8 @@ static int panel_bridge_attach(struct drm_bridge *bridge,
{
struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge);
struct drm_connector *connector = &panel_bridge->connector;
struct drm_panel *panel = panel_bridge->panel;
struct drm_device *drm_dev = bridge->dev;
int ret;
if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)
@@ -70,6 +75,14 @@
return -ENODEV;
}
panel_bridge->link = device_link_add(drm_dev->dev, panel->dev,
DL_FLAG_STATELESS);
if (!panel_bridge->link) {
DRM_ERROR("Failed to add device link between %s and %s\n",
dev_name(drm_dev->dev), dev_name(panel->dev));
return -EINVAL;
}
drm_connector_helper_add(connector,
&panel_bridge_connector_helper_funcs);
@@ -78,6 +91,7 @@
panel_bridge->connector_type);
if (ret) {
DRM_ERROR("Failed to initialize connector\n");
device_link_del(panel_bridge->link);
return ret;
}
@@ -100,6 +114,8 @@ static void panel_bridge_detach(struct drm_bridge *bridge)
struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge);
struct drm_connector *connector = &panel_bridge->connector;
device_link_del(panel_bridge->link);
/*
* Cleanup the connector if we know it was initialized.
*
@@ -302,9 +318,7 @@ struct drm_bridge *drm_panel_bridge_add_typed(struct drm_panel *panel,
panel_bridge->panel = panel;
panel_bridge->bridge.funcs = &panel_bridge_bridge_funcs;
#ifdef CONFIG_OF
panel_bridge->bridge.of_node = panel->dev->of_node;
#endif
panel_bridge->bridge.ops = DRM_BRIDGE_OP_MODES;
panel_bridge->bridge.type = connector_type;
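device_link_add() with DL_FLAG_STATELESS makes the DRM device (drm_dev->dev above) a consumer of the panel device, so the panel suspends after and resumes before the display driver and stays bound while the consumer is; a stateless link is not cleaned up by the driver core, which is why detach calls device_link_del(). The pairing in isolation (device names hypothetical):

	struct device_link *link;

	link = device_link_add(consumer_dev, supplier_dev, DL_FLAG_STATELESS);
	if (!link)
		return -EINVAL;
	/* ... */
	device_link_del(link);	/* stateless links are torn down by hand */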


@@ -385,7 +385,7 @@ static const unsigned int imx8mm_dsim_reg_values[] = {
[RESET_TYPE] = DSIM_SWRST,
[PLL_TIMER] = 500,
[STOP_STATE_CNT] = 0xf,
[PHYCTRL_ULPS_EXIT] = 0,
[PHYCTRL_ULPS_EXIT] = DSIM_PHYCTRL_ULPS_EXIT(0xaf),
[PHYCTRL_VREG_LP] = 0,
[PHYCTRL_SLEW_UP] = 0,
[PHYTIMING_LPX] = DSIM_PHYTIMING_LPX(0x06),
@@ -413,6 +413,7 @@ static const struct samsung_dsim_driver_data exynos3_dsi_driver_data = {
.m_min = 41,
.m_max = 125,
.min_freq = 500,
.has_broken_fifoctrl_emptyhdr = 1,
};
static const struct samsung_dsim_driver_data exynos4_dsi_driver_data = {
@@ -429,6 +430,7 @@ static const struct samsung_dsim_driver_data exynos4_dsi_driver_data = {
.m_min = 41,
.m_max = 125,
.min_freq = 500,
.has_broken_fifoctrl_emptyhdr = 1,
};
static const struct samsung_dsim_driver_data exynos5_dsi_driver_data = {
@@ -1010,8 +1012,20 @@ static int samsung_dsim_wait_for_hdr_fifo(struct samsung_dsim *dsi)
do {
u32 reg = samsung_dsim_read(dsi, DSIM_FIFOCTRL_REG);
if (reg & DSIM_SFR_HEADER_EMPTY)
return 0;
if (!dsi->driver_data->has_broken_fifoctrl_emptyhdr) {
if (reg & DSIM_SFR_HEADER_EMPTY)
return 0;
} else {
if (!(reg & DSIM_SFR_HEADER_FULL)) {
/*
* Wait a little bit, so the pending data can
* actually leave the FIFO to avoid overflow.
*/
if (!cond_resched())
usleep_range(950, 1050);
return 0;
}
}
if (!cond_resched())
usleep_range(950, 1050);


@@ -3541,9 +3541,7 @@ struct dw_hdmi *dw_hdmi_probe(struct platform_device *pdev,
| DRM_BRIDGE_OP_HPD;
hdmi->bridge.interlace_allowed = true;
hdmi->bridge.ddc = hdmi->ddc;
#ifdef CONFIG_OF
hdmi->bridge.of_node = pdev->dev.of_node;
#endif
memset(&pdevinfo, 0, sizeof(pdevinfo));
pdevinfo.parent = dev;


@@ -1182,9 +1182,7 @@ __dw_mipi_dsi_probe(struct platform_device *pdev,
dsi->bridge.driver_private = dsi;
dsi->bridge.funcs = &dw_mipi_dsi_bridge_funcs;
#ifdef CONFIG_OF
dsi->bridge.of_node = pdev->dev.of_node;
#endif
return dsi;
}


@@ -3255,15 +3255,15 @@ out_get_port:
}
EXPORT_SYMBOL(drm_dp_send_query_stream_enc_status);
static int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_atomic_payload *payload)
static int drm_dp_create_payload_at_dfp(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_atomic_payload *payload)
{
return drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot,
payload->time_slots);
}
static int drm_dp_create_payload_step2(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_atomic_payload *payload)
static int drm_dp_create_payload_to_remote(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_atomic_payload *payload)
{
int ret;
struct drm_dp_mst_port *port = drm_dp_mst_topology_get_port_validated(mgr, payload->port);
@@ -3276,17 +3276,20 @@ static int drm_dp_create_payload_step2(struct drm_dp_mst_topology_mgr *mgr,
return ret;
}
static int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_atomic_payload *payload)
static void drm_dp_destroy_payload_at_remote_and_dfp(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_atomic_payload *payload)
{
drm_dbg_kms(mgr->dev, "\n");
/* it's okay for these to fail */
drm_dp_payload_send_msg(mgr, payload->port, payload->vcpi, 0);
drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, 0);
if (payload->payload_allocation_status == DRM_DP_MST_PAYLOAD_ALLOCATION_REMOTE) {
drm_dp_payload_send_msg(mgr, payload->port, payload->vcpi, 0);
payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_DFP;
}
return 0;
if (payload->payload_allocation_status == DRM_DP_MST_PAYLOAD_ALLOCATION_DFP)
drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, 0);
}
/**
@@ -3296,81 +3299,105 @@ static int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr,
* @payload: The payload to write
*
* Determines the starting time slot for the given payload, and programs the VCPI for this payload
* into hardware. After calling this, the driver should generate ACT and payload packets.
* into the DPCD of DPRX. After calling this, the driver should generate ACT and payload packets.
*
* Returns: 0 on success, error code on failure. In the event that this fails,
* @payload.vc_start_slot will also be set to -1.
* Returns: 0 on success, error code on failure.
*/
int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_atomic_payload *payload)
{
struct drm_dp_mst_port *port;
int ret;
port = drm_dp_mst_topology_get_port_validated(mgr, payload->port);
if (!port) {
drm_dbg_kms(mgr->dev,
"VCPI %d for port %p not in topology, not creating a payload\n",
payload->vcpi, payload->port);
payload->vc_start_slot = -1;
return 0;
}
int ret = 0;
bool allocate = true;
/* Update mst mgr info */
if (mgr->payload_count == 0)
mgr->next_start_slot = mst_state->start_slot;
payload->vc_start_slot = mgr->next_start_slot;
ret = drm_dp_create_payload_step1(mgr, payload);
drm_dp_mst_topology_put_port(port);
if (ret < 0) {
drm_warn(mgr->dev, "Failed to create MST payload for port %p: %d\n",
payload->port, ret);
payload->vc_start_slot = -1;
return ret;
}
mgr->payload_count++;
mgr->next_start_slot += payload->time_slots;
return 0;
/* Allocate payload to immediate downstream facing port */
port = drm_dp_mst_topology_get_port_validated(mgr, payload->port);
if (!port) {
drm_dbg_kms(mgr->dev,
"VCPI %d for port %p not in topology, not creating a payload to remote\n",
payload->vcpi, payload->port);
allocate = false;
}
if (allocate) {
ret = drm_dp_create_payload_at_dfp(mgr, payload);
if (ret < 0)
drm_warn(mgr->dev, "Failed to create MST payload for port %p: %d\n",
payload->port, ret);
}
payload->payload_allocation_status =
(!allocate || ret < 0) ? DRM_DP_MST_PAYLOAD_ALLOCATION_LOCAL :
DRM_DP_MST_PAYLOAD_ALLOCATION_DFP;
drm_dp_mst_topology_put_port(port);
return ret;
}
EXPORT_SYMBOL(drm_dp_add_payload_part1);
/**
* drm_dp_remove_payload() - Remove an MST payload
* drm_dp_remove_payload_part1() - Remove an MST payload along the virtual channel
* @mgr: Manager to use.
* @mst_state: The MST atomic state
* @old_payload: The payload with its old state
* @new_payload: The payload to write
* @payload: The payload to remove
*
* Removes a payload from an MST topology if it was successfully assigned a start slot. Also updates
* the starting time slots of all other payloads which would have been shifted towards the start of
* the VC table as a result. After calling this, the driver should generate ACT and payload packets.
* Removes a payload along the virtual channel if it was successfully allocated.
* After calling this, the driver should set HW to generate ACT and then switch to new
* payload allocation state.
*/
void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
const struct drm_dp_mst_atomic_payload *old_payload,
struct drm_dp_mst_atomic_payload *new_payload)
void drm_dp_remove_payload_part1(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_atomic_payload *payload)
{
struct drm_dp_mst_atomic_payload *pos;
/* Remove remote payload allocation */
bool send_remove = false;
/* We failed to make the payload, so nothing to do */
if (new_payload->vc_start_slot == -1)
return;
mutex_lock(&mgr->lock);
send_remove = drm_dp_mst_port_downstream_of_branch(new_payload->port, mgr->mst_primary);
send_remove = drm_dp_mst_port_downstream_of_branch(payload->port, mgr->mst_primary);
mutex_unlock(&mgr->lock);
if (send_remove)
drm_dp_destroy_payload_step1(mgr, mst_state, new_payload);
drm_dp_destroy_payload_at_remote_and_dfp(mgr, mst_state, payload);
else
drm_dbg_kms(mgr->dev, "Payload for VCPI %d not in topology, not sending remove\n",
new_payload->vcpi);
payload->vcpi);
payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_LOCAL;
}
EXPORT_SYMBOL(drm_dp_remove_payload_part1);
/**
* drm_dp_remove_payload_part2() - Remove an MST payload locally
* @mgr: Manager to use.
* @mst_state: The MST atomic state
* @old_payload: The payload with its old state
* @new_payload: The payload with its latest state
*
* Updates the starting time slots of all other payloads which would have been shifted towards
* the start of the payload ID table as a result of removing a payload. Driver should call this
* function whenever it removes a payload in its HW. It's independent of the result of payload
* allocation/deallocation at branch devices along the virtual channel.
*/
void drm_dp_remove_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_topology_state *mst_state,
const struct drm_dp_mst_atomic_payload *old_payload,
struct drm_dp_mst_atomic_payload *new_payload)
{
struct drm_dp_mst_atomic_payload *pos;
/* Remove local payload allocation */
list_for_each_entry(pos, &mst_state->payloads, next) {
if (pos != new_payload && pos->vc_start_slot > new_payload->vc_start_slot)
pos->vc_start_slot -= old_payload->time_slots;
@@ -3382,9 +3409,10 @@ void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr,
if (new_payload->delete)
drm_dp_mst_put_port_malloc(new_payload->port);
}
EXPORT_SYMBOL(drm_dp_remove_payload);
new_payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_NONE;
}
EXPORT_SYMBOL(drm_dp_remove_payload_part2);
/**
* drm_dp_add_payload_part2() - Execute payload update part 2
* @mgr: Manager to use.
@@ -3403,21 +3431,19 @@ int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
int ret = 0;
/* Skip failed payloads */
if (payload->vc_start_slot == -1) {
drm_dbg_kms(mgr->dev, "Part 1 of payload creation for %s failed, skipping part 2\n",
if (payload->payload_allocation_status != DRM_DP_MST_PAYLOAD_ALLOCATION_DFP) {
drm_dbg_kms(state->dev, "Part 1 of payload creation for %s failed, skipping part 2\n",
payload->port->connector->name);
return -EIO;
}
ret = drm_dp_create_payload_step2(mgr, payload);
if (ret < 0) {
if (!payload->delete)
drm_err(mgr->dev, "Step 2 of creating MST payload for %p failed: %d\n",
payload->port, ret);
else
drm_dbg_kms(mgr->dev, "Step 2 of removing MST payload for %p failed: %d\n",
payload->port, ret);
}
/* Allocate payload to remote end */
ret = drm_dp_create_payload_to_remote(mgr, payload);
if (ret < 0)
drm_err(mgr->dev, "Step 2 of creating MST payload for %p failed: %d\n",
payload->port, ret);
else
payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_REMOTE;
return ret;
}
@@ -4328,6 +4354,7 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state,
drm_dp_mst_get_port_malloc(port);
payload->port = port;
payload->vc_start_slot = -1;
payload->payload_allocation_status = DRM_DP_MST_PAYLOAD_ALLOCATION_NONE;
list_add(&payload->next, &topology_state->payloads);
}
payload->time_slots = req_slots;
@@ -4497,7 +4524,7 @@ void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state)
}
/* Now that previous state is committed, it's safe to copy over the start slot
* assignments
* and allocation status assignments
*/
list_for_each_entry(old_payload, &old_mst_state->payloads, next) {
if (old_payload->delete)
@@ -4506,6 +4533,8 @@ void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state)
new_payload = drm_atomic_get_mst_payload_state(new_mst_state,
old_payload->port);
new_payload->vc_start_slot = old_payload->vc_start_slot;
new_payload->payload_allocation_status =
old_payload->payload_allocation_status;
}
}
}
@@ -4822,6 +4851,13 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
struct drm_dp_mst_atomic_payload *payload;
int i, ret;
static const char *const status[] = {
"None",
"Local",
"DFP",
"Remote",
};
mutex_lock(&mgr->lock);
if (mgr->mst_primary)
drm_dp_mst_dump_mstb(m, mgr->mst_primary);
@@ -4838,7 +4874,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
seq_printf(m, "payload_mask: %x, max_payloads: %d, start_slot: %u, pbn_div: %d\n",
state->payload_mask, mgr->max_payloads, state->start_slot, state->pbn_div);
seq_printf(m, "\n| idx | port | vcpi | slots | pbn | dsc | sink name |\n");
seq_printf(m, "\n| idx | port | vcpi | slots | pbn | dsc | status | sink name |\n");
for (i = 0; i < mgr->max_payloads; i++) {
list_for_each_entry(payload, &state->payloads, next) {
char name[14];
@@ -4847,7 +4883,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
continue;
fetch_monitor_name(mgr, payload->port, name, sizeof(name));
seq_printf(m, " %5d %6d %6d %02d - %02d %5d %5s %19s\n",
seq_printf(m, " %5d %6d %6d %02d - %02d %5d %5s %8s %19s\n",
i,
payload->port->port_num,
payload->vcpi,
@@ -4855,6 +4891,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m,
payload->vc_start_slot + payload->time_slots - 1,
payload->pbn,
payload->dsc_enabled ? "Y" : "N",
status[payload->payload_allocation_status],
(*name != 0) ? name : "Unknown");
}
}


@@ -1841,9 +1841,9 @@ static const struct drm_debugfs_info drm_atomic_debugfs_list[] = {
{"state", drm_state_info, 0},
};
void drm_atomic_debugfs_init(struct drm_minor *minor)
void drm_atomic_debugfs_init(struct drm_device *dev)
{
drm_debugfs_add_files(minor->dev, drm_atomic_debugfs_list,
drm_debugfs_add_files(dev, drm_atomic_debugfs_list,
ARRAY_SIZE(drm_atomic_debugfs_list));
}
#endif

View File

@@ -1384,9 +1384,9 @@ static const struct drm_debugfs_info drm_bridge_debugfs_list[] = {
{ "bridge_chains", drm_bridge_chains_info, 0 },
};
void drm_bridge_debugfs_init(struct drm_minor *minor)
void drm_bridge_debugfs_init(struct drm_device *dev)
{
drm_debugfs_add_files(minor->dev, drm_bridge_debugfs_list,
drm_debugfs_add_files(dev, drm_bridge_debugfs_list,
ARRAY_SIZE(drm_bridge_debugfs_list));
}
#endif


@@ -535,9 +535,9 @@ static const struct drm_debugfs_info drm_client_debugfs_list[] = {
{ "internal_clients", drm_client_debugfs_internal_clients, 0 },
};
void drm_client_debugfs_init(struct drm_minor *minor)
void drm_client_debugfs_init(struct drm_device *dev)
{
drm_debugfs_add_files(minor->dev, drm_client_debugfs_list,
drm_debugfs_add_files(dev, drm_client_debugfs_list,
ARRAY_SIZE(drm_client_debugfs_list));
}
#endif


@@ -232,7 +232,7 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
/* drm_atomic.c */
#ifdef CONFIG_DEBUG_FS
struct drm_minor;
void drm_atomic_debugfs_init(struct drm_minor *minor);
void drm_atomic_debugfs_init(struct drm_device *dev);
#endif
int __drm_atomic_helper_disable_plane(struct drm_plane *plane,


@@ -150,6 +150,9 @@ static int drm_debugfs_open(struct inode *inode, struct file *file)
{
struct drm_info_node *node = inode->i_private;
if (!device_is_registered(node->minor->kdev))
return -ENODEV;
return single_open(file, node->info_ent->show, node);
}
@@ -157,6 +160,10 @@ static int drm_debugfs_entry_open(struct inode *inode, struct file *file)
{
struct drm_debugfs_entry *entry = inode->i_private;
struct drm_debugfs_info *node = &entry->file;
struct drm_minor *minor = entry->dev->primary ?: entry->dev->accel;
if (!device_is_registered(minor->kdev))
return -ENODEV;
return single_open(file, node->show, entry);
}
@@ -227,7 +234,7 @@
*
* Create a given set of debugfs files represented by an array of
* &struct drm_info_list in the given root directory. These files will be removed
* automatically on drm_debugfs_cleanup().
* automatically on drm_debugfs_dev_fini().
*/
void drm_debugfs_create_files(const struct drm_info_list *files, int count,
struct dentry *root, struct drm_minor *minor)
@@ -242,7 +249,7 @@ void drm_debugfs_create_files(const struct drm_info_list *files, int count,
if (features && !drm_core_check_all_features(dev, features))
continue;
tmp = kmalloc(sizeof(struct drm_info_node), GFP_KERNEL);
tmp = drmm_kzalloc(dev, sizeof(*tmp), GFP_KERNEL);
if (tmp == NULL)
continue;
@@ -251,111 +258,89 @@
0444, root, tmp,
&drm_debugfs_fops);
tmp->info_ent = &files[i];
mutex_lock(&minor->debugfs_lock);
list_add(&tmp->list, &minor->debugfs_list);
mutex_unlock(&minor->debugfs_lock);
}
}
EXPORT_SYMBOL(drm_debugfs_create_files);
int drm_debugfs_init(struct drm_minor *minor, int minor_id,
struct dentry *root)
{
struct drm_device *dev = minor->dev;
struct drm_debugfs_entry *entry, *tmp;
char name[64];
INIT_LIST_HEAD(&minor->debugfs_list);
mutex_init(&minor->debugfs_lock);
sprintf(name, "%d", minor_id);
minor->debugfs_root = debugfs_create_dir(name, root);
drm_debugfs_add_files(minor->dev, drm_debugfs_list, DRM_DEBUGFS_ENTRIES);
if (drm_drv_uses_atomic_modeset(dev)) {
drm_atomic_debugfs_init(minor);
drm_bridge_debugfs_init(minor);
}
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
drm_framebuffer_debugfs_init(minor);
drm_client_debugfs_init(minor);
}
if (dev->driver->debugfs_init)
dev->driver->debugfs_init(minor);
list_for_each_entry_safe(entry, tmp, &dev->debugfs_list, list) {
debugfs_create_file(entry->file.name, 0444,
minor->debugfs_root, entry, &drm_debugfs_entry_fops);
list_del(&entry->list);
}
return 0;
}
void drm_debugfs_late_register(struct drm_device *dev)
{
struct drm_minor *minor = dev->primary;
struct drm_debugfs_entry *entry, *tmp;
if (!minor)
return;
list_for_each_entry_safe(entry, tmp, &dev->debugfs_list, list) {
debugfs_create_file(entry->file.name, 0444,
minor->debugfs_root, entry, &drm_debugfs_entry_fops);
list_del(&entry->list);
}
}
int drm_debugfs_remove_files(const struct drm_info_list *files, int count,
struct drm_minor *minor)
struct dentry *root, struct drm_minor *minor)
{
struct list_head *pos, *q;
struct drm_info_node *tmp;
int i;
mutex_lock(&minor->debugfs_lock);
for (i = 0; i < count; i++) {
list_for_each_safe(pos, q, &minor->debugfs_list) {
tmp = list_entry(pos, struct drm_info_node, list);
if (tmp->info_ent == &files[i]) {
debugfs_remove(tmp->dent);
list_del(pos);
kfree(tmp);
}
}
struct dentry *dent = debugfs_lookup(files[i].name, root);
if (!dent)
continue;
drmm_kfree(minor->dev, d_inode(dent)->i_private);
debugfs_remove(dent);
}
mutex_unlock(&minor->debugfs_lock);
return 0;
}
EXPORT_SYMBOL(drm_debugfs_remove_files);
static void drm_debugfs_remove_all_files(struct drm_minor *minor)
/**
* drm_debugfs_dev_init - create debugfs directory for the device
* @dev: the device which we want to create the directory for
* @root: the parent directory depending on the device type
*
* Creates the debugfs directory for the device under the given root directory.
*/
void drm_debugfs_dev_init(struct drm_device *dev, struct dentry *root)
{
struct drm_info_node *node, *tmp;
mutex_lock(&minor->debugfs_lock);
list_for_each_entry_safe(node, tmp, &minor->debugfs_list, list) {
debugfs_remove(node->dent);
list_del(&node->list);
kfree(node);
}
mutex_unlock(&minor->debugfs_lock);
dev->debugfs_root = debugfs_create_dir(dev->unique, root);
}
void drm_debugfs_cleanup(struct drm_minor *minor)
/**
* drm_debugfs_dev_fini - cleanup debugfs directory
* @dev: the device to cleanup the debugfs stuff
*
* Remove the debugfs directory, might be called multiple times.
*/
void drm_debugfs_dev_fini(struct drm_device *dev)
{
if (!minor->debugfs_root)
return;
debugfs_remove_recursive(dev->debugfs_root);
dev->debugfs_root = NULL;
}
drm_debugfs_remove_all_files(minor);
void drm_debugfs_dev_register(struct drm_device *dev)
{
drm_debugfs_add_files(dev, drm_debugfs_list, DRM_DEBUGFS_ENTRIES);
debugfs_remove_recursive(minor->debugfs_root);
minor->debugfs_root = NULL;
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
drm_framebuffer_debugfs_init(dev);
drm_client_debugfs_init(dev);
}
if (drm_drv_uses_atomic_modeset(dev)) {
drm_atomic_debugfs_init(dev);
drm_bridge_debugfs_init(dev);
}
}
int drm_debugfs_register(struct drm_minor *minor, int minor_id,
struct dentry *root)
{
struct drm_device *dev = minor->dev;
char name[64];
sprintf(name, "%d", minor_id);
minor->debugfs_symlink = debugfs_create_symlink(name, root,
dev->unique);
/* TODO: Only for compatibility with drivers */
minor->debugfs_root = dev->debugfs_root;
if (dev->driver->debugfs_init && dev->render != minor)
dev->driver->debugfs_init(minor);
return 0;
}
void drm_debugfs_unregister(struct drm_minor *minor)
{
debugfs_remove(minor->debugfs_symlink);
minor->debugfs_symlink = NULL;
}
/**
@@ -381,9 +366,8 @@ void drm_debugfs_add_file(struct drm_device *dev, const char *name,
entry->file.data = data;
entry->dev = dev;
mutex_lock(&dev->debugfs_mutex);
list_add(&entry->list, &dev->debugfs_list);
mutex_unlock(&dev->debugfs_mutex);
debugfs_create_file(name, 0444, dev->debugfs_root, entry,
&drm_debugfs_entry_fops);
}
EXPORT_SYMBOL(drm_debugfs_add_file);
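With files attached directly to dev->debugfs_root at add time, drivers no longer need a late-register step. A minimal sketch of the call side (the driver, file, and callback names are hypothetical; the show callback receives the drm_debugfs_entry via m->private, as the entry_open path above arranges):

static int foo_stats_show(struct seq_file *m, void *data)
{
	struct drm_debugfs_entry *entry = m->private;
	struct drm_device *dev = entry->dev;

	seq_printf(m, "dev: %s\n", dev->unique);
	return 0;
}

	/* any time after drm_dev_init(), even before drm_dev_register() */
	drm_debugfs_add_file(dev, "foo_stats", foo_stats_show, NULL);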
@@ -540,13 +524,13 @@ static const struct file_operations drm_connector_fops = {
void drm_debugfs_connector_add(struct drm_connector *connector)
{
struct drm_minor *minor = connector->dev->primary;
struct drm_device *dev = connector->dev;
struct dentry *root;
if (!minor->debugfs_root)
if (!dev->debugfs_root)
return;
root = debugfs_create_dir(connector->name, minor->debugfs_root);
root = debugfs_create_dir(connector->name, dev->debugfs_root);
connector->debugfs_entry = root;
/* force */
@@ -581,7 +565,7 @@ void drm_debugfs_connector_remove(struct drm_connector *connector)
void drm_debugfs_crtc_add(struct drm_crtc *crtc)
{
struct drm_minor *minor = crtc->dev->primary;
struct drm_device *dev = crtc->dev;
struct dentry *root;
char *name;
@@ -589,7 +573,7 @@ void drm_debugfs_crtc_add(struct drm_crtc *crtc)
if (!name)
return;
root = debugfs_create_dir(name, minor->debugfs_root);
root = debugfs_create_dir(name, dev->debugfs_root);
kfree(name);
crtc->debugfs_entry = root;


@@ -172,10 +172,9 @@ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
if (!minor)
return 0;
if (minor->type == DRM_MINOR_ACCEL) {
accel_debugfs_init(minor, minor->index);
} else {
ret = drm_debugfs_init(minor, minor->index, drm_debugfs_root);
if (minor->type != DRM_MINOR_ACCEL) {
ret = drm_debugfs_register(minor, minor->index,
drm_debugfs_root);
if (ret) {
DRM_ERROR("DRM: Failed to initialize /sys/kernel/debug/dri.\n");
goto err_debugfs;
@@ -199,7 +198,7 @@ static int drm_minor_register(struct drm_device *dev, enum drm_minor_type type)
return 0;
err_debugfs:
drm_debugfs_cleanup(minor);
drm_debugfs_unregister(minor);
return ret;
}
@@ -223,7 +222,7 @@ static void drm_minor_unregister(struct drm_device *dev, enum drm_minor_type typ
device_del(minor->kdev);
dev_set_drvdata(minor->kdev, NULL); /* safety belt */
drm_debugfs_cleanup(minor);
drm_debugfs_unregister(minor);
}
/*
@@ -598,7 +597,6 @@ static void drm_dev_init_release(struct drm_device *dev, void *res)
mutex_destroy(&dev->clientlist_mutex);
mutex_destroy(&dev->filelist_mutex);
mutex_destroy(&dev->struct_mutex);
mutex_destroy(&dev->debugfs_mutex);
drm_legacy_destroy_members(dev);
}
@@ -639,14 +637,12 @@ static int drm_dev_init(struct drm_device *dev,
INIT_LIST_HEAD(&dev->filelist_internal);
INIT_LIST_HEAD(&dev->clientlist);
INIT_LIST_HEAD(&dev->vblank_event_list);
INIT_LIST_HEAD(&dev->debugfs_list);
spin_lock_init(&dev->event_lock);
mutex_init(&dev->struct_mutex);
mutex_init(&dev->filelist_mutex);
mutex_init(&dev->clientlist_mutex);
mutex_init(&dev->master_mutex);
mutex_init(&dev->debugfs_mutex);
ret = drmm_add_action_or_reset(dev, drm_dev_init_release, NULL);
if (ret)
@@ -697,6 +693,11 @@
goto err;
}
if (drm_core_check_feature(dev, DRIVER_COMPUTE_ACCEL))
accel_debugfs_init(dev);
else
drm_debugfs_dev_init(dev, drm_debugfs_root);
return 0;
err:
@@ -786,6 +787,9 @@ static void drm_dev_release(struct kref *ref)
{
struct drm_device *dev = container_of(ref, struct drm_device, ref);
/* Just in case register/unregister was never called */
drm_debugfs_dev_fini(dev);
if (dev->driver->release)
dev->driver->release(dev);
@@ -916,6 +920,11 @@ int drm_dev_register(struct drm_device *dev, unsigned long flags)
if (drm_dev_needs_global_mutex(dev))
mutex_lock(&drm_global_mutex);
if (drm_core_check_feature(dev, DRIVER_COMPUTE_ACCEL))
accel_debugfs_register(dev);
else
drm_debugfs_dev_register(dev);
ret = drm_minor_register(dev, DRM_MINOR_RENDER);
if (ret)
goto err_minors;
@@ -1001,6 +1010,7 @@ void drm_dev_unregister(struct drm_device *dev)
drm_minor_unregister(dev, DRM_MINOR_ACCEL);
drm_minor_unregister(dev, DRM_MINOR_PRIMARY);
drm_minor_unregister(dev, DRM_MINOR_RENDER);
drm_debugfs_dev_fini(dev);
}
EXPORT_SYMBOL(drm_dev_unregister);
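Taken together, the drm_drv.c hooks give the new per-device debugfs tree this lifetime (a summary of the hunks above):

/*
 * drm_dev_init()       -> drm_debugfs_dev_init()      create /sys/kernel/debug/dri/<unique>
 * drm_dev_register()   -> drm_debugfs_dev_register()  add core files (framebuffer, client, atomic, bridge)
 * drm_minor_register() -> drm_debugfs_register()      add the numeric <minor> symlink + driver files
 * drm_dev_unregister() -> drm_debugfs_dev_fini()      remove the whole tree (also run from final drm_dev_release())
 */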


@@ -1222,9 +1222,9 @@ static const struct drm_debugfs_info drm_framebuffer_debugfs_list[] = {
{ "framebuffer", drm_framebuffer_info, 0 },
};
void drm_framebuffer_debugfs_init(struct drm_minor *minor)
void drm_framebuffer_debugfs_init(struct drm_device *dev)
{
drm_debugfs_add_files(minor->dev, drm_framebuffer_debugfs_list,
drm_debugfs_add_files(dev, drm_framebuffer_debugfs_list,
ARRAY_SIZE(drm_framebuffer_debugfs_list));
}
#endif


@@ -180,27 +180,32 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
/* drm_debugfs.c drm_debugfs_crc.c */
#if defined(CONFIG_DEBUG_FS)
int drm_debugfs_init(struct drm_minor *minor, int minor_id,
struct dentry *root);
void drm_debugfs_cleanup(struct drm_minor *minor);
void drm_debugfs_late_register(struct drm_device *dev);
void drm_debugfs_dev_fini(struct drm_device *dev);
void drm_debugfs_dev_register(struct drm_device *dev);
int drm_debugfs_register(struct drm_minor *minor, int minor_id,
struct dentry *root);
void drm_debugfs_unregister(struct drm_minor *minor);
void drm_debugfs_connector_add(struct drm_connector *connector);
void drm_debugfs_connector_remove(struct drm_connector *connector);
void drm_debugfs_crtc_add(struct drm_crtc *crtc);
void drm_debugfs_crtc_remove(struct drm_crtc *crtc);
void drm_debugfs_crtc_crc_add(struct drm_crtc *crtc);
#else
static inline int drm_debugfs_init(struct drm_minor *minor, int minor_id,
struct dentry *root)
static inline void drm_debugfs_dev_fini(struct drm_device *dev)
{
}
static inline void drm_debugfs_dev_register(struct drm_device *dev)
{
}
static inline int drm_debugfs_register(struct drm_minor *minor, int minor_id,
struct dentry *root)
{
return 0;
}
static inline void drm_debugfs_cleanup(struct drm_minor *minor)
{
}
static inline void drm_debugfs_late_register(struct drm_device *dev)
static inline void drm_debugfs_unregister(struct drm_minor *minor)
{
}
@@ -259,4 +264,4 @@ int drm_syncobj_query_ioctl(struct drm_device *dev, void *data,
/* drm_framebuffer.c */
void drm_framebuffer_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_framebuffer *fb);
void drm_framebuffer_debugfs_init(struct drm_minor *minor);
void drm_framebuffer_debugfs_init(struct drm_device *dev);


@@ -54,8 +54,6 @@ int drm_modeset_register_all(struct drm_device *dev)
if (ret)
goto err_connector;
drm_debugfs_late_register(dev);
return 0;
err_connector:


@@ -81,7 +81,6 @@ extern void gma_encoder_destroy(struct drm_encoder *encoder);
/* Common clock related functions */
extern const struct gma_limit_t *gma_limit(struct drm_crtc *crtc, int refclk);
extern void gma_clock(int refclk, struct gma_clock_t *clock);
extern bool gma_pll_is_valid(struct drm_crtc *crtc,
const struct gma_limit_t *limit,
struct gma_clock_t *clock);


@@ -161,14 +161,6 @@
#define PSB_NUM_VBLANKS 2
#define PSB_2D_SIZE (256*1024*1024)
#define PSB_MAX_RELOC_PAGES 1024
#define PSB_LOW_REG_OFFS 0x0204
#define PSB_HIGH_REG_OFFS 0x0600
#define PSB_NUM_VBLANKS 2
#define PSB_WATCHDOG_DELAY (HZ * 2)
#define PSB_LID_DELAY (HZ / 10)
@@ -424,6 +416,7 @@ struct drm_psb_private {
uint32_t pipestat[PSB_NUM_PIPE];
spinlock_t irqmask_lock;
bool irq_enabled;
/* Power */
bool pm_initialized;


@@ -186,19 +186,13 @@ extern bool psb_intel_ddc_probe(struct i2c_adapter *adapter);
extern void psb_intel_crtc_init(struct drm_device *dev, int pipe,
struct psb_intel_mode_device *mode_dev);
extern void psb_intel_crt_init(struct drm_device *dev);
extern bool psb_intel_sdvo_init(struct drm_device *dev, int output_device);
extern void psb_intel_dvo_init(struct drm_device *dev);
extern void psb_intel_tv_init(struct drm_device *dev);
extern void psb_intel_lvds_init(struct drm_device *dev,
struct psb_intel_mode_device *mode_dev);
extern void psb_intel_lvds_set_brightness(struct drm_device *dev, int level);
extern void oaktrail_lvds_init(struct drm_device *dev,
struct psb_intel_mode_device *mode_dev);
extern void oaktrail_wait_for_INTR_PKT_SENT(struct drm_device *dev);
struct gma_i2c_chan *oaktrail_lvds_i2c_init(struct drm_device *dev);
extern void mid_dsi_init(struct drm_device *dev,
struct psb_intel_mode_device *mode_dev, int dsi_num);
extern struct drm_encoder *gma_best_encoder(struct drm_connector *connector);
extern void gma_connector_attach_encoder(struct gma_connector *connector,
@@ -214,11 +208,6 @@ extern struct drm_display_mode *psb_intel_crtc_mode_get(struct drm_device *dev,
struct drm_crtc *crtc);
extern struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev,
int pipe);
extern struct drm_connector *psb_intel_sdvo_find(struct drm_device *dev,
int sdvoB);
extern int intelfb_probe(struct drm_device *dev);
extern int intelfb_remove(struct drm_device *dev,
struct drm_framebuffer *fb);
extern bool psb_intel_lvds_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);
@ -242,9 +231,6 @@ extern void cdv_intel_dp_set_m_n(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);
extern void psb_intel_attach_force_audio_property(struct drm_connector *connector);
extern void psb_intel_attach_broadcast_rgb_property(struct drm_connector *connector);
extern int cdv_sb_read(struct drm_device *dev, u32 reg, u32 *val);
extern int cdv_sb_write(struct drm_device *dev, u32 reg, u32 val);
extern void cdv_sb_reset(struct drm_device *dev);


@ -327,6 +327,8 @@ int gma_irq_install(struct drm_device *dev)
gma_irq_postinstall(dev);
dev_priv->irq_enabled = true;
return 0;
}
@ -337,6 +339,9 @@ void gma_irq_uninstall(struct drm_device *dev)
unsigned long irqflags;
unsigned int i;
if (!dev_priv->irq_enabled)
return;
spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
if (dev_priv->ops->hotplug_enable)


@ -557,12 +557,8 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
struct intel_dp *intel_dp = &dig_port->dp;
struct intel_connector *connector =
to_intel_connector(old_conn_state->connector);
struct drm_dp_mst_topology_state *old_mst_state =
drm_atomic_get_old_mst_topology_state(&state->base, &intel_dp->mst_mgr);
struct drm_dp_mst_topology_state *new_mst_state =
drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
const struct drm_dp_mst_atomic_payload *old_payload =
drm_atomic_get_mst_payload_state(old_mst_state, connector->port);
struct drm_dp_mst_atomic_payload *new_payload =
drm_atomic_get_mst_payload_state(new_mst_state, connector->port);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
@ -572,8 +568,7 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
intel_hdcp_disable(intel_mst->connector);
drm_dp_remove_payload(&intel_dp->mst_mgr, new_mst_state,
old_payload, new_payload);
drm_dp_remove_payload_part1(&intel_dp->mst_mgr, new_mst_state, new_payload);
intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state);
}
@ -588,6 +583,14 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
struct intel_dp *intel_dp = &dig_port->dp;
struct intel_connector *connector =
to_intel_connector(old_conn_state->connector);
struct drm_dp_mst_topology_state *old_mst_state =
drm_atomic_get_old_mst_topology_state(&state->base, &intel_dp->mst_mgr);
struct drm_dp_mst_topology_state *new_mst_state =
drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
const struct drm_dp_mst_atomic_payload *old_payload =
drm_atomic_get_mst_payload_state(old_mst_state, connector->port);
struct drm_dp_mst_atomic_payload *new_payload =
drm_atomic_get_mst_payload_state(new_mst_state, connector->port);
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
bool last_mst_stream;
@ -608,6 +611,9 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
wait_for_act_sent(encoder, old_crtc_state);
drm_dp_remove_payload_part2(&intel_dp->mst_mgr, new_mst_state,
old_payload, new_payload);
intel_ddi_disable_transcoder_func(old_crtc_state);
if (DISPLAY_VER(dev_priv) >= 9)


@ -255,19 +255,17 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
return ret;
}
static int dw_hdmi_imx_remove(struct platform_device *pdev)
static void dw_hdmi_imx_remove(struct platform_device *pdev)
{
struct imx_hdmi *hdmi = platform_get_drvdata(pdev);
component_del(&pdev->dev, &dw_hdmi_imx_ops);
dw_hdmi_remove(hdmi->hdmi);
return 0;
}
static struct platform_driver dw_hdmi_imx_platform_driver = {
.probe = dw_hdmi_imx_probe,
.remove = dw_hdmi_imx_remove,
.remove_new = dw_hdmi_imx_remove,
.driver = {
.name = "dwhdmi-imx",
.of_match_table = dw_hdmi_imx_dt_ids,

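The void-returning remove() conversion repeated across the imx, msm, ingenic and shmob hunks below follows one mechanical pattern. A minimal sketch, using a hypothetical "foo" driver (all foo_* names are illustrative, not from this series):

static void foo_remove(struct platform_device *pdev)
{
	struct foo_device *foo = platform_get_drvdata(pdev);

	/* teardown only; there is no meaningful error to return here */
	foo_teardown(foo);
}

static struct platform_driver foo_driver = {
	.probe      = foo_probe,
	/* .remove_new takes the void callback while struct platform_driver
	 * transitions away from the int-returning .remove */
	.remove_new = foo_remove,
};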

@ -292,10 +292,9 @@ static int imx_drm_platform_probe(struct platform_device *pdev)
return ret;
}
static int imx_drm_platform_remove(struct platform_device *pdev)
static void imx_drm_platform_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &imx_drm_ops);
return 0;
}
#ifdef CONFIG_PM_SLEEP
@ -324,7 +323,7 @@ MODULE_DEVICE_TABLE(of, imx_drm_dt_ids);
static struct platform_driver imx_drm_pdrv = {
.probe = imx_drm_platform_probe,
.remove = imx_drm_platform_remove,
.remove_new = imx_drm_platform_remove,
.driver = {
.name = "imx-drm",
.pm = &imx_drm_pm_ops,


@ -737,7 +737,7 @@ free_child:
return ret;
}
static int imx_ldb_remove(struct platform_device *pdev)
static void imx_ldb_remove(struct platform_device *pdev)
{
struct imx_ldb *imx_ldb = platform_get_drvdata(pdev);
int i;
@ -750,12 +750,11 @@ static int imx_ldb_remove(struct platform_device *pdev)
}
component_del(&pdev->dev, &imx_ldb_ops);
return 0;
}
static struct platform_driver imx_ldb_driver = {
.probe = imx_ldb_probe,
.remove = imx_ldb_remove,
.remove_new = imx_ldb_remove,
.driver = {
.of_match_table = imx_ldb_dt_ids,
.name = DRIVER_NAME,


@ -645,10 +645,9 @@ static int imx_tve_probe(struct platform_device *pdev)
return component_add(dev, &imx_tve_ops);
}
static int imx_tve_remove(struct platform_device *pdev)
static void imx_tve_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &imx_tve_ops);
return 0;
}
static const struct of_device_id imx_tve_dt_ids[] = {
@ -659,7 +658,7 @@ MODULE_DEVICE_TABLE(of, imx_tve_dt_ids);
static struct platform_driver imx_tve_driver = {
.probe = imx_tve_probe,
.remove = imx_tve_remove,
.remove_new = imx_tve_remove,
.driver = {
.of_match_table = imx_tve_dt_ids,
.name = "imx-tve",


@ -441,10 +441,9 @@ static int ipu_drm_probe(struct platform_device *pdev)
return component_add(dev, &ipu_crtc_ops);
}
static int ipu_drm_remove(struct platform_device *pdev)
static void ipu_drm_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &ipu_crtc_ops);
return 0;
}
struct platform_driver ipu_drm_driver = {
@ -452,5 +451,5 @@ struct platform_driver ipu_drm_driver = {
.name = "imx-ipuv3-crtc",
},
.probe = ipu_drm_probe,
.remove = ipu_drm_remove,
.remove_new = ipu_drm_remove,
};


@ -353,11 +353,9 @@ static int imx_pd_probe(struct platform_device *pdev)
return component_add(dev, &imx_pd_ops);
}
static int imx_pd_remove(struct platform_device *pdev)
static void imx_pd_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &imx_pd_ops);
return 0;
}
static const struct of_device_id imx_pd_dt_ids[] = {
@ -368,7 +366,7 @@ MODULE_DEVICE_TABLE(of, imx_pd_dt_ids);
static struct platform_driver imx_pd_driver = {
.probe = imx_pd_probe,
.remove = imx_pd_remove,
.remove_new = imx_pd_remove,
.driver = {
.of_match_table = imx_pd_dt_ids,
.name = "imx-parallel-display",


@ -1449,7 +1449,7 @@ static int ingenic_drm_probe(struct platform_device *pdev)
return component_master_add_with_match(dev, &ingenic_master_ops, match);
}
static int ingenic_drm_remove(struct platform_device *pdev)
static void ingenic_drm_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -1457,8 +1457,6 @@ static int ingenic_drm_remove(struct platform_device *pdev)
ingenic_drm_unbind(dev);
else
component_master_del(dev, &ingenic_master_ops);
return 0;
}
static int ingenic_drm_suspend(struct device *dev)
@ -1611,7 +1609,7 @@ static struct platform_driver ingenic_drm_driver = {
.of_match_table = of_match_ptr(ingenic_drm_of_match),
},
.probe = ingenic_drm_probe,
.remove = ingenic_drm_remove,
.remove_new = ingenic_drm_remove,
};
static int ingenic_drm_init(void)


@ -922,10 +922,9 @@ static int ingenic_ipu_probe(struct platform_device *pdev)
return component_add(&pdev->dev, &ingenic_ipu_ops);
}
static int ingenic_ipu_remove(struct platform_device *pdev)
static void ingenic_ipu_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &ingenic_ipu_ops);
return 0;
}
static const u32 jz4725b_ipu_formats[] = {
@ -992,7 +991,7 @@ static struct platform_driver ingenic_ipu_driver = {
.of_match_table = ingenic_ipu_of_match,
},
.probe = ingenic_ipu_probe,
.remove = ingenic_ipu_remove,
.remove_new = ingenic_ipu_remove,
};
struct platform_driver *ingenic_ipu_driver_ptr = &ingenic_ipu_driver;


@ -120,12 +120,14 @@ static int lsdc_pixel_pll_setup(struct lsdc_pixpll * const this)
struct lsdc_pixpll_parms *pparms;
this->mmio = ioremap(this->reg_base, this->reg_size);
if (IS_ERR_OR_NULL(this->mmio))
if (!this->mmio)
return -ENOMEM;
pparms = kzalloc(sizeof(*pparms), GFP_KERNEL);
if (IS_ERR_OR_NULL(pparms))
if (!pparms) {
iounmap(this->mmio);
return -ENOMEM;
}
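	/* ioremap() and kzalloc() return NULL on failure, never an
	 * ERR_PTR(), so the plain NULL checks above are the right form */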
pparms->ref_clock = LSDC_PLL_REF_CLK_KHZ;


@ -751,10 +751,9 @@ static int adreno_probe(struct platform_device *pdev)
return 0;
}
static int adreno_remove(struct platform_device *pdev)
static void adreno_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &a3xx_ops);
return 0;
}
static void adreno_shutdown(struct platform_device *pdev)
@ -869,7 +868,7 @@ static const struct dev_pm_ops adreno_pm_ops = {
static struct platform_driver adreno_driver = {
.probe = adreno_probe,
.remove = adreno_remove,
.remove_new = adreno_remove,
.shutdown = adreno_shutdown,
.driver = {
.name = "adreno",


@ -1302,11 +1302,9 @@ static int dpu_dev_probe(struct platform_device *pdev)
return msm_drv_probe(&pdev->dev, dpu_kms_init);
}
static int dpu_dev_remove(struct platform_device *pdev)
static void dpu_dev_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &msm_drm_ops);
return 0;
}
static int __maybe_unused dpu_runtime_suspend(struct device *dev)
@ -1382,7 +1380,7 @@ MODULE_DEVICE_TABLE(of, dpu_dt_match);
static struct platform_driver dpu_driver = {
.probe = dpu_dev_probe,
.remove = dpu_dev_remove,
.remove_new = dpu_dev_remove,
.shutdown = msm_drv_shutdown,
.driver = {
.name = "msm_dpu",


@ -560,11 +560,9 @@ static int mdp4_probe(struct platform_device *pdev)
return msm_drv_probe(&pdev->dev, mdp4_kms_init);
}
static int mdp4_remove(struct platform_device *pdev)
static void mdp4_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &msm_drm_ops);
return 0;
}
static const struct of_device_id mdp4_dt_match[] = {
@ -575,7 +573,7 @@ MODULE_DEVICE_TABLE(of, mdp4_dt_match);
static struct platform_driver mdp4_platform_driver = {
.probe = mdp4_probe,
.remove = mdp4_remove,
.remove_new = mdp4_remove,
.shutdown = msm_drv_shutdown,
.driver = {
.name = "mdp4",


@ -942,11 +942,10 @@ static int mdp5_dev_probe(struct platform_device *pdev)
return msm_drv_probe(&pdev->dev, mdp5_kms_init);
}
static int mdp5_dev_remove(struct platform_device *pdev)
static void mdp5_dev_remove(struct platform_device *pdev)
{
DBG("");
component_master_del(&pdev->dev, &msm_drm_ops);
return 0;
}
static __maybe_unused int mdp5_runtime_suspend(struct device *dev)
@ -987,7 +986,7 @@ MODULE_DEVICE_TABLE(of, mdp5_dt_match);
static struct platform_driver mdp5_driver = {
.probe = mdp5_dev_probe,
.remove = mdp5_dev_remove,
.remove_new = mdp5_dev_remove,
.shutdown = msm_drv_shutdown,
.driver = {
.name = "msm_mdp",


@ -1296,7 +1296,7 @@ static int dp_display_probe(struct platform_device *pdev)
return rc;
}
static int dp_display_remove(struct platform_device *pdev)
static void dp_display_remove(struct platform_device *pdev)
{
struct dp_display_private *dp = dev_get_dp_display_private(&pdev->dev);
@ -1304,8 +1304,6 @@ static int dp_display_remove(struct platform_device *pdev)
dp_display_deinit_sub_modules(dp);
platform_set_drvdata(pdev, NULL);
return 0;
}
static int dp_pm_resume(struct device *dev)
@ -1415,7 +1413,7 @@ static const struct dev_pm_ops dp_pm_ops = {
static struct platform_driver dp_display_driver = {
.probe = dp_display_probe,
.remove = dp_display_remove,
.remove_new = dp_display_remove,
.driver = {
.name = "msm-dp-display",
.of_match_table = dp_dt_match,


@ -161,14 +161,12 @@ static int dsi_dev_probe(struct platform_device *pdev)
return 0;
}
static int dsi_dev_remove(struct platform_device *pdev)
static void dsi_dev_remove(struct platform_device *pdev)
{
struct msm_dsi *msm_dsi = platform_get_drvdata(pdev);
DBG("");
dsi_destroy(msm_dsi);
return 0;
}
static const struct of_device_id dt_match[] = {
@ -187,7 +185,7 @@ static const struct dev_pm_ops dsi_pm_ops = {
static struct platform_driver dsi_driver = {
.probe = dsi_dev_probe,
.remove = dsi_dev_remove,
.remove_new = dsi_dev_remove,
.driver = {
.name = "msm_dsi",
.of_match_table = dt_match,


@ -551,15 +551,13 @@ err_put_phy:
return ret;
}
static int msm_hdmi_dev_remove(struct platform_device *pdev)
static void msm_hdmi_dev_remove(struct platform_device *pdev)
{
struct hdmi *hdmi = dev_get_drvdata(&pdev->dev);
component_del(&pdev->dev, &msm_hdmi_ops);
msm_hdmi_put_phy(hdmi);
return 0;
}
static const struct of_device_id msm_hdmi_dt_match[] = {
@ -574,7 +572,7 @@ static const struct of_device_id msm_hdmi_dt_match[] = {
static struct platform_driver msm_hdmi_driver = {
.probe = msm_hdmi_dev_probe,
.remove = msm_hdmi_dev_remove,
.remove_new = msm_hdmi_dev_remove,
.driver = {
.name = "hdmi_msm",
.of_match_table = msm_hdmi_dt_match,


@ -177,11 +177,9 @@ static int msm_hdmi_phy_probe(struct platform_device *pdev)
return 0;
}
static int msm_hdmi_phy_remove(struct platform_device *pdev)
static void msm_hdmi_phy_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);
return 0;
}
static const struct of_device_id msm_hdmi_phy_dt_match[] = {
@ -200,7 +198,7 @@ static const struct of_device_id msm_hdmi_phy_dt_match[] = {
static struct platform_driver msm_hdmi_phy_platform_driver = {
.probe = msm_hdmi_phy_probe,
.remove = msm_hdmi_phy_remove,
.remove_new = msm_hdmi_phy_remove,
.driver = {
.name = "msm_hdmi_phy",
.of_match_table = msm_hdmi_phy_dt_match,


@ -1278,11 +1278,9 @@ static int msm_pdev_probe(struct platform_device *pdev)
return msm_drv_probe(&pdev->dev, NULL);
}
static int msm_pdev_remove(struct platform_device *pdev)
static void msm_pdev_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &msm_drm_ops);
return 0;
}
void msm_drv_shutdown(struct platform_device *pdev)
@ -1303,7 +1301,7 @@ void msm_drv_shutdown(struct platform_device *pdev)
static struct platform_driver msm_platform_driver = {
.probe = msm_pdev_probe,
.remove = msm_pdev_remove,
.remove_new = msm_pdev_remove,
.shutdown = msm_drv_shutdown,
.driver = {
.name = "msm",


@ -497,15 +497,13 @@ static int mdss_probe(struct platform_device *pdev)
return 0;
}
static int mdss_remove(struct platform_device *pdev)
static void mdss_remove(struct platform_device *pdev)
{
struct msm_mdss *mdss = platform_get_drvdata(pdev);
of_platform_depopulate(&pdev->dev);
msm_mdss_destroy(mdss);
return 0;
}
static const struct msm_mdss_data msm8998_data = {
@ -629,7 +627,7 @@ MODULE_DEVICE_TABLE(of, mdss_dt_match);
static struct platform_driver mdss_platform_driver = {
.probe = mdss_probe,
.remove = mdss_remove,
.remove_new = mdss_remove,
.driver = {
.name = "msm-mdss",
.of_match_table = mdss_dt_match,


@ -882,21 +882,26 @@ struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder)
static void
nv50_msto_cleanup(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_topology_state *new_mst_state,
struct drm_dp_mst_topology_mgr *mgr,
struct nv50_msto *msto)
{
struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
struct drm_dp_mst_atomic_payload *payload =
drm_atomic_get_mst_payload_state(mst_state, msto->mstc->port);
struct drm_dp_mst_atomic_payload *new_payload =
drm_atomic_get_mst_payload_state(new_mst_state, msto->mstc->port);
struct drm_dp_mst_topology_state *old_mst_state =
drm_atomic_get_old_mst_topology_state(state, mgr);
const struct drm_dp_mst_atomic_payload *old_payload =
drm_atomic_get_mst_payload_state(old_mst_state, msto->mstc->port);
NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name);
if (msto->disabled) {
msto->mstc = NULL;
msto->disabled = false;
drm_dp_remove_payload_part2(mgr, new_mst_state, old_payload, new_payload);
} else if (msto->enabled) {
drm_dp_add_payload_part2(mgr, state, payload);
drm_dp_add_payload_part2(mgr, state, new_payload);
msto->enabled = false;
}
}
@ -910,19 +915,15 @@ nv50_msto_prepare(struct drm_atomic_state *state,
struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
struct nv50_mstc *mstc = msto->mstc;
struct nv50_mstm *mstm = mstc->mstm;
struct drm_dp_mst_topology_state *old_mst_state;
struct drm_dp_mst_atomic_payload *payload, *old_payload;
struct drm_dp_mst_atomic_payload *payload;
NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name);
old_mst_state = drm_atomic_get_old_mst_topology_state(state, mgr);
payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port);
old_payload = drm_atomic_get_mst_payload_state(old_mst_state, mstc->port);
// TODO: Figure out if we want to do a better job of handling VCPI allocation failures here?
if (msto->disabled) {
drm_dp_remove_payload(mgr, mst_state, old_payload, payload);
drm_dp_remove_payload_part1(mgr, mst_state, payload);
nvif_outp_dp_mst_vcpi(&mstm->outp->outp, msto->head->base.index, 0, 0, 0, 0);
} else {


@ -189,21 +189,12 @@ u_free(void *addr)
static inline void *
u_memcpya(uint64_t user, unsigned int nmemb, unsigned int size)
{
void *mem;
void __user *userptr = (void __force __user *)(uintptr_t)user;
void __user *userptr = u64_to_user_ptr(user);
size_t bytes;
size *= nmemb;
mem = kvmalloc(size, GFP_KERNEL);
if (!mem)
return ERR_PTR(-ENOMEM);
if (copy_from_user(mem, userptr, size)) {
u_free(mem);
return ERR_PTR(-EFAULT);
}
return mem;
if (unlikely(check_mul_overflow(nmemb, size, &bytes)))
return NULL;
return vmemdup_user(userptr, bytes);
}
#include <nvif/object.h>

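The rewritten u_memcpya() composes two generic helpers; a condensed sketch of the idiom (demo_dup() is a hypothetical name, the helper calls are existing kernel APIs):

static void *demo_dup(u64 user, unsigned int nmemb, unsigned int size)
{
	size_t bytes;

	/* reject nmemb * size products that would wrap around */
	if (unlikely(check_mul_overflow(nmemb, size, &bytes)))
		return NULL;

	/* kvmalloc() + copy_from_user() + ERR_PTR() handling in one call */
	return vmemdup_user(u64_to_user_ptr(user), bytes);
}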

@ -244,6 +244,17 @@ config DRM_PANEL_JDI_LT070ME05000
The panel has a 1200(RGB)×1920 (WUXGA) resolution and uses
24 bit per pixel.
config DRM_PANEL_JDI_LPM102A188A
tristate "JDI LPM102A188A DSI panel"
depends on OF && GPIOLIB
depends on DRM_MIPI_DSI
depends on BACKLIGHT_CLASS_DEVICE
help
	  Say Y here if you want to enable support for the JDI LPM102A188A DSI
	  command mode panel as found in Google Pixel C devices.
The panel has a 2560×1800 resolution. It provides a MIPI DSI interface
to the host.
config DRM_PANEL_JDI_R63452
tristate "JDI R63452 Full HD DSI panel"
depends on OF


@ -22,6 +22,7 @@ obj-$(CONFIG_DRM_PANEL_INNOLUX_EJ030NA) += panel-innolux-ej030na.o
obj-$(CONFIG_DRM_PANEL_INNOLUX_P079ZCA) += panel-innolux-p079zca.o
obj-$(CONFIG_DRM_PANEL_JADARD_JD9365DA_H3) += panel-jadard-jd9365da-h3.o
obj-$(CONFIG_DRM_PANEL_JDI_LT070ME05000) += panel-jdi-lt070me05000.o
obj-$(CONFIG_DRM_PANEL_JDI_LPM102A188A) += panel-jdi-lpm102a188a.o
obj-$(CONFIG_DRM_PANEL_JDI_R63452) += panel-jdi-fhd-r63452.o
obj-$(CONFIG_DRM_PANEL_KHADAS_TS050) += panel-khadas-ts050.o
obj-$(CONFIG_DRM_PANEL_KINGDISPLAY_KD097D04) += panel-kingdisplay-kd097d04.o


@ -0,0 +1,551 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2014 Google, Inc.
*
* Copyright (C) 2022 Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt>
*
* Adapted from the downstream Pixel C driver written by Sean Paul
*/
#include <linux/backlight.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
#include <video/mipi_display.h>
#include <drm/drm_crtc.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_panel.h>
#define MCS_CMD_ACS_PROT 0xB0
#define MCS_CMD_ACS_PROT_OFF (0 << 0)
#define MCS_PWR_CTRL_FUNC 0xD0
#define MCS_PWR_CTRL_PARAM1_DEFAULT (2 << 0)
#define MCS_PWR_CTRL_PARAM1_VGH_210_DIV (1 << 4)
#define MCS_PWR_CTRL_PARAM1_VGH_240_DIV (2 << 4)
#define MCS_PWR_CTRL_PARAM1_VGH_280_DIV (3 << 4)
#define MCS_PWR_CTRL_PARAM1_VGH_330_DIV (4 << 4)
#define MCS_PWR_CTRL_PARAM1_VGH_410_DIV (5 << 4)
#define MCS_PWR_CTRL_PARAM2_DEFAULT (9 << 4)
#define MCS_PWR_CTRL_PARAM2_VGL_210_DIV (1 << 0)
#define MCS_PWR_CTRL_PARAM2_VGL_240_DIV (2 << 0)
#define MCS_PWR_CTRL_PARAM2_VGL_280_DIV (3 << 0)
#define MCS_PWR_CTRL_PARAM2_VGL_330_DIV (4 << 0)
#define MCS_PWR_CTRL_PARAM2_VGL_410_DIV (5 << 0)
struct jdi_panel {
struct drm_panel base;
struct mipi_dsi_device *link1;
struct mipi_dsi_device *link2;
struct regulator *supply;
struct regulator *ddi_supply;
struct backlight_device *backlight;
struct gpio_desc *enable_gpio;
struct gpio_desc *reset_gpio;
const struct drm_display_mode *mode;
};
static inline struct jdi_panel *to_panel_jdi(struct drm_panel *panel)
{
return container_of(panel, struct jdi_panel, base);
}
static void jdi_wait_frames(struct jdi_panel *jdi, unsigned int frames)
{
unsigned int refresh = drm_mode_vrefresh(jdi->mode);
if (WARN_ON(frames > refresh))
return;
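	/* e.g. frames = 2 at refresh = 60 Hz: msleep(1000 / (60 / 2)) = 33 ms */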
msleep(1000 / (refresh / frames));
}
static int jdi_panel_disable(struct drm_panel *panel)
{
struct jdi_panel *jdi = to_panel_jdi(panel);
backlight_disable(jdi->backlight);
jdi_wait_frames(jdi, 2);
return 0;
}
static int jdi_panel_unprepare(struct drm_panel *panel)
{
struct jdi_panel *jdi = to_panel_jdi(panel);
int ret;
ret = mipi_dsi_dcs_set_display_off(jdi->link1);
if (ret < 0)
dev_err(panel->dev, "failed to set display off: %d\n", ret);
ret = mipi_dsi_dcs_set_display_off(jdi->link2);
if (ret < 0)
dev_err(panel->dev, "failed to set display off: %d\n", ret);
/* Specified by JDI @ 50ms, subject to change */
msleep(50);
ret = mipi_dsi_dcs_enter_sleep_mode(jdi->link1);
if (ret < 0)
dev_err(panel->dev, "failed to enter sleep mode: %d\n", ret);
ret = mipi_dsi_dcs_enter_sleep_mode(jdi->link2);
if (ret < 0)
dev_err(panel->dev, "failed to enter sleep mode: %d\n", ret);
/* Specified by JDI @ 150ms, subject to change */
msleep(150);
gpiod_set_value(jdi->reset_gpio, 1);
/* T4 = 1ms */
usleep_range(1000, 3000);
gpiod_set_value(jdi->enable_gpio, 0);
/* T5 = 2ms */
usleep_range(2000, 4000);
regulator_disable(jdi->ddi_supply);
/* T6 = 2ms plus some time to discharge capacitors */
usleep_range(7000, 9000);
regulator_disable(jdi->supply);
/* Specified by JDI @ 20ms, subject to change */
msleep(20);
return ret;
}
static int jdi_setup_symmetrical_split(struct mipi_dsi_device *left,
struct mipi_dsi_device *right,
const struct drm_display_mode *mode)
{
int err;
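	/* left-right split: each link addresses columns 0 .. hdisplay/2 - 1
	 * (0..1279 for the 2560-wide mode) across all vdisplay lines */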
err = mipi_dsi_dcs_set_column_address(left, 0, mode->hdisplay / 2 - 1);
if (err < 0) {
dev_err(&left->dev, "failed to set column address: %d\n", err);
return err;
}
err = mipi_dsi_dcs_set_column_address(right, 0, mode->hdisplay / 2 - 1);
if (err < 0) {
dev_err(&right->dev, "failed to set column address: %d\n", err);
return err;
}
err = mipi_dsi_dcs_set_page_address(left, 0, mode->vdisplay - 1);
if (err < 0) {
dev_err(&left->dev, "failed to set page address: %d\n", err);
return err;
}
err = mipi_dsi_dcs_set_page_address(right, 0, mode->vdisplay - 1);
if (err < 0) {
dev_err(&right->dev, "failed to set page address: %d\n", err);
return err;
}
return 0;
}
static int jdi_write_dcdc_registers(struct jdi_panel *jdi)
{
/* Clear the manufacturer command access protection */
mipi_dsi_generic_write_seq(jdi->link1, MCS_CMD_ACS_PROT,
MCS_CMD_ACS_PROT_OFF);
mipi_dsi_generic_write_seq(jdi->link2, MCS_CMD_ACS_PROT,
MCS_CMD_ACS_PROT_OFF);
/*
 * Change the VGH/VGL divide ratios to move the noise generated by the
* TCONN. This should hopefully avoid interaction with the backlight
* controller.
*/
mipi_dsi_generic_write_seq(jdi->link1, MCS_PWR_CTRL_FUNC,
MCS_PWR_CTRL_PARAM1_VGH_330_DIV |
MCS_PWR_CTRL_PARAM1_DEFAULT,
MCS_PWR_CTRL_PARAM2_VGL_410_DIV |
MCS_PWR_CTRL_PARAM2_DEFAULT);
mipi_dsi_generic_write_seq(jdi->link2, MCS_PWR_CTRL_FUNC,
MCS_PWR_CTRL_PARAM1_VGH_330_DIV |
MCS_PWR_CTRL_PARAM1_DEFAULT,
MCS_PWR_CTRL_PARAM2_VGL_410_DIV |
MCS_PWR_CTRL_PARAM2_DEFAULT);
return 0;
}
static int jdi_panel_prepare(struct drm_panel *panel)
{
struct jdi_panel *jdi = to_panel_jdi(panel);
int err;
/* Disable backlight to avoid showing random pixels
* with a conservative delay for it to take effect.
*/
backlight_disable(jdi->backlight);
jdi_wait_frames(jdi, 3);
jdi->link1->mode_flags |= MIPI_DSI_MODE_LPM;
jdi->link2->mode_flags |= MIPI_DSI_MODE_LPM;
err = regulator_enable(jdi->supply);
if (err < 0) {
dev_err(panel->dev, "failed to enable supply: %d\n", err);
return err;
}
/* T1 = 2ms */
usleep_range(2000, 4000);
err = regulator_enable(jdi->ddi_supply);
if (err < 0) {
dev_err(panel->dev, "failed to enable ddi_supply: %d\n", err);
goto supply_off;
}
/* T2 = 1ms */
usleep_range(1000, 3000);
gpiod_set_value(jdi->enable_gpio, 1);
/* T3 = 10ms */
usleep_range(10000, 15000);
gpiod_set_value(jdi->reset_gpio, 0);
/* Specified by JDI @ 3ms, subject to change */
usleep_range(3000, 5000);
/*
* TODO: The device supports both left-right and even-odd split
* configurations, but this driver currently supports only the left-
* right split. To support a different mode a mechanism needs to be
* put in place to communicate the configuration back to the DSI host
* controller.
*/
err = jdi_setup_symmetrical_split(jdi->link1, jdi->link2,
jdi->mode);
if (err < 0) {
dev_err(panel->dev, "failed to set up symmetrical split: %d\n",
err);
goto poweroff;
}
err = mipi_dsi_dcs_set_tear_scanline(jdi->link1,
jdi->mode->vdisplay - 16);
if (err < 0) {
dev_err(panel->dev, "failed to set tear scanline: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_set_tear_scanline(jdi->link2,
jdi->mode->vdisplay - 16);
if (err < 0) {
dev_err(panel->dev, "failed to set tear scanline: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_set_tear_on(jdi->link1,
MIPI_DSI_DCS_TEAR_MODE_VBLANK);
if (err < 0) {
dev_err(panel->dev, "failed to set tear on: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_set_tear_on(jdi->link2,
MIPI_DSI_DCS_TEAR_MODE_VBLANK);
if (err < 0) {
dev_err(panel->dev, "failed to set tear on: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_set_pixel_format(jdi->link1, MIPI_DCS_PIXEL_FMT_24BIT);
if (err < 0) {
dev_err(panel->dev, "failed to set pixel format: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_set_pixel_format(jdi->link2, MIPI_DCS_PIXEL_FMT_24BIT);
if (err < 0) {
dev_err(panel->dev, "failed to set pixel format: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_exit_sleep_mode(jdi->link1);
if (err < 0) {
dev_err(panel->dev, "failed to exit sleep mode: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_exit_sleep_mode(jdi->link2);
if (err < 0) {
dev_err(panel->dev, "failed to exit sleep mode: %d\n", err);
goto poweroff;
}
err = jdi_write_dcdc_registers(jdi);
if (err < 0) {
dev_err(panel->dev, "failed to write dcdc registers: %d\n", err);
goto poweroff;
}
/*
* We need to wait 150ms between mipi_dsi_dcs_exit_sleep_mode() and
* mipi_dsi_dcs_set_display_on().
*/
msleep(150);
err = mipi_dsi_dcs_set_display_on(jdi->link1);
if (err < 0) {
dev_err(panel->dev, "failed to set display on: %d\n", err);
goto poweroff;
}
err = mipi_dsi_dcs_set_display_on(jdi->link2);
if (err < 0) {
dev_err(panel->dev, "failed to set display on: %d\n", err);
goto poweroff;
}
jdi->link1->mode_flags &= ~MIPI_DSI_MODE_LPM;
jdi->link2->mode_flags &= ~MIPI_DSI_MODE_LPM;
return 0;
poweroff:
regulator_disable(jdi->ddi_supply);
/* T6 = 2ms plus some time to discharge capacitors */
usleep_range(7000, 9000);
supply_off:
regulator_disable(jdi->supply);
/* Specified by JDI @ 20ms, subject to change */
msleep(20);
return err;
}
static int jdi_panel_enable(struct drm_panel *panel)
{
struct jdi_panel *jdi = to_panel_jdi(panel);
/*
* Ensure we send image data before turning the backlight
* on, to avoid the display showing random pixels.
*/
jdi_wait_frames(jdi, 3);
backlight_enable(jdi->backlight);
return 0;
}
static const struct drm_display_mode default_mode = {
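	/* pixel clock: htotal (2800) * vtotal (1812) * 60 Hz / 1000 = 304416 kHz */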
.clock = (2560 + 80 + 80 + 80) * (1800 + 4 + 4 + 4) * 60 / 1000,
.hdisplay = 2560,
.hsync_start = 2560 + 80,
.hsync_end = 2560 + 80 + 80,
.htotal = 2560 + 80 + 80 + 80,
.vdisplay = 1800,
.vsync_start = 1800 + 4,
.vsync_end = 1800 + 4 + 4,
.vtotal = 1800 + 4 + 4 + 4,
.flags = 0,
};
static int jdi_panel_get_modes(struct drm_panel *panel,
struct drm_connector *connector)
{
struct drm_display_mode *mode;
struct jdi_panel *jdi = to_panel_jdi(panel);
struct device *dev = &jdi->link1->dev;
mode = drm_mode_duplicate(connector->dev, &default_mode);
if (!mode) {
		dev_err(dev, "failed to add mode %ux%u@%u\n",
default_mode.hdisplay, default_mode.vdisplay,
drm_mode_vrefresh(&default_mode));
return -ENOMEM;
}
drm_mode_set_name(mode);
drm_mode_probed_add(connector, mode);
connector->display_info.width_mm = 211;
connector->display_info.height_mm = 148;
connector->display_info.bpc = 8;
return 1;
}
static const struct drm_panel_funcs jdi_panel_funcs = {
.prepare = jdi_panel_prepare,
.enable = jdi_panel_enable,
.disable = jdi_panel_disable,
.unprepare = jdi_panel_unprepare,
.get_modes = jdi_panel_get_modes,
};
static const struct of_device_id jdi_of_match[] = {
{ .compatible = "jdi,lpm102a188a", },
{ }
};
MODULE_DEVICE_TABLE(of, jdi_of_match);
static int jdi_panel_add(struct jdi_panel *jdi)
{
struct device *dev = &jdi->link1->dev;
jdi->mode = &default_mode;
jdi->supply = devm_regulator_get(dev, "power");
if (IS_ERR(jdi->supply))
return dev_err_probe(dev, PTR_ERR(jdi->supply),
"failed to get power regulator\n");
jdi->ddi_supply = devm_regulator_get(dev, "ddi");
if (IS_ERR(jdi->ddi_supply))
return dev_err_probe(dev, PTR_ERR(jdi->ddi_supply),
"failed to get ddi regulator\n");
jdi->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(jdi->reset_gpio))
return dev_err_probe(dev, PTR_ERR(jdi->reset_gpio),
"failed to get reset gpio\n");
/* T4 = 1ms */
usleep_range(1000, 3000);
jdi->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
if (IS_ERR(jdi->enable_gpio))
return dev_err_probe(dev, PTR_ERR(jdi->enable_gpio),
"failed to get enable gpio\n");
/* T5 = 2ms */
usleep_range(2000, 4000);
jdi->backlight = devm_of_find_backlight(dev);
if (IS_ERR(jdi->backlight))
return dev_err_probe(dev, PTR_ERR(jdi->backlight),
"failed to create backlight\n");
drm_panel_init(&jdi->base, &jdi->link1->dev, &jdi_panel_funcs,
DRM_MODE_CONNECTOR_DSI);
drm_panel_add(&jdi->base);
return 0;
}
static void jdi_panel_del(struct jdi_panel *jdi)
{
if (jdi->base.dev)
drm_panel_remove(&jdi->base);
if (jdi->link2)
put_device(&jdi->link2->dev);
}
static int jdi_panel_dsi_probe(struct mipi_dsi_device *dsi)
{
struct mipi_dsi_device *secondary = NULL;
struct jdi_panel *jdi;
struct device_node *np;
int err;
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = 0;
	/* the presence of a link2 phandle marks this node as DSI-LINK1 */
np = of_parse_phandle(dsi->dev.of_node, "link2", 0);
if (np) {
secondary = of_find_mipi_dsi_device_by_node(np);
of_node_put(np);
if (!secondary)
return -EPROBE_DEFER;
}
/* register a panel for only the DSI-LINK1 interface */
if (secondary) {
jdi = devm_kzalloc(&dsi->dev, sizeof(*jdi), GFP_KERNEL);
if (!jdi) {
put_device(&secondary->dev);
return -ENOMEM;
}
mipi_dsi_set_drvdata(dsi, jdi);
jdi->link1 = dsi;
jdi->link2 = secondary;
err = jdi_panel_add(jdi);
if (err < 0) {
put_device(&secondary->dev);
return err;
}
}
err = mipi_dsi_attach(dsi);
if (err < 0) {
if (secondary)
jdi_panel_del(jdi);
return err;
}
return 0;
}
static void jdi_panel_dsi_remove(struct mipi_dsi_device *dsi)
{
struct jdi_panel *jdi = mipi_dsi_get_drvdata(dsi);
int err;
/* only detach from host for the DSI-LINK2 interface */
	if (!jdi) {
		mipi_dsi_detach(dsi);
		return;
	}
err = jdi_panel_disable(&jdi->base);
if (err < 0)
dev_err(&dsi->dev, "failed to disable panel: %d\n", err);
err = mipi_dsi_detach(dsi);
if (err < 0)
dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", err);
jdi_panel_del(jdi);
}
static void jdi_panel_dsi_shutdown(struct mipi_dsi_device *dsi)
{
struct jdi_panel *jdi = mipi_dsi_get_drvdata(dsi);
if (!jdi)
return;
jdi_panel_disable(&jdi->base);
}
static struct mipi_dsi_driver jdi_panel_dsi_driver = {
.driver = {
.name = "panel-jdi-lpm102a188a",
.of_match_table = jdi_of_match,
},
.probe = jdi_panel_dsi_probe,
.remove = jdi_panel_dsi_remove,
.shutdown = jdi_panel_dsi_shutdown,
};
module_mipi_dsi_driver(jdi_panel_dsi_driver);
MODULE_AUTHOR("Sean Paul <seanpaul@chromium.org>");
MODULE_AUTHOR("Diogo Ivo <diogo.ivo@tecnico.ulisboa.pt>");
MODULE_DESCRIPTION("DRM Driver for JDI LPM102A188A DSI panel, command mode");
MODULE_LICENSE("GPL");


@ -5,10 +5,6 @@
*
* Copyright (C) 2016 Linaro Ltd
* Author: Sumit Semwal <sumit.semwal@linaro.org>
*
* From internet archives, the panel for Nexus 7 2nd Gen, 2013 model is a
* JDI model LT070ME05000, and its data sheet is at:
* http://panelone.net/en/7-0-inch/JDI_LT070ME05000_7.0_inch-datasheet
*/
#include <linux/backlight.h>


@ -2793,6 +2793,32 @@ static const struct panel_desc mitsubishi_aa070mc01 = {
.bus_flags = DRM_BUS_FLAG_DE_HIGH,
};
static const struct drm_display_mode mitsubishi_aa084xe01_mode = {
.clock = 56234,
.hdisplay = 1024,
.hsync_start = 1024 + 24,
.hsync_end = 1024 + 24 + 63,
.htotal = 1024 + 24 + 63 + 1,
.vdisplay = 768,
.vsync_start = 768 + 3,
.vsync_end = 768 + 3 + 6,
.vtotal = 768 + 3 + 6 + 1,
.flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
};
static const struct panel_desc mitsubishi_aa084xe01 = {
.modes = &mitsubishi_aa084xe01_mode,
.num_modes = 1,
.bpc = 8,
.size = {
.width = 1024,
.height = 768,
},
.bus_format = MEDIA_BUS_FMT_RGB565_1X16,
.connector_type = DRM_MODE_CONNECTOR_DPI,
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE,
};
static const struct display_timing multi_inno_mi0700s4t_6_timing = {
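This i915 change and the nouveau change further down are two halves of the same interface split: the old single-shot drm_dp_remove_payload() becomes two phases bracketing the ACT. The call order, sketched from the hunks above (not verbatim driver code):

/* disable stage: drop the payload before the transcoder goes down */
drm_dp_remove_payload_part1(&intel_dp->mst_mgr, new_mst_state, new_payload);

/* ... transcoder disabled, ACT handshake completed ... */

/* post-disable stage: finish the removal once the ACT has been sent */
drm_dp_remove_payload_part2(&intel_dp->mst_mgr, new_mst_state,
			    old_payload, new_payload);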
.pixelclock = { 29000000, 33000000, 38000000 },
.hactive = { 800, 800, 800 },
@ -4321,6 +4347,9 @@ static const struct of_device_id platform_of_match[] = {
}, {
.compatible = "mitsubishi,aa070mc01-ca1",
.data = &mitsubishi_aa070mc01,
}, {
.compatible = "mitsubishi,aa084xe01",
.data = &mitsubishi_aa084xe01,
}, {
.compatible = "multi-inno,mi0700s4t-6",
.data = &multi_inno_mi0700s4t_6,


@ -390,8 +390,8 @@ int panfrost_gpu_init(struct panfrost_device *pfdev)
dma_set_max_seg_size(pfdev->dev, UINT_MAX);
irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu");
if (irq <= 0)
return -ENODEV;
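	/* platform_get_irq_byname() never returns 0, so propagating the
	 * negative errno keeps -EPROBE_DEFER and the real cause intact */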
if (irq < 0)
return irq;
err = devm_request_irq(pfdev->dev, irq, panfrost_gpu_irq_handler,
IRQF_SHARED, KBUILD_MODNAME "-gpu", pfdev);


@ -810,8 +810,8 @@ int panfrost_job_init(struct panfrost_device *pfdev)
spin_lock_init(&js->job_lock);
js->irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "job");
if (js->irq <= 0)
return -ENODEV;
if (js->irq < 0)
return js->irq;
ret = devm_request_threaded_irq(pfdev->dev, js->irq,
panfrost_job_irq_handler,


@ -755,8 +755,8 @@ int panfrost_mmu_init(struct panfrost_device *pfdev)
int err, irq;
irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "mmu");
if (irq <= 0)
return -ENODEV;
if (irq < 0)
return irq;
err = devm_request_threaded_irq(pfdev->dev, irq,
panfrost_mmu_irq_handler,


@ -172,7 +172,7 @@ static DEFINE_SIMPLE_DEV_PM_OPS(shmob_drm_pm_ops,
* Platform driver
*/
static int shmob_drm_remove(struct platform_device *pdev)
static void shmob_drm_remove(struct platform_device *pdev)
{
struct shmob_drm_device *sdev = platform_get_drvdata(pdev);
struct drm_device *ddev = sdev->ddev;
@ -181,8 +181,6 @@ static int shmob_drm_remove(struct platform_device *pdev)
drm_kms_helper_poll_fini(ddev);
free_irq(sdev->irq, ddev);
drm_dev_put(ddev);
return 0;
}
static int shmob_drm_probe(struct platform_device *pdev)
@ -288,7 +286,7 @@ err_free_drm_dev:
static struct platform_driver shmob_drm_platform_driver = {
.probe = shmob_drm_probe,
.remove = shmob_drm_remove,
.remove_new = shmob_drm_remove,
.driver = {
.name = "shmob-drm",
.pm = pm_sleep_ptr(&shmob_drm_pm_ops),


@ -198,6 +198,11 @@
#define RK3568_DSI1_TURNDISABLE BIT(2)
#define RK3568_DSI1_FORCERXMODE BIT(0)
#define RV1126_GRF_DSIPHY_CON 0x10220
#define RV1126_DSI_FORCETXSTOPMODE (0xf << 4)
#define RV1126_DSI_TURNDISABLE BIT(2)
#define RV1126_DSI_FORCERXMODE BIT(0)
#define HIWORD_UPDATE(val, mask) (val | (mask) << 16)
enum {
@ -1651,6 +1656,18 @@ static const struct rockchip_dw_dsi_chip_data rk3568_chip_data[] = {
{ /* sentinel */ }
};
static const struct rockchip_dw_dsi_chip_data rv1126_chip_data[] = {
{
.reg = 0xffb30000,
.lanecfg1_grf_reg = RV1126_GRF_DSIPHY_CON,
.lanecfg1 = HIWORD_UPDATE(0, RV1126_DSI_TURNDISABLE |
RV1126_DSI_FORCERXMODE |
RV1126_DSI_FORCETXSTOPMODE),
.max_data_lanes = 4,
},
{ /* sentinel */ }
};
static const struct of_device_id dw_mipi_dsi_rockchip_dt_ids[] = {
{
.compatible = "rockchip,px30-mipi-dsi",
@ -1664,6 +1681,9 @@ static const struct of_device_id dw_mipi_dsi_rockchip_dt_ids[] = {
}, {
.compatible = "rockchip,rk3568-mipi-dsi",
.data = &rk3568_chip_data,
}, {
.compatible = "rockchip,rv1126-mipi-dsi",
.data = &rv1126_chip_data,
},
{ /* sentinel */ }
};


@ -765,11 +765,6 @@ out:
}
}
static void vop_plane_destroy(struct drm_plane *plane)
{
drm_plane_cleanup(plane);
}
static inline bool rockchip_afbc(u64 modifier)
{
return modifier == ROCKCHIP_AFBC_MOD;
@ -1131,7 +1126,7 @@ static const struct drm_plane_helper_funcs plane_helper_funcs = {
static const struct drm_plane_funcs vop_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = vop_plane_destroy,
.destroy = drm_plane_cleanup,
.reset = drm_atomic_helper_plane_reset,
.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
@ -1602,11 +1597,6 @@ static const struct drm_crtc_helper_funcs vop_crtc_helper_funcs = {
.atomic_disable = vop_crtc_atomic_disable,
};
static void vop_crtc_destroy(struct drm_crtc *crtc)
{
drm_crtc_cleanup(crtc);
}
static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc)
{
struct rockchip_crtc_state *rockchip_state;
@ -1614,7 +1604,8 @@ static struct drm_crtc_state *vop_crtc_duplicate_state(struct drm_crtc *crtc)
if (WARN_ON(!crtc->state))
return NULL;
rockchip_state = kzalloc(sizeof(*rockchip_state), GFP_KERNEL);
rockchip_state = kmemdup(to_rockchip_crtc_state(crtc->state),
sizeof(*rockchip_state), GFP_KERNEL);
if (!rockchip_state)
return NULL;
@ -1639,7 +1630,10 @@ static void vop_crtc_reset(struct drm_crtc *crtc)
if (crtc->state)
vop_crtc_destroy_state(crtc, crtc->state);
__drm_atomic_helper_crtc_reset(crtc, &crtc_state->base);
if (crtc_state)
__drm_atomic_helper_crtc_reset(crtc, &crtc_state->base);
else
__drm_atomic_helper_crtc_reset(crtc, NULL);
}
#ifdef CONFIG_DRM_ANALOGIX_DP
@ -1710,7 +1704,7 @@ vop_crtc_verify_crc_source(struct drm_crtc *crtc, const char *source_name,
static const struct drm_crtc_funcs vop_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.page_flip = drm_atomic_helper_page_flip,
.destroy = vop_crtc_destroy,
.destroy = drm_crtc_cleanup,
.reset = vop_crtc_reset,
.atomic_duplicate_state = vop_crtc_duplicate_state,
.atomic_destroy_state = vop_crtc_destroy_state,
@ -1961,7 +1955,7 @@ static void vop_destroy_crtc(struct vop *vop)
*/
list_for_each_entry_safe(plane, tmp, &drm_dev->mode_config.plane_list,
head)
vop_plane_destroy(plane);
drm_plane_cleanup(plane);
/*
* Destroy CRTC after vop_plane_destroy() since vop_disable_plane()


@ -2079,30 +2079,15 @@ static const struct drm_crtc_helper_funcs vop2_crtc_helper_funcs = {
.atomic_disable = vop2_crtc_atomic_disable,
};
static void vop2_crtc_reset(struct drm_crtc *crtc)
{
struct rockchip_crtc_state *vcstate = to_rockchip_crtc_state(crtc->state);
if (crtc->state) {
__drm_atomic_helper_crtc_destroy_state(crtc->state);
kfree(vcstate);
}
vcstate = kzalloc(sizeof(*vcstate), GFP_KERNEL);
if (!vcstate)
return;
crtc->state = &vcstate->base;
crtc->state->crtc = crtc;
}
static struct drm_crtc_state *vop2_crtc_duplicate_state(struct drm_crtc *crtc)
{
struct rockchip_crtc_state *vcstate, *old_vcstate;
struct rockchip_crtc_state *vcstate;
old_vcstate = to_rockchip_crtc_state(crtc->state);
if (WARN_ON(!crtc->state))
return NULL;
vcstate = kmemdup(old_vcstate, sizeof(*old_vcstate), GFP_KERNEL);
vcstate = kmemdup(to_rockchip_crtc_state(crtc->state),
sizeof(*vcstate), GFP_KERNEL);
if (!vcstate)
return NULL;
@ -2120,6 +2105,20 @@ static void vop2_crtc_destroy_state(struct drm_crtc *crtc,
kfree(vcstate);
}
static void vop2_crtc_reset(struct drm_crtc *crtc)
{
struct rockchip_crtc_state *vcstate =
kzalloc(sizeof(*vcstate), GFP_KERNEL);
if (crtc->state)
vop2_crtc_destroy_state(crtc, crtc->state);
if (vcstate)
__drm_atomic_helper_crtc_reset(crtc, &vcstate->base);
else
__drm_atomic_helper_crtc_reset(crtc, NULL);
}
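Both VOP drivers now funnel reset through __drm_atomic_helper_crtc_reset(), handing it NULL when the state allocation fails; condensed into one function (demo_* names are illustrative):

static void demo_crtc_reset(struct drm_crtc *crtc)
{
	struct rockchip_crtc_state *s = kzalloc(sizeof(*s), GFP_KERNEL);

	if (crtc->state)
		demo_crtc_destroy_state(crtc, crtc->state);

	/* on allocation failure the helper simply leaves crtc->state
	 * cleared instead of the driver dereferencing a NULL state */
	__drm_atomic_helper_crtc_reset(crtc, s ? &s->base : NULL);
}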
static const struct drm_crtc_funcs vop2_crtc_funcs = {
.set_config = drm_atomic_helper_set_config,
.page_flip = drm_atomic_helper_page_flip,


@ -1120,6 +1120,59 @@ static const struct vop_data rk3328_vop = {
.max_output = { 4096, 2160 },
};
static const struct vop_common rv1126_common = {
.standby = VOP_REG_SYNC(PX30_SYS_CTRL2, 0x1, 1),
.out_mode = VOP_REG(PX30_DSP_CTRL2, 0xf, 16),
.dsp_blank = VOP_REG(PX30_DSP_CTRL2, 0x1, 14),
.dither_down_en = VOP_REG(PX30_DSP_CTRL2, 0x1, 8),
.dither_down_sel = VOP_REG(PX30_DSP_CTRL2, 0x1, 7),
.dither_down_mode = VOP_REG(PX30_DSP_CTRL2, 0x1, 6),
.cfg_done = VOP_REG_SYNC(PX30_REG_CFG_DONE, 0x1, 0),
.dither_up = VOP_REG(PX30_DSP_CTRL2, 0x1, 2),
.dsp_lut_en = VOP_REG(PX30_DSP_CTRL2, 0x1, 5),
.gate_en = VOP_REG(PX30_DSP_CTRL2, 0x1, 0),
};
static const struct vop_modeset rv1126_modeset = {
.htotal_pw = VOP_REG(PX30_DSP_HTOTAL_HS_END, 0x0fff0fff, 0),
.hact_st_end = VOP_REG(PX30_DSP_HACT_ST_END, 0x0fff0fff, 0),
.vtotal_pw = VOP_REG(PX30_DSP_VTOTAL_VS_END, 0x0fff0fff, 0),
.vact_st_end = VOP_REG(PX30_DSP_VACT_ST_END, 0x0fff0fff, 0),
};
static const struct vop_output rv1126_output = {
.rgb_dclk_pol = VOP_REG(PX30_DSP_CTRL0, 0x1, 1),
.rgb_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0x7, 2),
.rgb_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 0),
.mipi_dclk_pol = VOP_REG(PX30_DSP_CTRL0, 0x1, 25),
.mipi_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0x7, 26),
.mipi_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 24),
};
static const struct vop_misc rv1126_misc = {
.global_regdone_en = VOP_REG(PX30_SYS_CTRL2, 0x1, 13),
};
static const struct vop_win_data rv1126_vop_win_data[] = {
{ .base = 0x00, .phy = &px30_win0_data,
.type = DRM_PLANE_TYPE_OVERLAY },
{ .base = 0x00, .phy = &px30_win2_data,
.type = DRM_PLANE_TYPE_PRIMARY },
};
static const struct vop_data rv1126_vop = {
.version = VOP_VERSION(2, 0xb),
.intr = &px30_intr,
.common = &rv1126_common,
.modeset = &rv1126_modeset,
.output = &rv1126_output,
.misc = &rv1126_misc,
.win = rv1126_vop_win_data,
.win_size = ARRAY_SIZE(rv1126_vop_win_data),
.max_output = { 1920, 1080 },
.lut_size = 1024,
};
static const struct of_device_id vop_driver_dt_match[] = {
{ .compatible = "rockchip,rk3036-vop",
.data = &rk3036_vop },
@ -1147,6 +1200,8 @@ static const struct of_device_id vop_driver_dt_match[] = {
.data = &rk3228_vop },
{ .compatible = "rockchip,rk3328-vop",
.data = &rk3328_vop },
{ .compatible = "rockchip,rv1126-vop",
.data = &rv1126_vop },
{},
};
MODULE_DEVICE_TABLE(of, vop_driver_dt_match);


@ -272,8 +272,8 @@ static int ssd130x_pwm_enable(struct ssd130x_device *ssd130x)
/* Enable the PWM */
pwm_enable(ssd130x->pwm);
dev_dbg(dev, "Using PWM%d with a %lluns period.\n",
ssd130x->pwm->pwm, pwm_get_period(ssd130x->pwm));
dev_dbg(dev, "Using PWM %s with a %lluns period.\n",
ssd130x->pwm->label, pwm_get_period(ssd130x->pwm));
return 0;
}
@ -553,14 +553,45 @@ static int ssd130x_update_rect(struct ssd130x_device *ssd130x,
static void ssd130x_clear_screen(struct ssd130x_device *ssd130x,
struct ssd130x_plane_state *ssd130x_state)
{
struct drm_rect fullscreen = {
.x1 = 0,
.x2 = ssd130x->width,
.y1 = 0,
.y2 = ssd130x->height,
};
unsigned int page_height = ssd130x->device_info->page_height;
unsigned int pages = DIV_ROUND_UP(ssd130x->height, page_height);
u8 *data_array = ssd130x_state->data_array;
unsigned int width = ssd130x->width;
int ret, i;
ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
if (!ssd130x->page_address_mode) {
memset(data_array, 0, width * pages);
/* Set address range for horizontal addressing mode */
ret = ssd130x_set_col_range(ssd130x, ssd130x->col_offset, width);
if (ret < 0)
return;
ret = ssd130x_set_page_range(ssd130x, ssd130x->page_offset, pages);
if (ret < 0)
return;
/* Write out update in one go if we aren't using page addressing mode */
ssd130x_write_data(ssd130x, data_array, width * pages);
} else {
/*
* In page addressing mode, the start address needs to be reset,
* and each page then needs to be written out separately.
*/
memset(data_array, 0, width);
for (i = 0; i < pages; i++) {
ret = ssd130x_set_page_pos(ssd130x,
ssd130x->page_offset + i,
ssd130x->col_offset);
if (ret < 0)
return;
ret = ssd130x_write_data(ssd130x, data_array, width);
if (ret < 0)
return;
}
}
}
static int ssd130x_fb_blit_rect(struct drm_plane_state *state,


@ -40,8 +40,8 @@ struct ssd130x_deviceinfo {
u32 default_width;
u32 default_height;
u32 page_height;
int need_pwm;
int need_chargepump;
bool need_pwm;
bool need_chargepump;
bool page_mode_only;
};


@ -1746,8 +1746,15 @@ static void tegra_dc_early_unregister(struct drm_crtc *crtc)
unsigned int count = ARRAY_SIZE(debugfs_files);
struct drm_minor *minor = crtc->dev->primary;
struct tegra_dc *dc = to_tegra_dc(crtc);
struct dentry *root;
drm_debugfs_remove_files(dc->debugfs_files, count, minor);
#ifdef CONFIG_DEBUG_FS
root = crtc->debugfs_entry;
#else
root = NULL;
#endif
drm_debugfs_remove_files(dc->debugfs_files, count, root, minor);
kfree(dc->debugfs_files);
dc->debugfs_files = NULL;
}
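All four Tegra hunks adapt to the same signature change: drm_debugfs_remove_files() now takes the debugfs root the files were created under in addition to the minor. The call shape implied by these call sites (reconstructed from the callers, not quoted from the header):

drm_debugfs_remove_files(debugfs_files, count, root, minor);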


@ -256,6 +256,7 @@ static void tegra_dsi_early_unregister(struct drm_connector *connector)
struct tegra_dsi *dsi = to_dsi(output);
drm_debugfs_remove_files(dsi->debugfs_files, count,
connector->debugfs_entry,
connector->dev->primary);
kfree(dsi->debugfs_files);
dsi->debugfs_files = NULL;


@ -1116,7 +1116,8 @@ static void tegra_hdmi_early_unregister(struct drm_connector *connector)
unsigned int count = ARRAY_SIZE(debugfs_files);
struct tegra_hdmi *hdmi = to_hdmi(output);
drm_debugfs_remove_files(hdmi->debugfs_files, count, minor);
drm_debugfs_remove_files(hdmi->debugfs_files, count,
connector->debugfs_entry, minor);
kfree(hdmi->debugfs_files);
hdmi->debugfs_files = NULL;
}

View File

@ -1708,6 +1708,7 @@ static void tegra_sor_early_unregister(struct drm_connector *connector)
struct tegra_sor *sor = to_sor(output);
drm_debugfs_remove_files(sor->debugfs_files, count,
connector->debugfs_entry,
connector->dev->primary);
kfree(sor->debugfs_files);
sor->debugfs_files = NULL;

File diff suppressed because it is too large.


@ -949,7 +949,7 @@ static int repaper_probe(struct spi_device *spi)
match = device_get_match_data(dev);
if (match) {
model = (enum repaper_model)match;
model = (enum repaper_model)(uintptr_t)match;
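		/* double cast via uintptr_t: the match data arrives as a
		 * pointer, which may be wider than the enum on 64-bit */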
} else {
spi_id = spi_get_device_id(spi);
model = (enum repaper_model)spi_id->driver_data;


@ -344,8 +344,6 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *obj,
struct virtio_gpu_mem_entry *ents,
unsigned int nents);
int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev);
int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev);
void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
struct virtio_gpu_output *output);
int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev);
@ -394,11 +392,8 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_fence *fence);
void virtio_gpu_ctrl_ack(struct virtqueue *vq);
void virtio_gpu_cursor_ack(struct virtqueue *vq);
void virtio_gpu_fence_ack(struct virtqueue *vq);
void virtio_gpu_dequeue_ctrl_func(struct work_struct *work);
void virtio_gpu_dequeue_cursor_func(struct work_struct *work);
void virtio_gpu_dequeue_fence_func(struct work_struct *work);
void virtio_gpu_notify(struct virtio_gpu_device *vgdev);
int
@ -465,8 +460,6 @@ struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
int flags);
struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
struct dma_buf *buf);
int virtgpu_gem_prime_get_uuid(struct drm_gem_object *obj,
uuid_t *uuid);
struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
struct drm_device *dev, struct dma_buf_attachment *attach,
struct sg_table *sgt);


@ -878,11 +878,7 @@ config HID_PICOLCD_FB
default !EXPERT
depends on HID_PICOLCD
depends on HID_PICOLCD=FB || FB=y
select FB_DEFERRED_IO
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
select FB_SYS_FOPS
select FB_SYSMEM_HELPERS_DEFERRED
help
Provide access to PicoLCD's 256x64 monochrome display via a
framebuffer device.
@ -1044,7 +1040,7 @@ config HID_SONY
* Guitar Hero PS3 and PC guitar dongles
config SONY_FF
bool "Sony PS2/3/4 accessories force feedback support"
bool "Sony PS2/3/4 accessories force feedback support"
depends on HID_SONY
select INPUT_FF_MEMLESS
help


@ -283,54 +283,6 @@ out:
mutex_unlock(&info->lock);
}
/* Stub to call the system default and update the image on the picoLCD */
static void picolcd_fb_fillrect(struct fb_info *info,
const struct fb_fillrect *rect)
{
if (!info->par)
return;
sys_fillrect(info, rect);
schedule_delayed_work(&info->deferred_work, 0);
}
/* Stub to call the system default and update the image on the picoLCD */
static void picolcd_fb_copyarea(struct fb_info *info,
const struct fb_copyarea *area)
{
if (!info->par)
return;
sys_copyarea(info, area);
schedule_delayed_work(&info->deferred_work, 0);
}
/* Stub to call the system default and update the image on the picoLCD */
static void picolcd_fb_imageblit(struct fb_info *info, const struct fb_image *image)
{
if (!info->par)
return;
sys_imageblit(info, image);
schedule_delayed_work(&info->deferred_work, 0);
}
/*
* this is the slow path from userspace. they can seek and write to
* the fb. it's inefficient to do anything less than a full screen draw
*/
static ssize_t picolcd_fb_write(struct fb_info *info, const char __user *buf,
size_t count, loff_t *ppos)
{
ssize_t ret;
if (!info->par)
return -ENODEV;
ret = fb_sys_write(info, buf, count, ppos);
if (ret >= 0)
schedule_delayed_work(&info->deferred_work, 0);
return ret;
}
static int picolcd_fb_blank(int blank, struct fb_info *info)
{
/* We let fb notification do this for us via lcd/backlight device */
@ -417,18 +369,31 @@ static int picolcd_set_par(struct fb_info *info)
return 0;
}
static void picolcdfb_ops_damage_range(struct fb_info *info, off_t off, size_t len)
{
if (!info->par)
return;
schedule_delayed_work(&info->deferred_work, 0);
}
static void picolcdfb_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height)
{
if (!info->par)
return;
schedule_delayed_work(&info->deferred_work, 0);
}
FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(picolcdfb_ops,
picolcdfb_ops_damage_range,
picolcdfb_ops_damage_area)
static const struct fb_ops picolcdfb_ops = {
.owner = THIS_MODULE,
FB_DEFAULT_DEFERRED_OPS(picolcdfb_ops),
.fb_destroy = picolcd_fb_destroy,
.fb_read = fb_sys_read,
.fb_write = picolcd_fb_write,
.fb_blank = picolcd_fb_blank,
.fb_fillrect = picolcd_fb_fillrect,
.fb_copyarea = picolcd_fb_copyarea,
.fb_imageblit = picolcd_fb_imageblit,
.fb_check_var = picolcd_fb_check_var,
.fb_set_par = picolcd_set_par,
.fb_mmap = fb_deferred_io_mmap,
};
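FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS() generates the sys_*()-based I/O and drawing wrappers that picolcd (and fbtft below) used to hand-roll, each ending in the driver's damage callback. Conceptually, for one of the generated helpers (a sketch, not the macro's literal expansion):

static void picolcdfb_ops_fillrect(struct fb_info *info,
				   const struct fb_fillrect *rect)
{
	sys_fillrect(info, rect);
	picolcdfb_ops_damage_area(info, rect->dx, rect->dy,
				  rect->width, rect->height);
}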


@ -4,12 +4,8 @@ menuconfig FB_TFT
depends on FB && SPI
depends on FB_DEVICE
depends on GPIOLIB || COMPILE_TEST
select FB_SYS_FILLRECT
select FB_SYS_COPYAREA
select FB_SYS_IMAGEBLIT
select FB_SYS_FOPS
select FB_DEFERRED_IO
select FB_BACKLIGHT
select FB_SYSMEM_HELPERS_DEFERRED
config FB_TFT_AGM1264K_FL
tristate "FB driver for the AGM1264K-FL LCD display"


@ -357,61 +357,6 @@ static void fbtft_deferred_io(struct fb_info *info, struct list_head *pagereflis
dirty_lines_start, dirty_lines_end);
}
static void fbtft_fb_fillrect(struct fb_info *info,
const struct fb_fillrect *rect)
{
struct fbtft_par *par = info->par;
dev_dbg(info->dev,
"%s: dx=%d, dy=%d, width=%d, height=%d\n",
__func__, rect->dx, rect->dy, rect->width, rect->height);
sys_fillrect(info, rect);
par->fbtftops.mkdirty(info, rect->dy, rect->height);
}
static void fbtft_fb_copyarea(struct fb_info *info,
const struct fb_copyarea *area)
{
struct fbtft_par *par = info->par;
dev_dbg(info->dev,
"%s: dx=%d, dy=%d, width=%d, height=%d\n",
__func__, area->dx, area->dy, area->width, area->height);
sys_copyarea(info, area);
par->fbtftops.mkdirty(info, area->dy, area->height);
}
static void fbtft_fb_imageblit(struct fb_info *info,
const struct fb_image *image)
{
struct fbtft_par *par = info->par;
dev_dbg(info->dev,
"%s: dx=%d, dy=%d, width=%d, height=%d\n",
__func__, image->dx, image->dy, image->width, image->height);
sys_imageblit(info, image);
par->fbtftops.mkdirty(info, image->dy, image->height);
}
static ssize_t fbtft_fb_write(struct fb_info *info, const char __user *buf,
size_t count, loff_t *ppos)
{
struct fbtft_par *par = info->par;
ssize_t res;
dev_dbg(info->dev,
"%s: count=%zd, ppos=%llu\n", __func__, count, *ppos);
res = fb_sys_write(info, buf, count, ppos);
/* TODO: only mark changed area update all for now */
par->fbtftops.mkdirty(info, -1, 0);
return res;
}
/* from pxafb.c */
static unsigned int chan_to_field(unsigned int chan, struct fb_bitfield *bf)
{
@ -473,6 +418,32 @@ static int fbtft_fb_blank(int blank, struct fb_info *info)
return ret;
}
static void fbtft_ops_damage_range(struct fb_info *info, off_t off, size_t len)
{
struct fbtft_par *par = info->par;
	/* TODO: only mark the changed area; update all of it for now */
par->fbtftops.mkdirty(info, -1, 0);
}
static void fbtft_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, u32 height)
{
struct fbtft_par *par = info->par;
par->fbtftops.mkdirty(info, y, height);
}
FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(fbtft_ops,
fbtft_ops_damage_range,
fbtft_ops_damage_area)
static const struct fb_ops fbtft_ops = {
.owner = THIS_MODULE,
FB_DEFAULT_DEFERRED_OPS(fbtft_ops),
.fb_setcolreg = fbtft_fb_setcolreg,
.fb_blank = fbtft_fb_blank,
};
static void fbtft_merge_fbtftops(struct fbtft_ops *dst, struct fbtft_ops *src)
{
if (src->write)
@ -521,7 +492,6 @@ static void fbtft_merge_fbtftops(struct fbtft_ops *dst, struct fbtft_ops *src)
* Creates a new frame buffer info structure.
*
* Also creates and populates the following structures:
* info->fbops
* info->fbdefio
* info->pseudo_palette
* par->fbtftops
@ -536,7 +506,6 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
{
struct fb_info *info;
struct fbtft_par *par;
struct fb_ops *fbops = NULL;
struct fb_deferred_io *fbdefio = NULL;
u8 *vmem = NULL;
void *txbuf = NULL;
@ -611,10 +580,6 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
if (!vmem)
goto alloc_fail;
fbops = devm_kzalloc(dev, sizeof(struct fb_ops), GFP_KERNEL);
if (!fbops)
goto alloc_fail;
fbdefio = devm_kzalloc(dev, sizeof(struct fb_deferred_io), GFP_KERNEL);
if (!fbdefio)
goto alloc_fail;
@ -638,19 +603,9 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display,
goto alloc_fail;
info->screen_buffer = vmem;
info->fbops = fbops;
info->fbops = &fbtft_ops;
info->fbdefio = fbdefio;
fbops->owner = dev->driver->owner;
fbops->fb_read = fb_sys_read;
fbops->fb_write = fbtft_fb_write;
fbops->fb_fillrect = fbtft_fb_fillrect;
fbops->fb_copyarea = fbtft_fb_copyarea;
fbops->fb_imageblit = fbtft_fb_imageblit;
fbops->fb_setcolreg = fbtft_fb_setcolreg;
fbops->fb_blank = fbtft_fb_blank;
fbops->fb_mmap = fb_deferred_io_mmap;
fbdefio->delay = HZ / fps;
fbdefio->sort_pagereflist = true;
fbdefio->deferred_io = fbtft_deferred_io;


@ -518,21 +518,23 @@ config FB_SBUS
help
Say Y if you want support for SBUS or UPA based frame buffer device.
config FB_SBUS_HELPERS
bool
select FB_CFB_COPYAREA
select FB_CFB_FILLRECT
select FB_CFB_IMAGEBLIT
config FB_BW2
bool "BWtwo support"
depends on (FB = y) && (SPARC && FB_SBUS)
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_SBUS_HELPERS
help
This is the frame buffer device driver for the BWtwo frame buffer.
config FB_CG3
bool "CGthree support"
depends on (FB = y) && (SPARC && FB_SBUS)
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_SBUS_HELPERS
help
This is the frame buffer device driver for the CGthree frame buffer.
@ -557,9 +559,7 @@ config FB_FFB
config FB_TCX
bool "TCX (SS4/SS5 only) support"
depends on FB_SBUS
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_SBUS_HELPERS
help
This is the frame buffer device driver for the TCX 24/8bit frame
buffer.
@ -567,9 +567,7 @@ config FB_TCX
config FB_CG14
bool "CGfourteen (SX) support"
depends on FB_SBUS
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_SBUS_HELPERS
help
This is the frame buffer device driver for the CGfourteen frame
buffer on Desktop SPARCsystems with the SX graphics option.
@ -577,9 +575,7 @@ config FB_CG14
config FB_P9100
bool "P9100 (Sparcbook 3 only) support"
depends on FB_SBUS
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_SBUS_HELPERS
help
This is the frame buffer device driver for the P9100 card
supported on Sparcbook 3 machines.
@ -587,9 +583,7 @@ config FB_P9100
config FB_LEO
bool "Leo (ZX) support"
depends on FB_SBUS
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_SBUS_HELPERS
help
This is the frame buffer device driver for the SBUS-based Sun ZX
(leo) frame buffer cards.
@ -1900,11 +1894,8 @@ config FB_BROADSHEET
config FB_HYPERV
tristate "Microsoft Hyper-V Synthetic Video support"
depends on FB && HYPERV
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select FB_DEFERRED_IO
select DMA_CMA if HAVE_DMA_CONTIGUOUS && CMA
select FB_IOMEM_HELPERS_DEFERRED
select VIDEO_NOMODESET
help
This framebuffer driver supports Microsoft Hyper-V Synthetic Video.


@ -8,6 +8,7 @@
obj-y += core/
obj-$(CONFIG_FB_MACMODES) += macmodes.o
obj-$(CONFIG_FB_SBUS) += sbuslib.o
obj-$(CONFIG_FB_WMT_GE_ROPS) += wmt_ge_rops.o
# Hardware specific drivers go first
@ -45,14 +46,14 @@ obj-$(CONFIG_FB_LE80578) += vermilion/
obj-$(CONFIG_FB_S3) += s3fb.o
obj-$(CONFIG_FB_ARK) += arkfb.o
obj-$(CONFIG_FB_STI) += stifb.o
obj-$(CONFIG_FB_FFB) += ffb.o sbuslib.o
obj-$(CONFIG_FB_CG6) += cg6.o sbuslib.o
obj-$(CONFIG_FB_CG3) += cg3.o sbuslib.o
obj-$(CONFIG_FB_BW2) += bw2.o sbuslib.o
obj-$(CONFIG_FB_CG14) += cg14.o sbuslib.o
obj-$(CONFIG_FB_P9100) += p9100.o sbuslib.o
obj-$(CONFIG_FB_TCX) += tcx.o sbuslib.o
obj-$(CONFIG_FB_LEO) += leo.o sbuslib.o
obj-$(CONFIG_FB_FFB) += ffb.o
obj-$(CONFIG_FB_CG6) += cg6.o
obj-$(CONFIG_FB_CG3) += cg3.o
obj-$(CONFIG_FB_BW2) += bw2.o
obj-$(CONFIG_FB_CG14) += cg14.o
obj-$(CONFIG_FB_P9100) += p9100.o
obj-$(CONFIG_FB_TCX) += tcx.o
obj-$(CONFIG_FB_LEO) += leo.o
obj-$(CONFIG_FB_ACORN) += acornfb.o
obj-$(CONFIG_FB_ATARI) += atafb.o c2p_iplan2.o atafb_mfb.o \
atafb_iplan2p2.o atafb_iplan2p4.o atafb_iplan2p8.o

Some files were not shown because too many files have changed in this diff.