[i2c] mixed-speed I2C system

Jean Delvare khali at linux-fr.org
Tue Apr 3 10:27:39 CEST 2007


Hi Nishanth, Stanley,

On Mon, 2 Apr 2007 18:44:58 -0500, nishanth menon wrote:
> Hi Stanley,
> On 4/2/07, Stanley Cai <stanley.w.cai at gmail.com> wrote:
> > > Are you suggesting that the device registration also provide the device
> > > speed and we program the i2c_clk speeds run-time (@ xfer_msg) as being
> > > dependent on the device?
> >
> > max supported clock may be an attribute of i2c clients and i2c masters
> > so the i2c framework can find out the correct clock for the whole i2c
> > bus.

Not really. For a given hardware setup, the max speed is a physical
attribute of the whole bus, basically defined as the minimum of the
maximum speed supported by each device. This holds whether or not a
device has been created in the device driver model. So attaching a
speed value to the devices will not cover all the possible cases. It
will fail as soon as there are additional devices connected on the bus
for which Linux doesn't have/need a driver, and in particular in the
multi-master case.

So I don't think that there is any way to enable fast-mode I2C
automatically. Instead I believe we need to default to standard I2C (as
we do now) and let the user explicitly switch to fast-mode I2C if
he/she knows that his/her bus setup will support it. I was thinking of
a sysfs attribute. I don't know if we also need an in-kernel interface.

In that model, we could still attach a max speed to each device, and
prevent the user from setting a speed which is known not to work.
However, given that we cannot guarantee that values below that limit
will actually work, I wonder if it's really worth it.

> hmm.. two considerations:
> 1. How can a client tell the adapter of a maximum supported transfer
> speed? esp since i2c-dev would use a direct i2c_msg transfer.
> this could be an argument for having speed as a i2c_msg param..just a
> suggestion..

i2c-dev uses clients, it does not do direct i2c_msg transfers. Thus any
attribute attached to regular clients can be attached to i2c-dev as
well, using ioctls. We already do that for 10-bit addresses and SMBus
PEC support, and I think this is also how HS-mode would be supported.

Note that i2c-dev may need some rework to fit better in the new model
implemented by David Brownell.

> 2. What is the standard way of passing the maximum supported frequency
> for a specific adaptor?

There is no standard way at the moment, thus this discussion ;)

> As a specific example of OMAP(the controller can support all
> frequencies), I was thinking more of using resource array
> (flags=IORESOURCE_IO) while registering a platform_device in the board
> specific file.
> a) if the adapter is capable, a range defined by .start and .end
> b) specific speed support only (with a .end=400 or .end=3400).
> This shall give the adaptor driver a means to limit msg speed requests.

This looks like an abuse of struct resource. Speed is an attribute, not
a resource.

I think we need to focus on the actual needs. Is there any practical
reason to run at lower frequencies, except for the boundaries between
modes? The only case I can think of is masters which cannot sense the
SCL line and thus cannot support SCL line stretching. Some i2c-algo-bit
adapters are like that, which is one of the reasons why the speed of
i2c-algo-bit-based adapters can be chosen freely. For these, I don't
think we want to consider fast-mode or HS-mode anyway. I don't know if
there are hardware implementations with the same problem. For the other
cases, it sounds like all we need to know is:
* For slaves, can they do HS-mode?
* For masters, can they do HS-mode and can they do fast-mode?
And always pick the highest frequency that fits into a given mode.
HS-mode can be selected automatically, fast-mode cannot.

So we can do fine-grained, half-automatic speed selection, or we can
simplify things down to two flags. Until I see real-world scenarios, I
just don't know what will work best.

> > Russell King has added I2C slave support at least for PXA...

I'm curious. How does it work? Does this mean that there is no need
for cooperation from i2c-core to implement I2C slave support in I2C bus
drivers? Or is i2c-pxa implementing things which should be moved to
i2c-core? This discussion probably belongs in a different thread
though; it doesn't seem to be related to bus speed issues at all.

-- 
Jean Delvare


