Question 1170818
.
There are no equations given, I'm not sure where to start or how to prove this.


A company claims to have invented a device that can measure the momentum of objects inside it with extreme
accuracy. The device fits within a matchbox, and the claimed precision with which it can measure momentum is
δp = ±10^−26 kg m s^−1.
Explain why the claimed performance cannot possibly be accurate, and estimate the smallest possible size of a
device with such momentum precision, according to laws of quantum physics.
~~~~~~~~~~~~~~~~~~~~~~~~~



        In this post,  the problem is posed in the wrong way,  so it is either

        an  EXTREME  stupidity,  or an  ERROR,  or a trap  (like a provocation).



<pre>
From the uncertainty principle  Δx*Δp ≥ ħ/2,  the uncertainty in determining the position is

                Δx ≥ ħ/(2*Δp) = {{{(1.055*10^(-34))/(2*10^(-26))}}} = {{{5.27*10^(-9)}}} meters.


This uncertainty is  MUCH-MUCH-much-much less than the size of a matchbox.



    THEREFORE, in this problem, the uncertainty principle of quantum mechanics 
    PROHIBITS the device from having a size less than  {{{5.27*10^(-9)}}} meters,
    but DOES NOT prohibit the device from having a greater size, like a matchbox.


  
HENCE, as a CONCLUSION, a device in this problem which provides the given precision 
EASILY may have the size of a matchbox - nothing in quantum mechanics prevents it.
</pre>
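If you want to check the arithmetic behind that number, here is a small Python sketch (my own illustration, not part of the original solution). It assumes only the reduced Planck constant ħ ≈ 1.0546*10^(-34) J·s and the textbook form Δx*Δp ≥ ħ/2 of the uncertainty relation.

<pre>
# Minimal numeric check of the position-uncertainty bound (illustrative sketch).
hbar = 1.0546e-34          # reduced Planck constant, J*s
delta_p = 1.0e-26          # claimed momentum precision, kg*m/s

# Heisenberg uncertainty relation: delta_x * delta_p >= hbar / 2
delta_x_min = hbar / (2.0 * delta_p)

print(f"Minimum position uncertainty: {delta_x_min:.3e} m")        # ~5.27e-09 m
print(f"Matchbox size (~5 cm) / bound: {0.05 / delta_x_min:.1e}")  # ~1e7
</pre>

The last line shows that a matchbox (about 5 cm) is roughly ten million times larger than the quantum-mechanical lower bound, which is exactly the point made in the solution above.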

In his post, &nbsp;@CPhill puffs out his cheeks and tries to play the role of an expert.
He uses a lot of words and tries to obfuscate the question, &nbsp;but does not give a direct answer.


So, &nbsp;for the safety of your mind, &nbsp;IGNORE &nbsp;the post by @CPhill.


Also, &nbsp;ignore the problem itself, &nbsp;since it is posed in a WRONG WAY.



\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\



&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Regarding the post by @CPhill . . . 



Keep in mind that @CPhill is a pseudonym for the Google artificial intelligence.


The artificial intelligence is like a baby now. It is in the experimental stage 
of development and can make mistakes and produce nonsense without any embarrassment.



&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;It has no feeling of shame - it is shameless.



This time, again, &nbsp;it made an error.



Although @CPhill's solutions are copy-pastes of &nbsp;Google &nbsp;AI &nbsp;solutions, &nbsp;there is one essential difference.


Every time, &nbsp;Google &nbsp;AI &nbsp;adds a note at the end of its solutions saying that &nbsp;Google &nbsp;AI &nbsp;is experimental
and can make errors/mistakes.


All @CPhill's solutions are copy-pastes of &nbsp;Google &nbsp;AI &nbsp;solutions, with one difference:
@CPhill never includes this notice and never says that his solutions are copy-pastes of Google's.
So, he NEVER TELLS THE TRUTH.


Every time, &nbsp;@CPhill is embarrassed to tell the truth.

But I am not embarrassed to tell the truth, &nbsp;as it is my duty at this forum.



And my last comment.


When you receive such posts from @CPhill, &nbsp;remember &nbsp;that &nbsp;NOBODY &nbsp;is responsible for their correctness 
until specialists and experts check and confirm their correctness.


Without that, &nbsp;their reliability is &nbsp;ZERO and their credibility is &nbsp;ZERO, &nbsp;too.