Orin Kerr raises an interesting question: when can machines be treated like people? He cites two cases.
In the first, a poorly programmed ATM in Australia incorrectly dispensed money to a man who deliberately exploited the programming flaw. The man then argued that the bank, through the ATM, had consented to his taking the money, so the money wasn't stolen. The Australian High Court rejected his defense, holding that machines cannot give consent.
In the second case, the American government had the phone company monitor the numbers dialed by a criminal suspect so it could record the conversations when he dialed certain numbers. The suspect argued that this was an invasion of privacy, but the Supreme Court rejected his appeal, reasoning that if he had been using an old-style phone with a human operator on the other end rather than a computer, he would have had no expectation of privacy, and that operator could legally have told the police every number he dialed. The Court wrote: "We are not inclined to hold that a different constitutional result is required because the telephone company has decided to automate."
So in the first case, the ATM could not give consent for the thief to take the bank's money, but in the second case a machine serving in a capacity once filled by a human could defeat an expectation of privacy. Fascinating stuff, and certain to become even more important as computers continue to replace humans.