Changes between Version 6 and Version 7 of OpponentModels


Timestamp:
04/17/11 13:32:11 (14 years ago)
Author:
mark

  • OpponentModels

    v6 v7  
    15 15||'''Subject(s)'''||Benefits of learning, Bayesian learning, reservation values||
    16 16||'''Summary'''|| Growing interest in e-commerce motivates research in automated negotiation, but building intelligent negotiation agents is still[[br]] an emerging field. In contrast to most negotiation models, the sequential decision model allows for learning. Learning can help to understand[[br]] human behaviour, but can also lead to better outcomes for the learning party. Bayesian learning of reservation[[br]] values can be used to determine the zone of agreement for an issue, based on domain knowledge and the bidding interaction (see the sketch below).[[br]] For the one-issue case, learning positively influences bargaining quality and the number of exchanged proposals,[[br]] and leads to a better compromise if both parties learn. In the proposed setting, learning always performs better than not learning.||
    17   ||'''Relevance'''||8||
       17||'''Relevance'''||8. Strong example of Bayesian learning||
    18 18||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:omTF-8TbGE4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
    19 19[[BR]]
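The entry above hinges on Bayesian updating of a belief over the opponent's reservation value. A minimal sketch of that idea for a single issue follows; the hypothesis grid, the uniform prior, the likelihood model and all numbers are illustrative assumptions, not the formulation used in the summarised paper.

{{{#!python
# Minimal sketch (assumed formulation): Bayesian updating of a discrete
# belief over the opponent's reservation value for one issue (e.g. price).
# The grid, uniform prior, likelihood model and numbers are illustrative.
import numpy as np

# Hypotheses: candidate reservation values for the opponent.
hypotheses = np.linspace(50.0, 150.0, 101)

# Uniform prior; domain knowledge could supply a sharper one.
belief = np.full(hypotheses.shape, 1.0 / len(hypotheses))

def likelihood(offer, reservation):
    """P(observed counter-offer | hypothesised reservation value).

    Assumption: the opponent tends to offer a fixed margin below its
    reservation value, modelled with a Gaussian around that point.
    """
    margin, sigma = 10.0, 8.0
    return np.exp(-0.5 * ((offer - (reservation - margin)) / sigma) ** 2)

def update(belief, offer):
    """One Bayes step: posterior is proportional to likelihood * prior."""
    posterior = belief * likelihood(offer, hypotheses)
    return posterior / posterior.sum()

# Feed in the opponent's successive counter-offers.
for observed_offer in [70.0, 78.0, 83.0]:
    belief = update(belief, observed_offer)

# Expected value of the posterior: running estimate of the reservation value.
print("estimated reservation value:", round(float(np.dot(hypotheses, belief)), 1))
}}}

Each counter-offer sharpens the posterior, and its expected value gives a running estimate of the opponent's reservation value, from which a zone of agreement for the issue can be derived.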
     
    22 22||'''Author(s)'''||R.M. Coehoorn, N.R. Jennings||
    23 23||'''Subject(s)'''||KDE Learning, Negotiation model, Concession based strategy||
    24   ||'''Summary'''|| ||
    25   ||'''Relevance'''|| ||
       24||'''Summary'''|| Effective and efficient multi-issue negotiation requires an agent to have some indication of its opponent's preferences [[br]]over the issues in the domain. Kernel Density Estimation (KDE) is used to estimate the weight that different agents attach to [[br]]different issues (see the sketch below). It is assumed that an increase in the value of an issue is positive for one agent and negative [[br]]for the other. In contrast to Bayesian learning, no assumptions about the relation between time, negotiation history and issue weight [[br]]are required. The difference between successive concessive (counter-)offers is used to estimate the weights of the issues [[br]](assumption: stronger concessions are made later in the negotiation). Faratin's hill-climbing algorithm, augmented with KDE, is [[br]]used to propose the next bid. KDE proved successful on the negotiation model used. Future work entails testing the approach [[br]]against different opponent strategies and extending it to other negotiation models (see the assumption above). ||
       25||'''Relevance'''||9. KDE learning described in detail||
    26 26||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Z79P04-IRS0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
    27 27[[BR]]
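As a rough companion to the KDE summary above, the sketch below estimates per-issue weights from observed concessions: a kernel density estimate is built over each issue's concession sizes, and issues on which the opponent concedes little are taken to be the ones it values most. The sample data, the Gaussian kernel, the bandwidth and the inverse-concession weight mapping are illustrative assumptions, not the paper's exact procedure.

{{{#!python
# Minimal sketch (assumed data and weight mapping): kernel density
# estimation over per-issue concession sizes, used to guess which issues
# the opponent cares about least (it concedes most on those).
import numpy as np

def kde_1d(samples, bandwidth=0.05):
    """Return a Gaussian kernel density estimate built from 1-D samples."""
    samples = np.asarray(samples, dtype=float)

    def density(x):
        z = (x - samples[:, None]) / bandwidth
        return np.mean(np.exp(-0.5 * z ** 2), axis=0) / (bandwidth * np.sqrt(2.0 * np.pi))

    return density

# Observed per-round concessions on each issue (normalised issue values);
# purely illustrative numbers.
concessions = {
    "price":    [0.02, 0.03, 0.05, 0.08],   # small concessions: likely important
    "delivery": [0.10, 0.15, 0.20, 0.25],   # large concessions: likely unimportant
}

grid = np.linspace(0.0, 0.5, 200)
expected_concession = {}
for issue, observed in concessions.items():
    density = kde_1d(observed)(grid)
    # Mean of the estimated density over a uniform grid (grid step cancels).
    expected_concession[issue] = float(np.sum(grid * density) / np.sum(density))

# Assumption: issue weight is inversely related to how much the opponent
# concedes on that issue; normalise so the weights sum to one.
inverse = {issue: 1.0 / c for issue, c in expected_concession.items()}
total = sum(inverse.values())
weights = {issue: v / total for issue, v in inverse.items()}
print(weights)   # price ends up with the larger weight
}}}

In the paper's setting such weight estimates would feed Faratin's hill-climbing algorithm to choose the next multi-issue bid; here they are only printed.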