Changes between Version 125 and Version 126 of OpponentModels

Timestamp: 06/03/11 20:34:35
Author: mark

== Papers ==

||'''Title'''||A Framework for Building Intelligent SLA Negotiation Strategies under Time Constraints||
||'''Author(s)'''||G.C. Silaghi, L.D. Şerban and C.M. Litan||
||'''Cited'''||-||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Snu0uoLL6tgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||A Framework for Multi-agent Electronic Marketplaces: Analysis and Classification of Existing Systems||
||'''Author(s)'''||K. Kurbel and I. Loutchko||
||'''Cited'''||25||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:rVdFnqvBOAMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce||
||'''Author(s)'''||J.R. Oliver||
||'''Cited'''||198||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:68RpIHxdsQEJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||AgentFSEGA - Time Constrained Reasoning Model for Bilateral Multi-Issue Negotiation||
||'''Author(s)'''||L.D. Serban, G.C. Silaghi, and C.M. Litan||
||'''Cited'''||-||
||'''Subject(s)'''||Learning issue utility curves by Bayesian learning; Learning issue ordering||
||'''Summary'''||The opponent model of FSEGA approximates the ordering of the issues and the utility function of each issue using Bayesian[[BR]] learning. Each value in an issue is modelled as approximating one of three basic function shapes (downhill, uphill, triangular). Using[[BR]] Bayes' rule, each hypothesis for a value is updated. The hypotheses are then combined, weighted by their likelihood,[[BR]] to determine the final form of the utility function for each value in the issue; combining these yields the utility function of[[BR]] the issue. Finally, the bidding strategy uses iso-utility curves and the opponent model to increase acceptance.||
||'''Relevance'''||8||
||'''Bibtex'''||X||
||'''Cites seen'''||Yes||
[[BR]]
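The Bayesian shape-learning idea in this summary can be sketched roughly as follows. This is an illustrative sketch only: the Gaussian likelihood and all names are assumptions, not taken from the paper.

```python
import math

def shape_value(shape, x):
    """Utility of a normalised issue value x in [0, 1] under a basic shape."""
    if shape == "uphill":
        return x
    if shape == "downhill":
        return 1.0 - x
    if shape == "triangular":
        return 1.0 - abs(2.0 * x - 1.0)
    raise ValueError(shape)

def bayes_update(priors, x, observed_utility, sigma=0.2):
    """One Bayesian update of the shape hypotheses for a single issue.

    priors: dict shape -> P(shape).
    observed_utility: the utility the opponent is assumed to attach to
    value x (estimated from its bids); the Gaussian error model is an
    assumption made here for illustration."""
    posteriors = {}
    for shape, p in priors.items():
        error = observed_utility - shape_value(shape, x)
        likelihood = math.exp(-error * error / (2 * sigma * sigma))
        posteriors[shape] = p * likelihood
    total = sum(posteriors.values())
    return {s: p / total for s, p in posteriors.items()}

def estimated_utility(priors, x):
    """Combine the hypotheses, weighted by their probability."""
    return sum(p * shape_value(s, x) for s, p in priors.items())
```

Observing a high utility at a high value shifts probability mass towards the uphill hypothesis, and the combined estimate is the likelihood-weighted mixture of the three shapes.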

||'''Title'''||Agents that Acquire Negotiation Strategies Using a Game Theoretic Learning Theory||
||'''Author(s)'''||N. Eiji Nawa||
||'''Cited'''||2||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:wGEmownS05MJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||An Adaptive Bilateral Negotiation Model for E-Commerce Settings||
||'''Author(s)'''||V. Narayanan and N.R. Jennings||
||'''Cited'''||26||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:-2t4LW-LK3cJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||An Adaptive Learning Method in Automated Negotiation Based on Artificial Neural Network||
||'''Author(s)'''||Z. Zeng, B. Meng, Y. Zeng||
||'''Cited'''||4||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:SaHRG-BD0RAJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||An Architecture for Negotiating Agents that Learn||
||'''Author(s)'''||H.H. Bui, S. Venkatesh, and D. Kieronska||
||'''Cited'''||2||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:1dvCOIJaG9cJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||An Automated Agent for Bilateral Negotiation with Bounded Rational Agents with Incomplete Information||
||'''Author(s)'''||R. Lin, S. Kraus, J. Wilkenfeld, J. Barry||
||'''Cited'''||23||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:vhcnBvl6XnMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=00 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||An Evolutionary Learning Approach for Adaptive Negotiation Agents||
||'''Author(s)'''||R.Y.K. Lau, M. Tang, O. Wong, S.W. Milliner||
||'''Cited'''||19||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:WQAKMsZjXk8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Analysis of Negotiation Dynamics||
||'''Author(s)'''||K. Hindriks, C.M. Jonker, D. Tykhonov||
||'''Cited'''||5||
||'''Subject(s)'''||Strategy evaluation||
||'''Summary'''||The process of the negotiation determines the outcome. This work presents an outline of a formal toolbox to analyse [[BR]]the dynamics of negotiation based on an analysis of move types. Boulware is a hard bargaining tactic, whereas [[BR]]Conceder is soft. Besides evaluating the outcome, one should also analyse the negotiation dance. This can be done by [[BR]]classifying the moves (nice, selfish, etc.). A trace is a list of bids, and the percentage of each type of move can be [[BR]]calculated. The sensitivity to opponent moves is based on this measure.||
||'''Relevance'''||7, interesting technique for evaluating strategies||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:p9dR-WdVTQAJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]
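The move classification behind such a toolbox can be sketched as follows; the epsilon threshold and all names are assumptions made here for illustration, not the paper's definitions.

```python
def classify_move(d_own, d_opp, eps=1e-6):
    """Classify one step in the negotiation dance by the change in the
    bidder's own utility (d_own) and the opponent's utility (d_opp)."""
    if abs(d_own) < eps and abs(d_opp) < eps:
        return "silent"
    if abs(d_own) < eps and d_opp > 0:
        return "nice"
    if d_own > 0 and d_opp <= 0:
        return "selfish"
    if d_own <= 0 and d_opp > 0:
        return "concession"
    if d_own > 0 and d_opp > 0:
        return "fortunate"
    return "unfortunate"

def move_percentages(trace):
    """trace: list of (own_utility, opponent_utility) per bid of one agent.
    Returns the fraction of each move type over the bid sequence."""
    moves = [classify_move(b[0] - a[0], b[1] - a[1])
             for a, b in zip(trace, trace[1:])]
    return {m: moves.count(m) / len(moves) for m in set(moves)}
```

An agent whose trace is dominated by selfish and silent moves is insensitive to its opponent; a high concession percentage indicates a soft strategy.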

||'''Title'''||Anticipating Agent's Negotiation Strategies in an E-marketplace Using Belief Models||
||'''Author(s)'''||F. Teuteberg, K. Kurbel||
||'''Cited'''||11||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:qauKvN1Swx8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Automated Multi-Attribute Negotiation with Efficient Use of Incomplete Preference Information||
||'''Author(s)'''||C. Jonker and V. Robu||
||'''Cited'''||44||
||'''Subject(s)'''||Mechanism for taking learning and initial information into account in a standard bilateral negotiation model||
||'''Summary'''||The classic technique for negotiation with undisclosed preferences is to use a mediator; however,[[BR]] can we be sure the mediator is impartial? The negotiation strategy discussed is for bilateral multi-issue[[BR]] negotiation. A decreasing utility curve is considered, and a bid is calculated to fit the current target [[BR]]utility. Each issue has a separate parameter so that more or less concession can be made on [[BR]]certain issues, while a general tolerance determines the overall speed of concession. For each issue,[[BR]] given the opponent bid and the newly calculated bid, it is determined how much concession is made [[BR]]towards the opponent bid, based on the configured tolerance for that issue. The full [[BR]]formula depends on the weights of the opponent, which have to be estimated. The weight of each [[BR]]attribute can be estimated by comparing the distance between attribute values in [[BR]]sequential bids and using this distance to mark the importance of the attribute. This last [[BR]]step is domain dependent. Concluding: the technique works, but requires tuning for the domain[[BR]] and assumes that the other agent plays a more or less similar concession-based strategy.||
||'''Relevance'''||4, domain-dependent opponent modelling approach for learning the ordering of attributes||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:fSLXt9dFf4kJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]

||'''Title'''||Bayesian Learning in Bilateral Multi-issue Negotiation and its Application in MAS-based Electronic Commerce||
||'''Author(s)'''||J. Li, Y. Cao||
||'''Cited'''||6||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:QuJqFn4TJaAJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Bayesian Learning in Negotiation||
||'''Author(s)'''||D. Zeng, K. Sycara||
||'''Cited'''||355||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:OcrAgrlmKdgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Benefits of Learning in Negotiation||
||'''Author(s)'''||D. Zeng, K. Sycara||
||'''Cited'''||116||
||'''Subject(s)'''||Benefits of learning, Bayesian learning, reservation values||
||'''Summary'''||Growing interest in e-commerce motivates research in automated negotiation, yet building intelligent negotiation agents is still[[BR]] an emerging field. In contrast to most negotiation models, a sequential decision model allows for learning. Learning can help in understanding[[BR]] human behaviour, but can also yield better results for the learning party. Bayesian learning of reservation[[BR]] values can be used to determine the zone of agreement for an issue based on domain knowledge and the bidding interactions.[[BR]] Concluding: for a single issue, learning positively influences bargaining quality and the number of exchanged proposals,[[BR]] and leads to a better compromise if both parties learn. Learning always performs better in the proposed setting.||
||'''Relevance'''||8. Strong example of Bayesian learning||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:omTF-8TbGE4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]
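The Bayesian reservation-value learning described above can be sketched in this spirit. The likelihood model passed in below is an illustrative assumption, not the paper's exact formulation.

```python
def update_reservation_belief(hypotheses, priors, offer, likelihood):
    """One Bayesian update over candidate reservation values.

    hypotheses: list of candidate reservation values for the opponent.
    priors: P(h) for each candidate.
    likelihood(offer, h): how plausible the observed offer is if the
    opponent's true reservation value were h.
    Returns the normalised posterior distribution."""
    joint = [p * likelihood(offer, h) for h, p in zip(hypotheses, priors)]
    total = sum(joint)
    return [j / total for j in joint]

def expected_reservation(hypotheses, beliefs):
    """Point estimate of the reservation value: the posterior mean."""
    return sum(h * b for h, b in zip(hypotheses, beliefs))
```

With a simple likelihood such as `lambda offer, h: 1.0 if offer >= h else 0.1` (a seller rarely offers below its reservation price), an observed offer of 25 shifts belief away from the candidates 30 and 40, narrowing the estimated zone of agreement.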

||'''Title'''||Bilateral Negotiation with Incomplete and Uncertain Information: A Decision-Theoretic Approach Using a Model of the Opponent||
||'''Author(s)'''||C. Mudgal, J. Vassileva||
||'''Cited'''||42||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Building Automated Negotiation Strategies Enhanced by MLP and GR Neural Networks for Opponent Agent Behaviour Prognosis||
||'''Author(s)'''||I. Roussaki, I. Papaioannou, and M. Anagnostou||
||'''Cited'''||3||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:18qiNH2UInwJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Comparing Equilibria for Game-Theoretic and Evolutionary Bargaining Models||
||'''Author(s)'''||S. Fatima, M. Wooldridge, N.R. Jennings||
||'''Cited'''||21||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:aLP4CeRMh68J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Compromising Strategy based on Estimated Maximum Utility for Automated Negotiating Agents||
||'''Author(s)'''||S. Kawaguchi, K. Fujita, T. Ito||
||'''Cited'''||-||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||X||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Determining Successful Negotiation Strategies: An Evolutionary Approach||
||'''Author(s)'''||N. Matos, C. Sierra, N.R. Jennings||
||'''Cited'''||149||
||'''Subject(s)'''||Analysing strengths and weaknesses of tactics||
||'''Summary'''||This work uses an evolutionary approach to determine how agents using particular negotiation strategies fare against each[[BR]] other in a negotiation. A standard bilateral negotiation model is used, with issues continuous[[BR]] within a given range. Three types of tactics are considered: time-dependent, resource-[[BR]]dependent, and behaviour-dependent. The results give a nice overview of which tactic is effective against which [[BR]]opponent. This is very interesting if the opponent is somehow able to determine the type of agent.||
||'''Relevance'''||8, motivation for learning of strategies||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:jAWqPD9IQ-sJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]
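The time-dependent family mentioned above is commonly written as a Faratin-style polynomial decision function; a standard sketch follows, with parameter names chosen here for illustration.

```python
def time_dependent_target(t, u_min, u_max, beta):
    """Target utility at normalised time t in [0, 1].

    beta < 1 gives a Boulware (hard) tactic that concedes only near the
    deadline; beta > 1 gives a Conceder (soft) tactic that concedes early."""
    concession = t ** (1.0 / beta)
    return u_min + (1.0 - concession) * (u_max - u_min)
```

At any mid-negotiation time, a Boulware agent (e.g. beta = 0.2) still demands far more than a Conceder (e.g. beta = 5), which is what makes the tactic families distinguishable from the bid trace.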

||'''Title'''||Facing the Challenge of Human-Agent Negotiations via Effective General Opponent Modeling||
||'''Author(s)'''||Y. Oshrat, R. Lin, S. Kraus||
||'''Cited'''||19||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:D7XLjMbCgQkJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Genetic Algorithms for Automated Negotiations: A FSM-Based Application Approach||
||'''Author(s)'''||M.T. Tu, E. Wolff, W. Lamersdorf||
||'''Cited'''||37||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:kO_zImqufQMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||IAMhaggler: A Negotiation Agent for Complex Environments||
||'''Author(s)'''||C.R. Williams, V. Robu, E.H. Gerding, and N.R. Jennings||
||'''Cited'''||-||
||'''Subject(s)'''||ANAC, Pareto search, Bayes' rule||
||'''Summary'''||IAMhaggler first determines the discount factor of the opponent using non-linear regression. The fitted curve[[BR]] is then discounted to obtain the opponent bid curve. Next, the maximum on the opponent curve is found, and an [[BR]] appropriate curve is plotted for the agent's own utility. For domains without unordered issues, Pareto search is [[BR]] used to determine all possible bids matching a utility; the bid closest to the best[[BR]] received opponent bid is then selected using the Euclidean distance. For domains with unordered issues, each [[BR]] unordered value is varied, after which the possible bids satisfying the utility are determined. Finally, using Bayes' [[BR]] rule for opponent modelling, the best possible bid for the opponent is chosen.||
||'''Relevance'''||8, beautiful strategy||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:dXychgQCiFMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Inferring implicit preferences from negotiation actions||
||'''Author(s)'''||A. Restificar and P. Haddawy||
||'''Cited'''||10||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:m6_7yTkcPBwJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Integration of Learning, Situational Power and Goal Constraints Into Time-Dependent Electronic Negotiation Agents||
||'''Author(s)'''||W.W.H. Mok||
||'''Cited'''||-||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:58qYQ6xl6vgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=6 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Learning Algorithms for Single-instance Electronic Negotiations using the Time-dependent Behavioral Tactic||
||'''Author(s)'''||W.W.H. Mok and R.P. Sundarraj||
||'''Cited'''||17||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:9mOSw0JyumEJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Learning an Agent's Utility Function by Observing Behavior||
||'''Author(s)'''||U. Chajewska, D. Koller, D. Ormoneit||
||'''Cited'''||54||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:JXqC1SLmlPIJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Learning an Opponent's Preferences to Make Effective Multi-Issue Negotiation Trade-Offs||
||'''Author(s)'''||R.M. Coehoorn, N.R. Jennings||
||'''Cited'''||78||
||'''Subject(s)'''||KDE learning, Negotiation model, Concession-based strategy||
||'''Summary'''||Effective and efficient multi-issue negotiation requires an agent to have some indication of its opponent's preferences [[BR]]over the issues in the domain. Kernel Density Estimation (KDE) is used to estimate the weight attached to different issues [[BR]]by different agents. It is assumed that if the value of an issue increases, this is positive for one agent and negative [[BR]]for the other. No assumptions about the relation between time, negotiation history and issue weight are required, in contrast [[BR]]to Bayesian learning. The difference between concessive (counter)offers is used to estimate the weights of the issues [[BR]](assumption: stronger concessions are made later in the negotiation). Faratin's hill-climbing algorithm augmented with KDE is [[BR]]used to propose the next bid. KDE proved successful on the negotiation model used. Future work entails testing the approach [[BR]]against different opponent strategies and extending the approach to other negotiation models (see the assumption above).||
||'''Relevance'''||9. KDE learning described in detail. Strong related work section||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Z79P04-IRS0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]
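A rough sketch of KDE-based weight estimation follows. The mapping from concession-time density to issue weights is a simplifying assumption made here (issues conceded on late are taken as more important, per the "stronger concessions come later" assumption); it is not the paper's exact estimator.

```python
import math

def gaussian_kde(samples, bandwidth=0.1):
    """Return a 1-D Gaussian kernel density estimate from samples in [0, 1]."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    def density(t):
        return sum(math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
                   for s in samples) / norm
    return density

def issue_weights(concession_times, now=1.0):
    """concession_times: issue -> normalised times (0..1) at which the
    opponent conceded on that issue. Issues with concessions concentrated
    late in the negotiation receive higher (normalised) weight."""
    densities = {k: gaussian_kde(v)(now) for k, v in concession_times.items()}
    total = sum(densities.values())
    return {k: d / total for k, d in densities.items()}
```

For example, an opponent that holds out on "price" until the end but concedes "colour" immediately is estimated to weight price more heavily.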

||'''Title'''||Learning Opponents' Preferences in Multi-Object Automated Negotiation||
||'''Author(s)'''||S. Buffett and B. Spencer||
||'''Cited'''||18||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:3k4MYX9X6BcJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Learning other Agents' Preferences in Multiagent Negotiation using the Bayesian Classifier||
||'''Author(s)'''||H.H. Bui, D. Kieronska, S. Venkatesh||
||'''Cited'''||29||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:zJTcBPpxfYoJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Learning to Select Negotiation Strategies in Multi-Agent Meeting Scheduling||
||'''Author(s)'''||E. Crawford and M. Veloso||
||'''Cited'''||21||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:PfWgZlbnox8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Modelling Agents Behaviour in Automated Negotiation||
||'''Author(s)'''||C. Hou||
||'''Cited'''||10||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:OznI0-O4SlgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Modeling Opponent Decision in Repeated One-shot Negotiations||
||'''Author(s)'''||S. Saha, A. Biswas, S. Sen||
||'''Cited'''||26||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:HWyIIq6nlNUJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Negotiating agents that learn about others' preferences||
||'''Author(s)'''||H.H. Bui, D. Kieronska and S. Venkatesh||
||'''Cited'''||5||
||'''Subject(s)'''||Logic-like representation of the negotiation model, bin-based opponent model, one continuous issue||
||'''Summary'''||The method assumes that domain knowledge is available to partition the search space. Each turn, the agents communicate [[BR]]the space where an agreement is possible. Each turn there is a negotiation between all agents to find a common space, which [[BR]]means that the agents re-communicate a refined space of agreement until an agreement is reached. The process continues until [[BR]]a common decision is found (a decision is an element of the space of agreement). A learning algorithm can be used as follows: [[BR]]first the full domain space is split into zones, each allocated a uniform probability. This probability is updated for each region [[BR]]for each agent based on the received space of agreement. When agents do not agree about the space, the space is chosen [[BR]]which has the maximum support based on the probabilities of each space for each agent. This leads to a higher chance of agreement.||
||'''Relevance'''||3, domain knowledge required and only considers one issue||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:8EOwrOyBdv0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]
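The zone-based learning in this summary can be sketched as follows; the multiplicative update and all names are assumptions made here for illustration, not the paper's algorithm.

```python
def update_zone_beliefs(beliefs, announced_zones, boost=2.0):
    """Update one agent's zone probabilities from its announced space of
    agreement: zones inside the announced space gain probability mass.

    beliefs: zone -> probability; announced_zones: set of zones that fall
    inside the agent's announced space of agreement."""
    scaled = {z: p * (boost if z in announced_zones else 1.0)
              for z, p in beliefs.items()}
    total = sum(scaled.values())
    return {z: p / total for z, p in scaled.items()}

def best_supported_zone(all_beliefs):
    """When agents disagree, pick the zone with maximum combined support
    over all agents' belief distributions."""
    zones = next(iter(all_beliefs.values())).keys()
    return max(zones, key=lambda z: sum(b[z] for b in all_beliefs.values()))
```

If one agent announces zones {A, B} and the other {B, C}, the overlapping zone B accumulates the most support, raising the chance of agreement.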

||'''Title'''||Negotiation Decision Functions for Autonomous Agents||
||'''Author(s)'''||P. Faratin, C. Sierra, N.R. Jennings||
||'''Cited'''||718||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Pmj4ztkTFq4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Negotiation Dynamics: Analysis, Concession Tactics, and Outcomes||
||'''Author(s)'''||K. Hindriks, C.M. Jonker, D. Tykhonov||
||'''Cited'''||7||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:8lUoyWRsIMMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||On-Line Incremental Learning in Bilateral Multi-Issue Negotiation||
||'''Author(s)'''||V. Soo, C. Hung||
||'''Cited'''||18||
||'''Subject(s)'''||Online incremental learning using neural networks||
||'''Summary'''||This paper discusses using neural networks to learn the opponent model; however, it is not described how, and the[[BR]] results are not promising. By limiting the number of exchanges, opponent models become more important and [[BR]]lead to better outcomes.||
||'''Relevance'''||2, since the paper is not specific enough||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:G3nExE5HdskJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||On Learning Negotiation Strategies by Artificial Adaptive Agents in Environments of Incomplete Information||
||'''Author(s)'''||J.R. Oliver||
||'''Cited'''||6||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:zk0BU_aG2ZIJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Opponent Model Estimation in Bilateral Multi-issue Negotiation||
||'''Author(s)'''||N. van Galen Last||
||'''Cited'''||-||
||'''Subject(s)'''||Agent which participated in ANAC 2010||
||'''Summary'''||Overall not interesting, but it encouraged me to look into the fields involved in negotiation.||
||'''Relevance'''||2||
||'''Bibtex'''||X||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]

||'''Title'''||Opponent Modelling in Automated Multi-Issue Negotiation Using Bayesian Learning||
||'''Author(s)'''||K. Hindriks, D. Tykhonov||
||'''Cited'''||33||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:BtssqMir4RcJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Optimal negotiation strategies for agents with incomplete information||
||'''Author(s)'''||S.S. Fatima, M. Wooldridge and N.R. Jennings||
||'''Cited'''||88||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:zfXBf6ObIaEJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Predicting Agents Tactics in Automated Negotiation||
||'''Author(s)'''||C. Hou||
||'''Cited'''||12||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:4CP7PDlOJL8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Predicting partner's behaviour in agent negotiation||
||'''Author(s)'''||J. Brzostowski, R. Kowalczyk||
||'''Cited'''||16||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:3MifmJepz_4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||The Benefits of Opponent Models in Negotiation||
||'''Author(s)'''||K. Hindriks, C.M. Jonker, D. Tykhonov||
||'''Cited'''||-||
||'''Subject(s)'''||Nice Mirroring Strategy using Bayesian learning||
||'''Summary'''||Opponent models can help prevent exploitation by determining the type of move of the opponent (selfish, [[BR]] unfortunate, concession, nice, fortunate, silent), and by taking the opponent's preferences into account to[[BR]] increase the chance of acceptance. The mirror strategy (MS) mirrors the behaviour of the opponent, based on a [[BR]]classification of the opponent's move. Nice MS does the same, but adds nice moves: moves which [[BR]]increase the opponent's utility without decreasing ours. Overall the strategy is shown to be effective by [[BR]]comparing the results of testing the strategy first against a random agent and then against the other agents. Also, the[[BR]] distance to the Kalai-Smorodinsky solution and the distance to the Nash point are used as metrics. For [[BR]]future work, the exploitability of MS should be researched.||
||'''Relevance'''||8, interesting application of opponent modelling||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:BtssqMir4RcJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=2 Link]||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]

||'''Title'''||The First Automated Negotiating Agents Competition (ANAC 2010)||
||'''Author(s)'''||T. Baarslag, K. Hindriks, C. Jonker, S. Kraus, R. Lin||
||'''Cited'''||-||
||'''Subject(s)'''||ANAC, overview of multiple agents, opponent models, acceptance conditions||
||'''Summary'''||The ANAC competition models bilateral multi-issue closed negotiations and provides a benchmark for negotiation agents. [[BR]]Opponent models can also be used to identify the type of strategy of the opponent. Interesting agents for further analysis [[BR]]are: IAM(crazy)Haggler, FSEGA (profile learning), and Agent Smith. Issues can be predictable, which means that they [[BR]]have a logical order, or unpredictable, such as colours. This paper also discusses acceptance conditions.||
||'''Relevance'''||5, too global, however interesting citations||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:vKSG_Lm38D0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=3 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Towards a Quality Assessment Method for Learning Preference Profiles in Negotiation||
||'''Author(s)'''||K.V. Hindriks and D. Tykhonov||
||'''Cited'''||6||
||'''Subject(s)'''||Measures for quality of opponent model||
||'''Summary'''||See the section on quality measures in the paper||
||'''Relevance'''||9||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:oxFpfvvuE94J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]

||'''Title'''||Using Similarity Criteria to Make Issue Trade-offs in Automated Negotiations||
||'''Author(s)'''||P. Faratin, C. Sierra, N.R. Jennings||
||'''Cited'''||367||
||'''Subject(s)'''||||
||'''Summary'''||||
||'''Relevance'''||||
||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:xXqv_X0tP9MJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
||'''Cites seen'''||Yes||
[[BR]]

||'''Title'''||Yushu: a Heuristic-Based Agent for Automated Negotiating Competition||
||'''Author(s)'''||B. An and V. Lesser||
||'''Cited'''||-||
||'''Subject(s)'''||ANAC agent, Complexity learning||
||'''Summary'''||One of the interesting features of Yushu is that it tries to measure the competitiveness of the negotiation, which influences its bidding strategy.[[BR]] Details cannot be found in the paper, but this is not relevant for this survey. It also measures time by averaging [[BR]] over all bids; this is used to determine when to accept in panic.||
||'''Relevance'''||4, only estimating the number of rounds is interesting, but obvious||
||'''Bibtex'''||X||
||'''Cites seen'''||Yes||
||'''Processed'''||Yes||
[[BR]]

== Meetings ==