Changes between Initial Version and Version 1 of Papers


     1== Papers ==
     2||'''Title'''||A Framework for Building Intelligent SLA Negotiation Strategies under Time Constraints||
     3||'''Author(s)'''||G.C. Silaghi, L.D. Şerban and C.M. Litan||
     4||'''Cited'''||-||
     5||'''Subject(s)'''||||
     6||'''Summary'''||||
     7||'''Relevance'''||||
     8||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Snu0uoLL6tgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     9||'''Cites seen'''||Yes||
     10[[BR]]
     11
     12||'''Title'''||A Framework for Multi-agent Electronic Marketplaces: Analysis and Classification of Existing Systems ||
     13||'''Author(s)'''||K. Kurbel and I. Loutchko||
     14||'''Cited'''||25||
     15||'''Subject(s)'''||||
     16||'''Summary'''||||
     17||'''Relevance'''||||
     18||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:rVdFnqvBOAMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0&ct=citation&cd=0 Link]||
     19||'''Cites seen'''||Yes||
     20[[BR]]
     21 
     22||'''Title'''||A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce ||
     23||'''Author(s)'''||J.R. Oliver||
     24||'''Cited'''||198||
     25||'''Subject(s)'''||||
     26||'''Summary'''||||
     27||'''Relevance'''||||
     28||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:68RpIHxdsQEJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     29||'''Cites seen'''||Yes||
     30[[BR]]
     31
     32||'''Title'''||AgentFSEGA - Time Constrained Reasoning Model for Bilateral Multi-Issue Negotiation||
     33||'''Author(s)'''||L.D. Serban, G.C. Silaghi, and C.M. Litan||
     34||'''Cited'''||-||
     35||'''Subject(s)'''||Learning issue utility curves by Bayesian learning; Learning issue ordering||
      36||'''Summary'''||The opponent model of FSEGA approximates the ordering of the issues and the utility function of each issue using Bayesian[[BR]] learning. The utility of each issue value is modelled as one of three basic shapes (downhill, uphill, triangular). With[[BR]] each received bid, Bayes' rule updates the probability of each hypothesis; the hypotheses are then combined, weighted by their likelihood,[[BR]] into the utility function of each value, and these in turn are combined into the utility function of the issue. The bidding strategy[[BR]] then uses iso-utility curves together with the opponent model to increase the chance of acceptance (a sketch follows this table).||
     37||'''Relevance'''||8||
     38||'''Bibtex'''||X||
     39||'''Cites seen'''||Yes||
     40[[BR]]
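The Bayesian shape-learning idea above can be illustrated with a minimal sketch (not the authors' implementation); the three shape functions, the Gaussian likelihood and all parameter values below are assumptions made purely for illustration.

{{{#!python
# Hypothetical sketch of Bayesian learning over issue-utility shapes
# (downhill, uphill, triangular), in the spirit of the summary above.
# The likelihood model and all numbers are assumptions, not the paper's.
import math

def downhill(x):   return 1.0 - x             # utility decreases with the value
def uphill(x):     return x                   # utility increases with the value
def triangular(x): return 1.0 - abs(2*x - 1)  # peak in the middle

hypotheses = {"downhill": downhill, "uphill": uphill, "triangular": triangular}
posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}  # uniform prior

def likelihood(predicted_u, expected_u, sigma=0.2):
    """Gaussian likelihood of a bid value whose utility under a hypothesis is
    `predicted_u`, when an assumed concession model expects `expected_u`."""
    return math.exp(-((predicted_u - expected_u) ** 2) / (2 * sigma ** 2))

def update(observed_value, expected_u):
    """One Bayesian update step for a single (normalised) issue value."""
    global posterior
    unnorm = {name: posterior[name] * likelihood(h(observed_value), expected_u)
              for name, h in hypotheses.items()}
    total = sum(unnorm.values())
    posterior = {name: p / total for name, p in unnorm.items()}

def estimated_utility(x):
    """Combine the hypotheses, weighted by their posterior probability."""
    return sum(posterior[name] * h(x) for name, h in hypotheses.items())

# Example: the opponent keeps offering high values early on (assumed close to
# its own optimum), which shifts belief towards the 'uphill' shape.
for observed in [0.9, 0.85, 0.8]:
    update(observed, expected_u=1.0)
print(posterior, estimated_utility(0.5))
}}}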
     41
     42||'''Title'''||Agents that Acquire Negotiation Strategies Using a Game Theoretic Learning Theory||
     43||'''Author(s)'''||N. Eiji Nawa||
     44||'''Cited'''||2||
     45||'''Subject(s)'''||||
     46||'''Summary'''||||
     47||'''Relevance'''||||
     48||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:wGEmownS05MJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     49||'''Cites seen'''||Yes||
     50[[BR]]
     51
     52||'''Title'''||An Adaptive Bilateral Negotiation Model for E-Commerce Settings||
     53||'''Author(s)'''||V. Narayanan and N.R. Jennings||
     54||'''Cited'''||26||
     55||'''Subject(s)'''||||
     56||'''Summary'''||||
     57||'''Relevance'''||||
     58||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:-2t4LW-LK3cJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     59||'''Cites seen'''||Yes||
     60[[BR]]
     61
     62||'''Title'''||An Adaptive Learning Method in Automated Negotiation Based on Artificial Neural Network||
     63||'''Author(s)'''||Z. Zeng, B. Meng, Y. Zeng||
     64||'''Cited'''||4||
     65||'''Subject(s)'''||||
     66||'''Summary'''||||
     67||'''Relevance'''||||
     68||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:SaHRG-BD0RAJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     69||'''Cites seen'''||Yes||
     70[[BR]]
     71
     72||'''Title'''||An Architecture for Negotiating Agents that Learn||
     73||'''Author(s)'''||H.H. Bui, S. Venkatesh, and D. Kieronska||
     74||'''Cited'''||2||
     75||'''Subject(s)'''||||
     76||'''Summary'''||||
     77||'''Relevance'''||||
     78||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:1dvCOIJaG9cJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     79||'''Cites seen'''||Yes||
     80[[BR]]
     81
     82||'''Title'''||An Automated Agent for Bilateral Negotiation with Bounded Rational Agents with Incomplete Information||
     83||'''Author(s)'''||R. Lin, S. Kraus, J. Wilkenfeld, J. Barry||
     84||'''Cited'''||23||
     85||'''Subject(s)'''||||
     86||'''Summary'''||||
     87||'''Relevance'''||||
     88||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:vhcnBvl6XnMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=00 Link]||
     89||'''Cites seen'''||Yes||
     90[[BR]]
     91
      92||'''Title'''||An Evolutionary Learning Approach for Adaptive Negotiation Agents||
     93||'''Author(s)'''||R.Y.K. Lau, M. Tang, O. Wong, S.W. Milliner||
     94||'''Cited'''||19||
     95||'''Subject(s)'''||||
     96||'''Summary'''||||
     97||'''Relevance'''||||
     98||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:WQAKMsZjXk8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     99||'''Cites seen'''||Yes||
     100[[BR]]
     101
     102||'''Title'''||Analysis of Negotiation Dynamics||
     103||'''Author(s)'''||K. Hindriks, C.M. Jonker, D. Tykhonov||
     104||'''Cited'''||5||
     105||'''Subject(s)'''||Strategy evaluation||
      106||'''Summary'''||The process of the negotiation determines the outcome. This work outlines a formal toolbox for analysing [[BR]]the dynamics of a negotiation based on a classification of move types. Boulware is a hard bargaining tactic, whereas [[BR]]Conceder is a soft one. Besides evaluating the outcome, one should also analyse the negotiation dance. This is done by [[BR]]classifying the moves (nice, selfish, concession, etc.) over a trace, which is a list of bids. The percentage of each type of move can then be [[BR]]calculated, and the sensitivity to opponent moves is derived from this measure (see the sketch after this table).||
     107||'''Relevance'''||7, interesting technique for evaluating strategies||
     108||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:p9dR-WdVTQAJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     109||'''Cites seen'''||Yes||
     110||'''Processed'''||Yes||
     111[[BR]]
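As an illustration of the move-type analysis above, here is a minimal sketch (not the paper's toolbox) that classifies each step in a trace by the change in own and opponent utility and computes the distribution of move types; the thresholds and function names are assumptions.

{{{#!python
# Illustrative move classification over a bid trace, loosely following the
# move types mentioned above (selfish, concession, fortunate, unfortunate,
# nice, silent). Utilities of each bid for both sides are assumed known here.
from collections import Counter

def classify_move(d_own, d_opp, eps=1e-3):
    """Classify a single step by the change in own/opponent utility."""
    if abs(d_own) < eps and abs(d_opp) < eps:
        return "silent"
    if abs(d_own) < eps and d_opp > 0:
        return "nice"        # helps the opponent at no cost to ourselves
    if d_own > 0 and d_opp >= 0:
        return "fortunate"
    if d_own > 0 and d_opp < 0:
        return "selfish"
    if d_own <= 0 and d_opp > 0:
        return "concession"
    return "unfortunate"     # both utilities drop

def move_distribution(trace_own, trace_opp):
    """trace_own/trace_opp: utility of each bid in the trace for both agents."""
    moves = [classify_move(b - a, d - c)
             for (a, b), (c, d) in zip(zip(trace_own, trace_own[1:]),
                                       zip(trace_opp, trace_opp[1:]))]
    counts = Counter(moves)
    n = max(len(moves), 1)
    return {move: counts[move] / n for move in counts}

# Example trace: mostly conceding, with one selfish step.
print(move_distribution([0.9, 0.8, 0.85, 0.7], [0.2, 0.35, 0.3, 0.5]))
}}}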
     112
     113||'''Title'''||Anticipating Agent's Negotiation Strategies in an E-marketplace Using Belief Models||
     114||'''Author(s)'''||F. Teuteberg, K. Kurbel||
     115||'''Cited'''||11||
     116||'''Subject(s)'''||||
     117||'''Summary'''||||
     118||'''Relevance'''||||
     119||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:qauKvN1Swx8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     120||'''Cites seen'''||Yes||
     121[[BR]]
     122
     123||'''Title'''||Automated Multi-Attribute Negotiation with Efficient Use of Incomplete Preference Information||
     124||'''Author(s)'''||C. Jonker and V. Robu||
     125||'''Cited'''||44||
     126||'''Subject(s)'''||Mechanism for taking learning and initial information into account in a standard bilateral negotiation model||
      127||'''Summary'''||The classic technique for negotiation with undisclosed preferences is to use a mediator, but[[BR]] can we be sure that the mediator is impartial? The strategy discussed here is for bilateral multi-issue[[BR]] negotiation. A decreasing target-utility curve is assumed, and each bid is calculated to match the current [[BR]]target utility. Each issue has a separate tolerance parameter so that more or less concession can be made on [[BR]]particular issues, while a general tolerance determines the overall speed of concession. For each issue,[[BR]] the amount of concession made towards the opponent's bid is determined by the configured tolerance [[BR]]for that issue. The full formula depends on the opponent's issue weights, which have to be estimated: [[BR]]the weight of each attribute is estimated by comparing the distance between the values of an issue in [[BR]]sequential opponent bids and using this distance as a measure of the attribute's importance. This last [[BR]]step is domain-dependent. In conclusion, the technique works, but it requires tuning for the domain[[BR]] and assumes that the other agent plays a more or less similar concession-based strategy (a sketch follows this table). ||
      128||'''Relevance'''||4, domain-dependent opponent modelling approach for learning the ordering of attributes||
     129||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:fSLXt9dFf4kJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     130||'''Cites seen'''||Yes||
     131||'''Processed'''||Yes||
     132[[BR]]
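A minimal sketch, under assumed formulas, of the two ingredients in the summary: estimating opponent issue weights from how much each issue moves between sequential opponent bids, and conceding per issue according to a configured tolerance. It is not the paper's exact model; all names and numbers are illustrative.

{{{#!python
# Illustrative sketch of (a) estimating opponent issue weights from the
# variation of each issue across sequential opponent bids and (b) conceding
# per issue according to a configured tolerance. Formulas are assumptions.

def estimate_opponent_weights(opponent_bids):
    """Issues whose values barely change across bids are assumed important."""
    issues = opponent_bids[0].keys()
    variation = {i: sum(abs(b2[i] - b1[i])
                        for b1, b2 in zip(opponent_bids, opponent_bids[1:]))
                 for i in issues}
    # Invert and normalise: small variation -> large weight.
    inv = {i: 1.0 / (variation[i] + 1e-6) for i in issues}
    total = sum(inv.values())
    return {i: inv[i] / total for i in issues}

def next_bid(own_bid, opponent_bid, tolerance, general_tolerance=0.1):
    """Move each issue towards the opponent's last bid, more on issues we
    tolerate conceding on, scaled by an overall concession speed."""
    return {i: own_bid[i] + general_tolerance * tolerance[i]
               * (opponent_bid[i] - own_bid[i])
            for i in own_bid}

# Example with two numeric issues (values normalised to [0, 1]).
opp_history = [{"price": 0.2, "delivery": 0.9},
               {"price": 0.25, "delivery": 0.5},
               {"price": 0.3, "delivery": 0.7}]
weights = estimate_opponent_weights(opp_history)
bid = next_bid({"price": 0.9, "delivery": 0.1}, opp_history[-1],
               tolerance={"price": 0.2, "delivery": 0.8})
print(weights, bid)
}}}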
     133
     134||'''Title'''||Bayesian Learning in Bilateral Multi-issue Negotiation and its Application in MAS-based Electronic Commerce||
     135||'''Author(s)'''||J. Li, Y. Cao||
     136||'''Cited'''||6||
     137||'''Subject(s)'''||||
     138||'''Summary'''||||
     139||'''Relevance'''||||
     140||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:QuJqFn4TJaAJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     141||'''Cites seen'''||Yes||
     142[[BR]]
     143
     144||'''Title'''||Bayesian Learning in Negotiation||
     145||'''Author(s)'''||D. Zeng, K. Sycara||
     146||'''Cited'''||355||
     147||'''Subject(s)'''||||
     148||'''Summary'''||||
     149||'''Relevance'''||||
     150||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:OcrAgrlmKdgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     151||'''Cites seen'''||Yes||
     152[[BR]]
     153
     154||'''Title'''||Benefits of Learning in Negotiation||
     155||'''Author(s)'''||D. Zeng, K. Sycara||
     156||'''Cited'''||116||
     157||'''Subject(s)'''||Benefits of learning, Bayesian learning, reservation values||
      158||'''Summary'''|| Growing interest in e-commerce motivates research in automated negotiation, but building intelligent negotiation agents is still[[BR]] an emerging field. In contrast to most negotiation models, a sequential decision model allows for learning. Learning can help to understand[[BR]] human behaviour, but can also lead to better outcomes for the learning party. Bayesian learning of reservation[[BR]] values can be used to determine the zone of agreement for an issue, based on domain knowledge and the bidding interactions.[[BR]] The conclusion for the single-issue case is that learning positively influences bargaining quality and the number of exchanged proposals,[[BR]] and leads to a better compromise if both parties learn. In the proposed setting, learning always performs better (a sketch follows this table).||
     159||'''Relevance'''||8. Strong example of Bayesian learning||
     160||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:omTF-8TbGE4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     161||'''Cites seen'''||Yes||
     162||'''Processed'''||Yes||
     163[[BR]]
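A minimal sketch of Bayesian learning of an opponent's reservation price from its offers, in the spirit of the summary; the linear-concession likelihood model and all numbers are assumptions made for illustration.

{{{#!python
# Illustrative Bayesian update of a buyer's belief about the seller's
# reservation price from the seller's successive offers. The likelihood
# model (offers drift linearly towards the reservation price) is an assumption.
import math

hypotheses = [60, 70, 80, 90, 100]                       # candidate reservation prices
belief = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def likelihood(offer, reservation, t, deadline=10, sigma=5.0):
    """Assume the seller concedes linearly from its first offer towards its
    reservation price as the deadline approaches."""
    first_offer = 120.0
    expected = first_offer + (reservation - first_offer) * (t / deadline)
    return math.exp(-((offer - expected) ** 2) / (2 * sigma ** 2))

def update(offer, t):
    global belief
    unnorm = {h: belief[h] * likelihood(offer, h, t) for h in belief}
    total = sum(unnorm.values())
    belief = {h: p / total for h, p in unnorm.items()}

for t, offer in enumerate([115, 108, 102, 96], start=1):
    update(offer, t)

estimate = sum(h * p for h, p in belief.items())         # expected reservation price
print(belief, estimate)
}}}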
     164
     165||'''Title'''||Bilateral Negotiation with Incomplete and Uncertain Information: A Decision-Theoretic Approach Using a Model of the Opponent||
     166||'''Author(s)'''||C. Mudgal, J. Vassileva||
     167||'''Cited'''||42||
     168||'''Subject(s)'''||||
     169||'''Summary'''||||
     170||'''Relevance'''||||
     171||'''Bibtex'''||||
     172||'''Cites seen'''||Yes||
     173[[BR]]
     174
     175||'''Title'''||Building Automated Negotiation Strategies Enhanced by MLP and GR Neural Networks for Opponent Agent Behaviour Prognosis||
      176||'''Author(s)'''||I. Roussaki, I. Papaioannou, and M. Anagnostou||
     177||'''Cited'''||3||
     178||'''Subject(s)'''||||
     179||'''Summary'''||||
     180||'''Relevance'''||||
     181||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:18qiNH2UInwJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     182||'''Cites seen'''||Yes||
     183[[BR]]
     184
     185||'''Title'''||Comparing Equilibria for Game-Theoretic and Evolutionary Bargaining Models||
     186||'''Author(s)'''||S. Fatima, M. Wooldridge, N.R. Jennings||
     187||'''Cited'''||21||
     188||'''Subject(s)'''||||
     189||'''Summary'''||||
     190||'''Relevance'''||||
     191||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:aLP4CeRMh68J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     192||'''Cites seen'''||Yes||
     193[[BR]]
     194
     195||'''Title'''||Compromising Strategy based on Estimated Maximum Utility for Automated Negotiating Agents||
     196||'''Author(s)'''||S. Kawaguchi, K. Fujita, T. Ito||
     197||'''Cited'''||-||
     198||'''Subject(s)'''||||
     199||'''Summary'''||||
     200||'''Relevance'''||||
     201||'''Bibtex'''||X||
     202||'''Cites seen'''||Yes||
     203[[BR]]
     204
      205||'''Title'''||Determining Successful Negotiation Strategies: An Evolutionary Approach||
     206||'''Author(s)'''||N. Matos, C. Sierra, N.R. Jennings||
     207||'''Cited'''||149||
     208||'''Subject(s)'''||Analysing strengths and weakness of tactics||
      209||'''Summary'''||This work uses an evolutionary approach to find out how agents using particular negotiation strategies fare against each[[BR]] other in a negotiation. A standard bilateral multi-issue negotiation model is used; the issues are continuous[[BR]] over a given range. Three families of tactics are considered: time-dependent, resource-[[BR]]dependent, and behaviour-dependent. The results give a nice overview of which tactic is effective against which [[BR]]opponent, which is very interesting if the opponent is somehow able to determine the type of agent (a sketch of the time-dependent family follows this table).||
      210||'''Relevance'''||8, motivation for learning strategies||
     211||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:jAWqPD9IQ-sJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     212||'''Cites seen'''||Yes||
     213||'''Processed'''||Yes||
     214[[BR]]
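For reference, here is a sketch of the standard polynomial time-dependent tactic family (Boulware/Conceder) whose parameters approaches like the one above typically evolve; the parameter values below are illustrative.

{{{#!python
# Sketch of the classic time-dependent tactic family (Boulware / Conceder).
# The formula is the standard polynomial decision function; the parameter
# values are made up for illustration.

def time_dependent_target(t, deadline, k=0.1, beta=1.0):
    """Fraction of the distance to the reservation value conceded at time t.
    beta < 1 -> Boulware (concede late); beta > 1 -> Conceder (concede early)."""
    return k + (1 - k) * (min(t, deadline) / deadline) ** (1.0 / beta)

def offer(t, deadline, initial, reservation, beta):
    alpha = time_dependent_target(t, deadline, beta=beta)
    return initial + alpha * (reservation - initial)

deadline = 10
for t in range(0, deadline + 1, 2):
    print(t,
          round(offer(t, deadline, initial=100, reservation=60, beta=0.2), 1),  # Boulware
          round(offer(t, deadline, initial=100, reservation=60, beta=5.0), 1))  # Conceder
}}}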
     215
     216||'''Title'''||Facing the Challenge of Human-Agent Negotiations via Effective General Opponent Modeling||
     217||'''Author(s)'''||Y. Oshrat, R. Lin, S. Kraus||
     218||'''Cited'''||19||
     219||'''Subject(s)'''||||
     220||'''Summary'''||||
     221||'''Relevance'''||||
     222||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:D7XLjMbCgQkJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     223||'''Cites seen'''||Yes||
     224[[BR]]
     225
     226||'''Title'''||Genetic Algorithms for Automated Negotiations: A FSM-Based Application Approach||
     227||'''Author(s)'''||M.T. Tu, E. Wolff, W. Lamersdorf||
     228||'''Cited'''||37||
     229||'''Subject(s)'''||||
     230||'''Summary'''||||
     231||'''Relevance'''||||
     232||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:kO_zImqufQMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     233||'''Cites seen'''||Yes||
     234[[BR]]
     235
     236||'''Title'''||IAMhaggler: A Negotiation Agent for Complex Environments||
     237||'''Author(s)'''||C.R. Williams, V. Robu, E.H. Gerding, and N.R. Jennings||
     238||'''Cited'''||-||
     239||'''Subject(s)'''||ANAC, Pareto search, Bayes' rule||
      240||'''Summary'''||IAMhaggler first models the opponent's concession behaviour using non-linear regression; the fitted curve[[BR]] is discounted to obtain the opponent's bid curve, its maximum is located, and an [[BR]] appropriate curve is planned for the agent's own target utility. For domains without unordered issues, Pareto search is [[BR]] used to determine all possible bids matching a target utility, and the bid closest to the best[[BR]] received opponent bid is then selected using the Euclidean distance. For domains with unordered issues, each [[BR]] unordered value is varied, after which the bids that satisfy the target utility are determined. Finally, using Bayes' [[BR]] rule for opponent modelling, the bid that is best for the opponent is chosen (a sketch of the bid-selection step follows this table). ||
      241||'''Relevance'''||8, elegant strategy||
     242||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:dXychgQCiFMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     243||'''Cites seen'''||Yes||
     244[[BR]]
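A minimal sketch of the bid-selection step described above: among own bids near a target utility, pick the one closest (Euclidean distance) to the best received opponent bid. The domain, utility function and names are made up for illustration and are not IAMhaggler's code.

{{{#!python
# Illustrative sketch of one step described above: among own bids close to a
# target utility, choose the one nearest (Euclidean distance) to the best bid
# received from the opponent. The utility function and domain are assumptions.
import itertools, math

values = [i / 10 for i in range(11)]           # each issue normalised to [0, 1]
weights = {"price": 0.6, "delivery": 0.4}      # own issue weights (assumed)

def utility(bid):
    return sum(weights[i] * bid[i] for i in bid)

def euclidean(b1, b2):
    return math.sqrt(sum((b1[i] - b2[i]) ** 2 for i in b1))

def select_bid(target_utility, best_opponent_bid, tolerance=0.02):
    candidates = [dict(zip(weights, combo))
                  for combo in itertools.product(values, repeat=len(weights))]
    near_target = [b for b in candidates
                   if abs(utility(b) - target_utility) <= tolerance]
    return min(near_target, key=lambda b: euclidean(b, best_opponent_bid))

best_opp = {"price": 0.3, "delivery": 0.9}     # best offer seen from the opponent
print(select_bid(0.7, best_opp))
}}}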
     245
     246||'''Title'''||Inferring implicit preferences from negotiation actions||
     247||'''Author(s)'''||A. Restificar and P. Haddawy||
     248||'''Cited'''||10||
     249||'''Subject(s)'''||||
     250||'''Summary'''||||
     251||'''Relevance'''||||
     252||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:m6_7yTkcPBwJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     253||'''Cites seen'''||Yes||
     254[[BR]]
     255
     256||'''Title'''||Integration of Learning, Situational Power and Goal Constraints Into Time-Dependent Electronic Negotiation Agents||
     257||'''Author(s)'''||W.W.H. Mok||
     258||'''Cited'''||-||
     259||'''Subject(s)'''||||
     260||'''Summary'''||||
     261||'''Relevance'''||||
     262||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:58qYQ6xl6vgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=6 Link]||
     263||'''Cites seen'''||Yes||
     264[[BR]]
     265
     266||'''Title'''||Learning Algorithms for Single-instance Electronic Negotiations using the Time-dependent Behavioral Tactic||
     267||'''Author(s)'''||W.W.H Mok and R.P. Sundarraj||
     268||'''Cited'''||17||
     269||'''Subject(s)'''||||
     270||'''Summary'''||||
     271||'''Relevance'''||||
     272||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:9mOSw0JyumEJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     273||'''Cites seen'''||Yes||
     274[[BR]]
     275
     276||'''Title'''||Learning an Agent's Utility Function by Observing Behavior||
     277||'''Author(s)'''||U. Chajewska, D. Koller, D. Ormoneit||
     278||'''Cited'''||54||
     279||'''Subject(s)'''||||
     280||'''Summary'''||||
     281||'''Relevance'''||||
     282||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:JXqC1SLmlPIJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     283||'''Cites seen'''||Yes||
     284[[BR]]
     285
     286
     287||'''Title'''||Learning an Opponent's Preferences to Make Effective Multi-Issue Negotiation Trade-Offs||
     288||'''Author(s)'''||R.M. Coehoorn, N.R. Jennings||
     289||'''Cited'''||78||
     290||'''Subject(s)'''||KDE Learning, Negotiation model, Concession based strategy||
      291||'''Summary'''|| Effective and efficient multi-issue negotiation requires an agent to have some indication of its opponent's preferences [[BR]]over the issues in the domain. Kernel Density Estimation (KDE) is used to estimate the weight that the opponent attaches to each issue. [[BR]]It is assumed that if the value of an issue increases, this is positive for one agent and negative [[BR]]for the other. No assumptions about the relation between time, negotiation history and issue weights are required, in contrast [[BR]]to Bayesian learning. The difference between concessive (counter)offers is used to estimate the issue weights [[BR]](assumption: stronger concessions are made later in the negotiation). Faratin's hill-climbing algorithm augmented with KDE is [[BR]]used to propose the next bid. KDE proved successful on the negotiation model used. Future work entails testing the approach [[BR]]against different opponent strategies and extending it to other negotiation models (see the assumption above). A KDE sketch follows this table. ||
     292||'''Relevance'''||9. KDE learning described in detail. Strong related work section||
     293||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Z79P04-IRS0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     294||'''Cites seen'''||Yes||
     295[[BR]]
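A minimal sketch of the kernel density estimation building block used by this approach; how the per-round weight observations are obtained below is heavily simplified and is not the paper's exact procedure.

{{{#!python
# Minimal Gaussian kernel density estimation (KDE) building block, of the kind
# used above to estimate opponent issue weights; the observations and numbers
# are purely illustrative assumptions.
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(samples, x, bandwidth=0.05):
    """Kernel density estimate at point x from 1-D samples."""
    return sum(gaussian_kernel((x - s) / bandwidth) for s in samples) / (
        len(samples) * bandwidth)

def mode_estimate(samples, grid_steps=100):
    """Take the densest point on a grid over [0, 1] as the weight estimate."""
    grid = [i / grid_steps for i in range(grid_steps + 1)]
    return max(grid, key=lambda x: kde(samples, x))

# Noisy per-round observations of the opponent's weight for the 'price' issue
# (e.g. derived from the relative size of its concessions); values are made up.
observations = [0.55, 0.62, 0.58, 0.60, 0.70, 0.59, 0.61]
print(round(mode_estimate(observations), 2))
}}}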
     296
     297||'''Title'''||Learning Opponents' Preferences in Multi-Object Automated Negotiation||
     298||'''Author(s)'''||S. Buffett and B. Spencer||
     299||'''Cited'''||18||
     300||'''Subject(s)'''||||
     301||'''Summary'''||||
     302||'''Relevance'''||||
     303||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:3k4MYX9X6BcJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     304||'''Cites seen'''||Yes||
     305[[BR]]
     306
     307||'''Title'''||Learning other Agents' Preferences in Multiagent Negotiation using the Bayesian Classifier.||
     308||'''Author(s)'''||H.H. Bui, D. Kieronska, S. Venkatesh||
     309||'''Cited'''||29||
     310||'''Subject(s)'''||||
     311||'''Summary'''||||
     312||'''Relevance'''||||
     313||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:zJTcBPpxfYoJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     314||'''Cites seen'''||Yes||
     315[[BR]]
     316
     317||'''Title'''||Learning to Select Negotiation Strategies in Multi-Agent Meeting Scheduling||
      318||'''Author(s)'''||E. Crawford and M. Veloso||
     319||'''Cited'''||21||
     320||'''Subject(s)'''||||
     321||'''Summary'''||||
     322||'''Relevance'''||||
     323||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:PfWgZlbnox8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     324||'''Cites seen'''||Yes||
     325[[BR]]
     326
     327||'''Title'''||Modelling Agents Behaviour in Automated Negotiation||
     328||'''Author(s)'''||C. Hou||
     329||'''Cited'''||10||
     330||'''Subject(s)'''||||
     331||'''Summary'''||||
     332||'''Relevance'''||||
     333||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:OznI0-O4SlgJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     334||'''Cites seen'''||Yes||
     335[[BR]]
     336
     337||'''Title'''||Modeling Opponent Decision in Repeated One-shot Negotiations||
     338||'''Author(s)'''||S.Saha, A. Biswas, S. Sen||
     339||'''Cited'''||26||
     340||'''Subject(s)'''||||
     341||'''Summary'''||||
     342||'''Relevance'''||||
     343||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:HWyIIq6nlNUJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     344||'''Cites seen'''||Yes||
     345[[BR]]
     346
     347||'''Title'''||Negotiating agents that learn about others' preferences||
     348||'''Author(s)'''||H.H. Bui, D. Kieronska and S. Venkatesh||
     349||'''Cited'''||5||
      350||'''Subject(s)'''||Logic-like representation of the negotiation model, bin-based opponent model, single continuous issue||
      351||'''Summary'''||The method assumes that domain knowledge is available to partition the search space. Each turn, the agents communicate [[BR]]the space where an agreement is possible. Every turn there is a negotiation between all agents to find a common space, meaning [[BR]]that the agents re-communicate a refined space of agreement until agreement is reached. The process continues until [[BR]]a common decision is found (a decision is an element of the space of agreement). A learning algorithm can be used as follows: [[BR]]first, the full domain space is split into zones, each of which is given a uniform probability. This probability is updated per region [[BR]]and per agent based on the received spaces of agreement. When the agents do not agree about the space, the space with [[BR]]maximum support, based on the zone probabilities of each agent, is chosen. This leads to a higher chance of agreement (a sketch follows this table).||
     353||'''Relevance'''||3, domain knowledge required and only considers one issue||
     354||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:8EOwrOyBdv0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     355||'''Cites seen'''||Yes||
     356[[BR]]
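A minimal sketch of the zone-based learning described above: zones with uniform priors, reinforcement of zones covered by each communicated space of agreement, and selection of the zone with maximum combined support. The update rule and numbers are assumptions for illustration.

{{{#!python
# Illustrative sketch of the zone-based learning summarised above. The decision
# space is partitioned into zones with a uniform prior per agent; each
# communicated "space of agreement" boosts the zones it covers; when the
# agents' spaces do not intersect, the zone with maximum combined support is
# proposed. The update rule is an assumption made for illustration.
zones = [(i / 10, (i + 1) / 10) for i in range(10)]   # partition of [0, 1]

def uniform_prior():
    return [1.0 / len(zones)] * len(zones)

def overlaps(zone, interval):
    (lo, hi), (a, b) = zone, interval
    return lo < b and a < hi

def update(belief, agreement_space):
    """Reinforce zones covered by the agent's communicated agreement space."""
    scores = [p * (2.0 if overlaps(z, agreement_space) else 0.5)
              for z, p in zip(zones, belief)]
    total = sum(scores)
    return [s / total for s in scores]

def best_common_zone(beliefs):
    """Zone with maximum support summed over all agents' beliefs."""
    support = [sum(b[i] for b in beliefs) for i in range(len(zones))]
    return zones[max(range(len(zones)), key=lambda i: support[i])]

belief_a, belief_b = uniform_prior(), uniform_prior()
belief_a = update(belief_a, (0.2, 0.6))   # agent A says agreement possible here
belief_b = update(belief_b, (0.5, 0.9))   # agent B says agreement possible here
print(best_common_zone([belief_a, belief_b]))   # zones around the overlap win
}}}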
     357
      358||'''Title'''||Negotiation Decision Functions for Autonomous Agents||
     359||'''Author(s)'''||P. Faratin, C. Sierra, N.R. Jennings||
     360||'''Cited'''||718||
     361||'''Subject(s)'''||||
     362||'''Summary'''||||
     363||'''Relevance'''||||
     364||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:Pmj4ztkTFq4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     365||'''Cites seen'''||Yes||
     366[[BR]]
     367
     368||'''Title'''||Negotiation Dynamics: Analysis, Concession Tactics, and Outcomes||
     369||'''Author(s)'''||K. Hindriks, C.M. Jonker, D. Tykhonov||
     370||'''Cited'''||7||
     371||'''Subject(s)'''||||
     372||'''Summary'''||||
     373||'''Relevance'''||||
     374||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:8lUoyWRsIMMJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     375||'''Cites seen'''||Yes||
     376[[BR]]
     377
     378||'''Title'''||On-Line Incremental Learning in Bilateral Multi-Issue Negotiation||
     379||'''Author(s)'''||V. Soo, C. Hung||
     380||'''Cited'''||18||
     381||'''Subject(s)'''||Online incremental learning using neural networks||
      382||'''Summary'''||This paper discusses using neural networks to learn the opponent model; however, it does not describe how, and the[[BR]] results are not promising. By limiting the number of exchanges, opponent models become more important and [[BR]]lead to better outcomes.||
     384||'''Relevance'''||2, since the paper is not specific enough||
     385||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:G3nExE5HdskJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     386||'''Cites seen'''||Yes||
     387[[BR]]
     388
     389||'''Title'''||On Learning Negotiation Strategies by Artificial Adaptive Agents in Environments of Incomplete Information||
     390||'''Author(s)'''||J.R. Oliver||
     391||'''Cited'''||6||
     392||'''Subject(s)'''||||
     393||'''Summary'''||||
     394||'''Relevance'''||||
     395||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:zk0BU_aG2ZIJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     396||'''Cites seen'''||Yes||
     397[[BR]]
     398
     399||'''Title'''||Opponent Model Estimation in Bilateral Multi-issue Negotiation||
     400||'''Author(s)'''||N. van Galen Last||
     401||'''Cited'''||-||
     402||'''Subject(s)'''||Agent which participated in ANAC2010||
      403||'''Summary'''||Overall not very interesting, but it encouraged me to look into the research fields involved in negotiation.||
     404||'''Relevance'''||2||
     405||'''Bibtex'''||X||
     406||'''Cites seen'''||Yes||
     407||'''Processed'''||Yes||
     408[[BR]]
     409
     410||'''Title'''||Opponent Modelling in Automated Multi-Issue Negotiation Using Bayesian Learning||
     411||'''Author(s)'''||K. Hindriks, D. Tykhonov||
     412||'''Cited'''||33||
     413||'''Subject(s)'''||||
     414||'''Summary'''||||
     415||'''Relevance'''||||
     416||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:BtssqMir4RcJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     417||'''Cites seen'''||Yes||
     418[[BR]]
     419
     420||'''Title'''||Optimal negotiation strategies for agents with incomplete information||
     421||'''Author(s)'''||S.S. Fatima, M. Wooldridge and N.R. Jennings||
     422||'''Cited'''||88||
     423||'''Subject(s)'''||||
     424||'''Summary'''||||
     425||'''Relevance'''||||
     426||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:zfXBf6ObIaEJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     427||'''Cites seen'''||Yes||
     428[[BR]]
     429
     430||'''Title'''||Predicting Agents Tactics in Automated Negotiation||
     431||'''Author(s)'''||C. Hou||
     432||'''Cited'''||12||
     433||'''Subject(s)'''||||
     434||'''Summary'''||||
     435||'''Relevance'''||||
     436||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:4CP7PDlOJL8J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     437||'''Cites seen'''||Yes||
     438[[BR]]
     439
     440||'''Title'''||Predicting partner's behaviour in agent negotiation||
     441||'''Author(s)'''||J. Brzostowski, R. Kowalczyk||
     442||'''Cited'''||16||
     443||'''Subject(s)'''||||
     444||'''Summary'''||||
     445||'''Relevance'''||||
     446||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:3MifmJepz_4J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     447||'''Cites seen'''||Yes||
     448[[BR]]
     449
     450
     451||'''Title'''||The Benefits of Opponent Models in Negotiation||
     452||'''Author(s)'''||K. Hindriks, C.M. Jonker, D. Tykhonov||
     453||'''Cited'''||-||
     454||'''Subject(s)'''||Nice Mirroring Strategy using Bayesian Learning||
      455||'''Summary'''||Opponent models can help prevent exploitation, by determining the type of move the opponent makes (selfish, [[BR]] unfortunate, concession, nice, fortunate, silent), and by taking the opponent's preferences into account to[[BR]] increase the chance of acceptance. The mirror strategy (MS) mirrors the behaviour of the opponent, based on a [[BR]]classification of the opponent's move. Nice MS does the same, but adds a nice move: a move that only [[BR]]increases the opponent's utility without decreasing ours. The strategy is shown to be effective by [[BR]]comparing the results of testing it first against a random agent and then against the other agents. In addition, the[[BR]] distance to the Kalai-Smorodinsky solution and the distance to the Nash point are used as metrics (a sketch of these metrics follows this table). [[BR]]Future work should investigate the exploitability of MS.||
     456||'''Relevance'''||8, interesting application of opponent modelling ||
     457||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:BtssqMir4RcJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=2 Link]||
     458||'''Cites seen'''||Yes||
     459||'''Processed'''||Yes||
     460[[BR]]
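A minimal sketch of the two outcome metrics mentioned above: the distance of a reached agreement to the Nash point and to the Kalai-Smorodinsky point, over a made-up outcome space. The Kalai-Smorodinsky point is only approximated here and all values are illustrative.

{{{#!python
# Illustrative computation of two outcome-quality metrics: distance of an
# agreement to the Nash point (maximum utility product) and to the
# Kalai-Smorodinsky point (approximated as the Pareto outcome with the most
# balanced relative utilities). The outcome space is made up.
import math

# Each outcome: (utility for agent A, utility for agent B)
outcomes = [(0.9, 0.2), (0.8, 0.5), (0.7, 0.6), (0.5, 0.8), (0.3, 0.9)]

def pareto_front(points):
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]

def nash_point(points):
    return max(points, key=lambda p: p[0] * p[1])

def kalai_smorodinsky_point(points):
    """Approximation: the Pareto outcome with the most balanced ratio of
    utilities relative to each agent's maximum achievable utility."""
    max_a = max(p[0] for p in points)
    max_b = max(p[1] for p in points)
    return min(pareto_front(points),
               key=lambda p: abs(p[0] / max_a - p[1] / max_b))

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

agreement = (0.8, 0.5)
print(distance(agreement, nash_point(outcomes)),
      distance(agreement, kalai_smorodinsky_point(outcomes)))
}}}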
     461
     462||'''Title'''||The First Automated Negotiating Agents Competition (ANAC 2010)||
     463||'''Author(s)'''||T. Baarslag, K. Hindriks, C. Jonker, S. Kraus, R. Lin||
     464||'''Cited'''||-||
     465||'''Subject(s)'''||ANAC, overview multiple agents, opponent models, acceptance conditions||
      466||'''Summary'''||The ANAC competition models bilateral multi-issue closed negotiations and provides a benchmark for negotiation agents. [[BR]]Opponent models can also be used to identify the type of strategy of the opponent. Interesting agents for further analysis [[BR]]are IAM(crazy)Haggler, FSEGA (profile learning), and Agent Smith. Issues can be predictable, which means that they [[BR]]have a logical order, or unpredictable, such as colors. This paper also discusses acceptance conditions.||
      467||'''Relevance'''||5, too high-level, but contains interesting citations||
     468||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:vKSG_Lm38D0J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=3 Link]||
     469||'''Cites seen'''||Yes||
     470[[BR]]
     471
     472||'''Title'''||Towards a Quality Assessment Method for Learning Preference Profiles in Negotiation||
     473||'''Author(s)'''||K.V. Hindriks and D. Tykhonov||
     474||'''Cited'''||6||
     475||'''Subject(s)'''||Measures for quality of opponent model||
     476||'''Summary'''||See section on quality measures in paper||
     477||'''Relevance'''||9||
     478||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:oxFpfvvuE94J:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     479||'''Cites seen'''||Yes||
     480||'''Processed'''||Yes||
     481[[BR]]
     482
     483||'''Title'''||Using Similarity Criteria to Make Issue Trade-offs in Automated Negotiations||
     484||'''Author(s)'''||P. Faratin, C. Sierra, N.R. Jennings||
     485||'''Cited'''||367||
     486||'''Subject(s)'''||||
     487||'''Summary'''||||
     488||'''Relevance'''||||
     489||'''Bibtex'''||[http://scholar.google.nl/scholar.bib?q=info:xXqv_X0tP9MJ:scholar.google.com/&output=citation&hl=nl&as_sdt=0,5&ct=citation&cd=0 Link]||
     490||'''Cites seen'''||Yes||
     491[[BR]]
     492
     493||'''Title'''||Yushu: a Heuristic-Based Agent for Automated Negotiating Competition||
     494||'''Author(s)'''||B. An and V. Lesser||
     495||'''Cited'''||-||
     496||'''Subject(s)'''||ANAC agent, Complexity learning||
      497||'''Summary'''||One of the interesting features of Yushu is that it tries to measure the competitiveness of the negotiation, which influences its bidding strategy.[[BR]] Details cannot be found in the paper, but this is not relevant for this survey. Yushu also estimates the time per round by averaging [[BR]] over all exchanged bids; this is used to determine when to accept in panic (a sketch follows this table).||
      498||'''Relevance'''||4, only the estimation of the number of rounds is interesting, but it is obvious||
     499||'''Bibtex'''||X||
     500||'''Cites seen'''||Yes||
     501||'''Processed'''||Yes||
     502[[BR]]
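A minimal sketch of the round-estimation idea above: average the time per exchanged bid to estimate how many rounds remain, and accept "in panic" when only a few remain. The thresholds and names are assumptions, not Yushu's actual rules.

{{{#!python
# Illustrative sketch of round estimation and panic acceptance. All thresholds
# and parameter values are assumptions for illustration.
def average_round_time(bid_timestamps):
    deltas = [b - a for a, b in zip(bid_timestamps, bid_timestamps[1:])]
    return sum(deltas) / len(deltas)

def rounds_remaining(now, deadline, bid_timestamps):
    return (deadline - now) / average_round_time(bid_timestamps)

def should_panic_accept(offer_utility, reservation_utility, now, deadline,
                        bid_timestamps, panic_rounds=2):
    """Accept any offer above the reservation utility once only a couple of
    rounds are estimated to remain before the deadline."""
    return (rounds_remaining(now, deadline, bid_timestamps) <= panic_rounds
            and offer_utility >= reservation_utility)

timestamps = [0.0, 1.2, 2.3, 3.6, 4.7]        # seconds at which bids arrived
print(should_panic_accept(0.55, 0.5, now=28.0, deadline=30.0,
                          bid_timestamps=timestamps))
}}}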